Amazon Aurora Migration Handbook
July 2020

This paper has been archived. For the latest Amazon Aurora migration content, refer to: https://d1.awsstatic.com/whitepapers/RDS/Migrating your databases to Amazon Aurora.pdf

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Database Migration Considerations
Migration Phases
Features and Compatibility
Performance
Cost
Availability and Durability
Planning and Testing a Database Migration
Homogeneous Migrations
Summary of Available Migration Methods
Migrating Large Databases to Amazon Aurora
Partition and Shard Consolidation on Amazon Aurora
MySQL and MySQL-Compatible Migration Options at a Glance
Migrating from Amazon RDS for MySQL
Migrating from MySQL-Compatible Databases
Heterogeneous Migrations
Schema Migration
Data Migration
Example Migration Scenarios
Self-Managed Homogeneous Migrations
Multi-Threaded Migration Using mydumper and myloader
Heterogeneous Migrations
Testing and Cutover
Migration Testing
Cutover
Troubleshooting
Troubleshooting MySQL-Specific Issues
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

This paper outlines the best practices for planning, executing, and troubleshooting database migrations from MySQL-compatible and non-MySQL-compatible database products to Amazon Aurora. It also teaches Amazon Aurora database administrators how to diagnose and troubleshoot common migration and replication errors.

Introduction

For decades, traditional relational databases have been the primary choice for data storage and persistence. These database systems continue to rely on monolithic architectures and were not designed to take advantage of cloud infrastructure. These monolithic architectures present many challenges, particularly in areas such as cost, flexibility, and availability.
To address these challenges, AWS redesigned the relational database for cloud infrastructure and introduced Amazon Aurora.

Amazon Aurora is a MySQL-compatible relational database engine that combines the speed, availability, and security of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora provides up to five times better performance than MySQL and comparable performance to high-end commercial databases. Amazon Aurora is priced at one-tenth the cost of commercial engines.

Amazon Aurora is available through the Amazon Relational Database Service (Amazon RDS) platform. Like other Amazon RDS databases, Aurora is a fully managed database service. With the Amazon RDS platform, most database management tasks, such as hardware provisioning, software patching, setup, configuration, monitoring, and backup, are completely automated.

Amazon Aurora is built for mission-critical workloads and is highly available by default. An Aurora database cluster spans multiple Availability Zones (AZs) in a region, providing out-of-the-box durability and fault tolerance for your data across physical data centers. An Availability Zone is composed of one or more highly available data centers operated by Amazon. AZs are isolated from each other and are connected through low-latency links. Each segment of your database volume is replicated six times across these AZs.

Aurora cluster volumes automatically grow as the amount of data in your database increases, with no performance or availability impact, so there is no need to estimate and provision large amounts of database storage ahead of time. An Aurora cluster volume can grow to a maximum size of 64 terabytes (TB). You are only charged for the space that you use in an Aurora cluster volume.

Aurora's automated backup capability supports point-in-time recovery of your data, enabling you to restore your database to any second during your retention period, up to the last five minutes. Automated backups are stored in Amazon Simple Storage Service (Amazon S3), which is designed for 99.999999999% durability. Amazon Aurora backups are automatic, incremental, and continuous, and have no impact on database performance.

For applications that need read-only replicas, you can create up to 15 Aurora Replicas per Aurora database with very low replica lag. These replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to perform writes at the replica nodes.

Amazon Aurora is highly secure and allows you to encrypt your databases using keys that you create and control through AWS Key Management Service (AWS KMS). On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are the automated backups, snapshots, and replicas in the same cluster. Amazon Aurora uses SSL (AES-256) to secure data in transit. For a complete list of Aurora features, see Amazon Aurora.

Given the rich feature set and cost-effectiveness of Amazon Aurora, it is increasingly viewed as the go-to database for mission-critical applications.

Database Migration Considerations

A database represents a critical component in the architecture of most applications. Migrating the database to a new platform is a significant event in an application's lifecycle and may have an impact on application functionality, performance, and reliability.
You should take a few important considerations into account before embarking on your first migration project to Amazon Aurora.

Migrations are among the most time-consuming and critical tasks handled by database administrators. Although the task has become easier with the advent of managed migration services such as AWS Database Migration Service, large-scale database migrations still require adequate planning and execution to meet strict compatibility and performance requirements.

Migration Phases

Because database migrations tend to be complex, we advocate taking a phased, iterative approach.

Figure 1: Migration phases

This paper examines the following major contributors to the success of every database migration project:

• Factors that justify the migration to Amazon Aurora, such as compatibility, performance, cost, and high availability and durability
• Best practices for choosing the optimal migration method
• Best practices for planning and executing a migration
• Migration troubleshooting hints

This section discusses important considerations that apply to most database migration projects. For an extended discussion of related topics, see the Amazon Web Services (AWS) whitepaper Migrating Your Databases to Amazon Aurora.

Features and Compatibility

Although most applications can be architected to work with many relational database engines, you should make sure that your application works with Amazon Aurora. Amazon Aurora is designed to be wire-compatible with MySQL 5.5, 5.6, 5.7, and 8.0. Therefore, most of the code, applications, drivers, and tools that are used today with MySQL databases can be used with Aurora with little or no change. However, certain MySQL features, like the MyISAM storage engine, are not available with Amazon Aurora. Also, due to the managed nature of the Aurora service, SSH access to database nodes is restricted, which may affect your ability to install third-party tools or plugins on the database host. For more details, see Aurora on Amazon RDS in the Amazon Relational Database Service (Amazon RDS) User Guide.

Performance

Performance is often the key motivation behind database migrations. However, deploying your database on Amazon Aurora can be beneficial even if your applications don't have performance issues. For example, Amazon Aurora scalability features can greatly reduce the amount of engineering effort that is required to prepare your database platform for future traffic growth.

You should include benchmarks and performance evaluations in every migration project, and many successful database migration projects start with performance evaluations of the new database platform. Although the RDS Aurora Performance Assessment Benchmarking paper gives you a decent idea of overall database performance, these benchmarks do not emulate the data access patterns of your applications. For more useful results, test the database performance for time-sensitive workloads by running your queries (or a subset of your queries) on the new platform directly.
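If you want a quick, generic baseline before replaying your own queries, a synthetic benchmark such as sysbench can be pointed at an Aurora cluster endpoint. The following is a minimal sketch only: the endpoint, credentials, schema name, table sizing, thread count, and run time are placeholders, and the options shown assume sysbench 1.0 or later with the MySQL driver installed.

# Create a test schema on the Aurora cluster first, for example: CREATE DATABASE sbtest;
# Prepare the test tables (placeholder endpoint and credentials).
sysbench oltp_read_write --db-driver=mysql \
  --mysql-host=<aurora_cluster_endpoint> \
  --mysql-user=<user> --mysql-password=<password> \
  --mysql-db=sbtest --tables=10 --table-size=1000000 \
  prepare

# Run a 10-minute read/write test with 64 concurrent client threads.
sysbench oltp_read_write --db-driver=mysql \
  --mysql-host=<aurora_cluster_endpoint> \
  --mysql-user=<user> --mysql-password=<password> \
  --mysql-db=sbtest --tables=10 --table-size=1000000 \
  --threads=64 --time=600 run

Synthetic numbers like these are only a starting point; the strategies that follow focus on testing with your actual workload.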
Consider these strategies:

• If your current database is MySQL, migrate to Amazon Aurora with downtime and performance test your database with a test or staging version of your application, or by replaying the production workload.
• If you are on a non-MySQL-compliant engine, you can selectively copy the busiest tables to Amazon Aurora and test your queries for those tables. This gives you a good starting point. Of course, testing after complete data migration will provide a full picture of the real-world performance of your application on the new platform.

Amazon Aurora delivers comparable performance to commercial engines and a significant improvement over MySQL performance. It does this by tightly integrating the database engine with an SSD-based virtualized storage layer designed for database workloads. This reduces writes to the storage system, minimizes lock contention, and eliminates delays created by database process threads. Our tests with SysBench on r3.8xlarge instances show that Amazon Aurora delivers over 585,000 reads per second and 107,000 writes per second, five times higher than MySQL running the same benchmark on the same hardware.

One area where Amazon Aurora significantly improves upon traditional MySQL is highly concurrent workloads. In order to maximize your workload's throughput on Amazon Aurora, we recommend architecting your applications to drive a large number of concurrent queries.

Cost

Amazon Aurora provides consistent high performance together with the security, availability, and reliability of a commercial database at one-tenth the cost. Owning and running databases comes with associated costs. Before planning a database migration, an analysis of the total cost of ownership (TCO) of the new database platform is imperative. Migration to a new database platform should ideally lower the total cost of ownership while providing your applications with similar or better features.

If you are running an open-source database engine (MySQL, Postgres), your costs are largely related to hardware, server management, and database management activities. However, if you are running a commercial database engine (Oracle, SQL Server, DB2, etc.), a significant portion of your cost is database licensing. Amazon Aurora can even be more cost-efficient than open-source databases because its high scalability helps you reduce the number of database instances that are required to handle the same workload. For more details, see the Amazon RDS for Aurora Pricing page.

Availability and Durability

High availability and disaster recovery are important considerations for databases. Your application may already have very strict recovery time objective (RTO) and recovery point objective (RPO) requirements. Amazon Aurora can help you meet or exceed your availability goals with the following components:

1. Read replicas: Increase read throughput to support high-volume application requests by creating up to 15 Aurora Replicas per database. Amazon Aurora Replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to perform writes at the replica nodes. This frees up more processing power to serve read requests and reduces the replica lag time, often down to single-digit milliseconds.
Aurora provides a reader endpoint so the application can connect without having to keep track of replicas as they are added and removed. Aurora also supports auto scaling, where it automatically adds and removes replicas in response to changes in performance metrics that you specify. Aurora supports cross-region read replicas. Cross-region replicas provide fast local reads to your users, and each region can have an additional 15 Aurora Replicas to further scale local reads.

2. Global Database: You can choose between Global Database, which provides the best replication performance, and traditional binlog-based replication. You can also set up your own binlog replication with external MySQL databases. Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.

3. Multi-AZ: Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single AWS Region, regardless of whether the instances in the DB cluster span multiple Availability Zones. For more information on Aurora, see Managing an Amazon Aurora DB Cluster. When data is written to the primary DB instance, Aurora synchronously replicates the data across Availability Zones to six storage nodes associated with your cluster volume. Doing so provides data redundancy, eliminates I/O freezes, and minimizes latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance and help protect your databases against failure and Availability Zone disruption.

For more information about durability and availability features in Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

Planning and Testing a Database Migration

After you determine that Amazon Aurora is the right fit for your application, the next step is to decide on a migration approach and create a database migration plan. Here are the suggested high-level steps:

1. Review the available migration techniques described in this document and choose one that satisfies your requirements.
2. Prepare a migration plan in the form of a step-by-step checklist. A checklist ensures that all migration steps are executed in the correct order and that the migration process flow can be controlled (e.g., suspended or resumed) without the risk of important steps being missed.
3. Prepare a shadow checklist with rollback procedures. Ideally, you should be able to roll the migration back to a known consistent state from any point in the migration checklist.
4. Use the checklist to perform a test migration, and take note of the time required to complete each step. If any missing steps are identified, add them to the checklist. If any issues are identified during the test migration, address them and rerun the test migration.
5. Test all rollback procedures. If any rollback procedure has not been tested successfully, assume that it will not work.
6. After you complete the test migration and become fully comfortable with the migration plan, execute the migration.
Homogeneous Migrations

Amazon Aurora was designed as a drop-in replacement for MySQL 5.6. It offers a wide range of options for homogeneous migrations (i.e., migrations from MySQL and MySQL-compatible databases).

Summary of Available Migration Methods

This section lists common migration sources and the migration methods available to them, in order of preference. Detailed descriptions, step-by-step instructions, and tips for advanced migration scenarios are available in subsequent sections. A widely adopted method is to build an Aurora Read Replica that is asynchronously replicated from a source master running on Amazon RDS or on a self-managed MySQL database.

Figure 2: Common migration sources and migration methods for Amazon Aurora

Amazon RDS Snapshot Migration

Compatible sources:
• Amazon RDS for MySQL 5.6
• Amazon RDS for MySQL 5.1 and 5.5 (after upgrading to RDS for MySQL 5.6)

Feature highlights:
• Managed, point-and-click service available through the AWS Management Console
• Best migration speed and ease of use of all migration methods
• Can be used with binary log replication for near-zero migration downtime

For details, see Migrating Data from a MySQL DB Instance to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.

Percona XtraBackup

Compatible sources and limitations:
• On-premises or self-managed MySQL 5.6 databases, including those running on Amazon EC2, can be migrated with near-zero downtime
• You can't restore into an existing RDS instance using this method
• The total size is limited to 6 TB
• User accounts, functions, and stored procedures are not imported automatically

Feature highlights:
• Managed backup ingestion from Percona XtraBackup files stored in an Amazon Simple Storage Service (Amazon S3) bucket
• High performance
• Can be used with binary log replication for near-zero migration downtime

For details, see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide.

Self-Managed Export/Import

Compatible sources:
• MySQL and MySQL-compatible databases such as MySQL, MariaDB, or Percona Server, including managed servers such as Amazon RDS for MySQL or MariaDB
• Non-MySQL-compatible databases

DMS Migration

Compatible sources:
• MySQL-compatible and non-MySQL-compatible databases

Feature highlights:
• Supports heterogeneous and homogeneous migrations
• Managed, point-and-click data migration service available through the AWS Management Console
• Schemas must be migrated separately
• Supports CDC replication for near-zero migration downtime

For details, see What Is AWS Database Migration Service? in the AWS DMS User Guide.
For a heterogeneous migration, where you are migrating from a database engine other than MySQL to a MySQL database, AWS DMS is almost always the best migration tool to use. But for a homogeneous migration, where you are migrating from a MySQL database to a MySQL database, native tools can be more effective.

Using Any MySQL-Compatible Database as a Source for AWS DMS

Before you begin to work with a MySQL database as a source for AWS DMS, make sure that you have the following prerequisites. These prerequisites apply to either self-managed or Amazon-managed sources. You must have an account for AWS DMS that has the Replication Admin role. The role needs the following privileges:

• Replication Client: This privilege is required for change data capture (CDC) tasks only. In other words, full-load-only tasks don't require this privilege.
• Replication Slave: This privilege is required for change data capture (CDC) tasks only. In other words, full-load-only tasks don't require this privilege.
• Super: This privilege is required only in MySQL versions before 5.6.6.

DMS highlights for non-MySQL-compatible sources:

• Requires manual schema conversion from the source database format into a MySQL-compatible format
• Data migration can be performed manually using a universal data format such as comma-separated values (CSV)
• Change data capture (CDC) replication might be possible with third-party tools for near-zero migration downtime

Migrating Large Databases to Amazon Aurora

Migration of large datasets presents unique challenges in every database migration project. Many successful large database migration projects use a combination of the following strategies:

• Migration with continuous replication: Large databases typically have extended downtime requirements while moving data from source to target. To reduce the downtime, you can first load baseline data from source to target and then enable replication (using MySQL native tools, AWS DMS, or third-party tools) for changes to catch up.
• Copy static tables first: If your database relies on large static tables with reference data, you may migrate these large tables to the target database before migrating your active dataset. You can leverage AWS DMS to copy tables selectively, or export and import these tables manually.
• Multiphase migration: Migration of a large database with thousands of tables can be broken down into multiple phases. For example, you may move a set of tables with no cross-join queries every weekend until the source database is fully migrated to the target database. Note that in order to achieve this, you need to make changes in your application to connect to two databases simultaneously while your dataset is on two distinct nodes. Although this is not a common migration pattern, it is an option nonetheless.
• Database cleanup: Many large databases contain data and tables that remain unused. In many cases, developers and DBAs keep backup copies of tables in the same database, or they simply forget to drop unused tables. Whatever the reason, a database migration project provides an opportunity to clean up the existing database before the migration. If some tables are not being used, you might either drop them or archive them to another database. You might also delete old data from large tables or archive that data to flat files. A query that can help identify cleanup candidates by size appears after this list.
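As a starting point for the cleanup assessment, you can list the largest tables and their last update times from information_schema on the source MySQL database. This is a minimal, read-only sketch; the 20-row limit and the decision of what counts as "unused" are assumptions you should adapt to your environment.

-- List the 20 largest tables with approximate size and last update time.
-- Size and update_time figures from information_schema are estimates, especially for InnoDB.
SELECT table_schema,
       table_name,
       engine,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS approx_size_gb,
       update_time
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
ORDER BY (data_length + index_length) DESC
LIMIT 20;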
Partition and Shard Consolidation on Amazon Aurora

If you are running multiple shards or functional partitions of your database to achieve high performance, you have an opportunity to consolidate these partitions or shards on a single Aurora database. A single Amazon Aurora instance can scale up to 64 TB, supports thousands of tables, and supports a significantly higher number of reads and writes than a standard MySQL database. Consolidating these partitions on a single Aurora instance not only reduces the total cost of ownership and simplifies database management, but it also significantly improves the performance of cross-partition queries.

• Functional partitions: Functional partitioning means dedicating different nodes to different tasks. For example, in an e-commerce application, you might have one database node serving product catalog data and another database node capturing and processing orders. As a result, these partitions usually have distinct, non-overlapping schemas.
  o Consolidation strategy: Migrate each functional partition as a distinct schema to your target Aurora instance. If your source database is MySQL compliant, use native MySQL tools to migrate the schema, and then use AWS DMS to migrate the data, either one time or continuously using replication (a brief native-tools sketch follows this list). If your source database is non-MySQL compliant, use the AWS Schema Conversion Tool to migrate the schemas to Aurora, and use AWS DMS for a one-time load or continuous replication.
• Data shards: If you have the same schema with distinct sets of data across multiple nodes, you are leveraging database sharding. For example, a high-traffic blogging service may shard user activity and data across multiple database shards while keeping the same table schema.
  o Consolidation strategy: Since all shards share the same database schema, you only need to create the target schema once. If you are using a MySQL-compliant database, use native tools to migrate the database schema to Aurora. If you are using a non-MySQL database, use the AWS Schema Conversion Tool to migrate the database schema to Aurora. Once the database schema has been migrated, it is best to stop writes to the database shards and use native tools or an AWS DMS one-time data load to migrate an individual shard to Aurora. If writes to the application cannot be stopped for an extended period, you might still use AWS DMS with replication, but only after proper planning and testing.
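For MySQL-compliant sources, the schema (and, for small datasets, the data) of each functional partition can be moved into its own schema on a single Aurora cluster using native tools. The following is a rough sketch only: the host names, credentials, and schema names (catalog, orders) are hypothetical, and large or busy partitions are better served by the AWS DMS approach described above.

# Copy the "catalog" partition from its dedicated node into the consolidated Aurora cluster.
mysqldump --host=catalog-node.example.com --user=<user> --password=<password> \
  --databases catalog --single-transaction --routines --triggers \
  | mysql --host=<aurora_cluster_endpoint> --user=<aurora_master_user> --password=<password>

# Repeat for the "orders" partition; each partition lands in its own schema on the same cluster.
mysqldump --host=orders-node.example.com --user=<user> --password=<password> \
  --databases orders --single-transaction --routines --triggers \
  | mysql --host=<aurora_cluster_endpoint> --user=<aurora_master_user> --password=<password>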
MySQL and MySQL-Compatible Migration Options at a Glance

Amazon RDS MySQL
  Migration with downtime:
    Option 1: RDS snapshot migration
    Option 2: Manual migration using native tools
    Option 3: Schema migration using native tools and data load using AWS DMS
  Near-zero downtime migration:
    Option 1: Migration using native tools + binlog replication
    Option 2: RDS snapshot migration + binlog replication
    Option 3: Schema migration using native tools + AWS DMS for data movement

MySQL on Amazon EC2 or on-premises
  Migration with downtime:
    Option 1: Schema migration with native tools + AWS DMS for data load
  Near-zero downtime migration:
    Option 1: Schema migration using native tools + AWS DMS to move data

Oracle/SQL Server
  Migration with downtime:
    Option 1: AWS Schema Conversion Tool + AWS DMS (recommended)
    Option 2: Manual or third-party tool for schema conversion + manual or third-party data load into the target
  Near-zero downtime migration:
    Option 1: AWS Schema Conversion Tool + AWS DMS (recommended)
    Option 2: Manual or third-party tool for schema conversion

Migrating from Amazon RDS for MySQL

If you are migrating from an RDS MySQL 5.6 database (DB) instance, the recommended approach is to use the snapshot migration feature. Snapshot migration is a fully managed, point-and-click feature that is available through the AWS Management Console. You can use it to migrate an RDS MySQL 5.6 DB instance snapshot into a new Aurora DB cluster. It is the fastest and easiest to use of all the migration methods described in this document. For more information about the snapshot migration feature, see Migrating Data to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.

This section provides ideas for projects that use the snapshot migration feature. The list-style layout in our example instructions can help you prepare your own migration checklist.

Estimating Space Requirements for Snapshot Migration

When you migrate a snapshot of a MySQL DB instance to an Aurora DB cluster, Aurora uses an Amazon Elastic Block Store (Amazon EBS) volume to format the data from the snapshot before migrating it. There are some cases where additional space is needed to format the data for migration. The two features that can potentially cause space issues during migration are MyISAM tables and the ROW_FORMAT=COMPRESSED option. If you are not using either of these features in your source database, then you can skip this section because you should not have space issues.

During migration, MyISAM tables are converted to InnoDB, and any compressed tables are uncompressed. Consequently, there must be adequate room for the additional copies of any such tables. The size of the migration volume is based on the allocated size of the source MySQL database that the snapshot was made from. Therefore, if you have MyISAM or compressed tables that make up a small percentage of the overall database size, and there is available space in the original database, then migration should succeed without encountering any space issues. However, if the original database would not have enough room to store a copy of converted MyISAM tables as well as another (uncompressed) copy of compressed tables, then the migration volume will not be big enough. In this situation, you would need to modify the source Amazon RDS MySQL database to increase the database size allocation to make room for the additional copies of these tables, take a new snapshot of the database, and then migrate the new snapshot.
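To check whether the source database is affected, you can look up table engines and row formats in information_schema and, if you choose to convert ahead of the migration, rebuild the affected tables. This is a sketch under the assumption that the conversion is run against a staging copy (for example, an instance restored from a production snapshot), not against production; the table name in the ALTER statement is a placeholder.

-- Find tables that would need extra space during snapshot migration.
SELECT table_schema, table_name, engine, row_format,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS approx_size_gb
FROM information_schema.tables
WHERE engine = 'MyISAM'
   OR row_format = 'Compressed';

-- Optionally convert a table ahead of migration (this rewrites the table; test on a staging copy first).
ALTER TABLE myschema.example_table ENGINE=InnoDB ROW_FORMAT=DEFAULT;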
When migrating data into your DB cluster, observe the following guidelines and limitations:

• Although Amazon Aurora supports up to 64 TB of storage, the process of migrating a snapshot into an Aurora DB cluster is limited by the size of the Amazon EBS volume of the snapshot, and is therefore limited to a maximum size of 6 TB. Non-MyISAM tables in the source database can be up to 6 TB in size. However, due to additional space requirements during conversion, make sure that none of the MyISAM and compressed tables being migrated from your MySQL DB instance exceed 3 TB in size. For more information, see Migrating Data from an Amazon RDS MySQL DB Instance to an Amazon Aurora MySQL DB Cluster.

You might want to modify your database schema (convert MyISAM tables to InnoDB and remove ROW_FORMAT=COMPRESSED) prior to migrating it into Amazon Aurora. This can be helpful in the following cases:

• You want to speed up the migration process.
• You are unsure of how much space you need to provision.
• You have attempted to migrate your data and the migration has failed due to a lack of provisioned space.

Make sure that you are not making these changes in your production Amazon RDS MySQL database, but rather on a database instance that was restored from your production snapshot. For more details on doing this, see Reducing the Amount of Space Required to Migrate Data into Amazon Aurora in the Amazon RDS User Guide.

The naming conventions used in this section are as follows:

• Source RDS DB instance refers to the RDS MySQL 5.6 DB instance that you are migrating from.
• Target Aurora DB cluster refers to the Aurora DB cluster that you are migrating to.

Migrating with Downtime

When migration downtime is acceptable, you can use the following high-level procedure to migrate an RDS MySQL 5.6 DB instance to Amazon Aurora (an AWS CLI sketch of the snapshot steps follows the list):

1. Stop all write activity against the source RDS DB instance. Database downtime begins here.
2. Take a snapshot of the source RDS DB instance.
3. Wait until the snapshot shows as Available in the AWS Management Console.
4. Use the AWS Management Console to migrate the snapshot to a new Aurora DB cluster. For instructions, see Migrating Data to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.
5. Wait until the snapshot migration finishes and the target Aurora DB cluster enters the Available state. The time to migrate a snapshot primarily depends on the size of the database. You can determine it ahead of the production migration by running a test migration.
6. Configure applications to connect to the newly created target Aurora DB cluster instead of the source RDS DB instance.
7. Resume write activity against the target Aurora DB cluster. Database downtime ends here.
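If you prefer to script steps 2 through 5 instead of using the console, the same flow can be driven from the AWS CLI. This is a hedged sketch: the identifiers are placeholders, the instance class is only an example, and you should confirm the option names (and whether the MySQL DB snapshot must be referenced by its ARN) against the current CLI documentation for your region and engine version.

# Steps 2-3: snapshot the source RDS MySQL instance and wait for it to become available.
aws rds create-db-snapshot \
  --db-instance-identifier source-mysql56-instance \
  --db-snapshot-identifier source-mysql56-premigration
aws rds wait db-snapshot-available \
  --db-snapshot-identifier source-mysql56-premigration

# Step 4: migrate the snapshot into a new Aurora DB cluster.
aws rds restore-db-cluster-from-snapshot \
  --db-cluster-identifier target-aurora-cluster \
  --snapshot-identifier source-mysql56-premigration \
  --engine aurora

# Step 5: add a primary instance to the cluster and wait for it to become available.
aws rds create-db-instance \
  --db-instance-identifier target-aurora-instance \
  --db-cluster-identifier target-aurora-cluster \
  --db-instance-class db.r3.large \
  --engine aurora
aws rds wait db-instance-available \
  --db-instance-identifier target-aurora-instance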
Migrating with Near-Zero Downtime

If prolonged migration downtime is not acceptable, you can perform a near-zero downtime migration through a combination of snapshot migration and binary log replication. The high-level procedure is as follows (a sketch of the replication commands used in steps 7 and 12 follows the list):

1. On the source RDS DB instance, ensure that automated backups are enabled.
2. Create a Read Replica of the source RDS DB instance.
3. After you create the Read Replica, manually stop replication and obtain the binary log coordinates.
4. Take a snapshot of the Read Replica.
5. Use the AWS Management Console to migrate the Read Replica snapshot to a new Aurora DB cluster.
6. Wait until the snapshot migration finishes and the target Aurora DB cluster enters the Available state.
7. On the target Aurora DB cluster, configure binary log replication from the source RDS DB instance using the binary log coordinates that you obtained in step 3.
8. Wait for the replication to catch up, that is, for the replication lag to reach zero.
9. Begin the cutover by stopping all write activity against the source RDS DB instance. Application downtime begins here.
10. Verify that there is no outstanding replication lag, and then configure applications to connect to the newly created target Aurora DB cluster instead of the source RDS DB instance.
11. Complete the cutover by resuming write activity. Application downtime ends here.
12. Terminate replication between the source RDS DB instance and the target Aurora DB cluster.

For a detailed description of this procedure, see Replication Between Aurora and MySQL or Between Aurora and Another Aurora DB Cluster in the Amazon RDS User Guide.

If you don't want to set up replication manually, you can also create an Aurora Read Replica from a source RDS MySQL 5.6 DB instance by using the RDS Management Console. The RDS automation does the following:

1. Creates a snapshot of the source RDS DB instance.
2. Migrates the snapshot to a new Aurora DB cluster.
3. Establishes binary log replication between the source RDS DB instance and the target Aurora DB cluster.

After replication is established, you can complete the cutover steps as described previously.
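On Amazon Aurora MySQL, external binary log replication is configured with RDS-provided stored procedures rather than CHANGE MASTER TO. The following sketch assumes the source is network-reachable from the Aurora cluster, that a replication user already exists on the source, and that the file name and position shown stand in for the coordinates captured in step 3; verify the procedure arguments against the Amazon RDS documentation for your engine version.

-- Step 7: point the Aurora cluster at the source and start replication
-- (run on the target Aurora cluster; host, user, and coordinates are placeholders).
CALL mysql.rds_set_external_master (
  'source-instance.example.com', 3306,
  'repl_user', 'repl_password',
  'mysql-bin-changelog.000002', 120, 0);
CALL mysql.rds_start_replication;

-- Step 8: monitor replication lag until it reaches zero.
SHOW SLAVE STATUS\G

-- Step 12: after cutover, stop replication and remove the external master configuration.
CALL mysql.rds_stop_replication;
CALL mysql.rds_reset_external_master;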
Migrating from Amazon RDS for MySQL Engine Versions Other than 5.6

Direct snapshot migration is only supported for RDS MySQL 5.6 DB instance snapshots. You can migrate RDS MySQL DB instances that are running other engine versions by using the following procedures.

RDS for MySQL 5.1 and 5.5

Follow these steps to migrate RDS MySQL 5.1 or 5.5 DB instances to Amazon Aurora:

1. Upgrade the RDS MySQL 5.1 or 5.5 DB instance to MySQL 5.6.
   • You can upgrade RDS MySQL 5.5 DB instances directly to MySQL 5.6.
   • You must upgrade RDS MySQL 5.1 DB instances to MySQL 5.5 first, and then to MySQL 5.6.
2. After you upgrade the instance to MySQL 5.6, test your applications against the upgraded database and address any compatibility or performance concerns.
3. After your application passes the compatibility and performance tests against MySQL 5.6, migrate the RDS MySQL 5.6 DB instance to Amazon Aurora. Depending on your requirements, choose the Migrating with Downtime or Migrating with Near-Zero Downtime procedure described earlier.

For more information about upgrading RDS MySQL engine versions, see Upgrading the MySQL DB Engine in the Amazon RDS User Guide.

RDS for MySQL 5.7

For migrations from RDS MySQL 5.7 DB instances, the snapshot migration approach is not supported because the database engine version can't be downgraded to MySQL 5.6. In this case, we recommend the manual dump-and-import procedure for migrating MySQL-compatible databases described later in this whitepaper. Such a procedure may be slower than snapshot migration, but you can still perform it with near-zero downtime using binary log replication.

Migrating from MySQL-Compatible Databases

Moving to Amazon Aurora is still a relatively simple process if you are migrating from an RDS MariaDB instance, an RDS MySQL 5.7 DB instance, or a self-managed MySQL-compatible database such as MySQL, MariaDB, or Percona Server running on Amazon Elastic Compute Cloud (Amazon EC2) or on-premises. There are many techniques you can use to migrate your MySQL-compatible database workload to Amazon Aurora. This section describes various migration options to help you choose the most optimal solution for your use case.

Percona XtraBackup

Amazon Aurora supports migration from Percona XtraBackup files that are stored in an Amazon S3 bucket. Migrating from binary backup files can be significantly faster than migrating from logical schema and data dumps using tools like mysqldump. Logical imports work by executing SQL commands to re-create the schema and data from your source database, which involves considerable processing overhead. By comparison, you can use a more efficient binary ingestion method to ingest Percona XtraBackup files. This migration method is compatible with source servers using MySQL versions 5.5 and 5.6.

Migrating from Percona XtraBackup files involves three steps:

1. Use the innobackupex tool to create a backup of the source database.
2. Upload the backup files to an Amazon S3 bucket.
3. Restore the backup files through the AWS Management Console.

For details and step-by-step instructions, see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide.

Self-Managed Export/Import

You can use a variety of export/import tools to migrate your data and schema to Amazon Aurora. These tools can be described as "MySQL native" because they are either part of a MySQL project or were designed specifically for MySQL-compatible databases. Examples of native migration tools include the following:

1. MySQL utilities such as mysqldump, mysqlimport, and the mysql command-line client.
2. Third-party utilities such as mydumper and myloader. For details, see the mydumper project page.
3. Built-in MySQL commands such as SELECT INTO OUTFILE and LOAD DATA INFILE.

Native tools are a great option for power users or database administrators who want to maintain full control over the migration process. Self-managed migrations involve more steps and are typically slower than RDS snapshot or Percona XtraBackup migrations, but they offer the best compatibility and flexibility. For an in-depth discussion of the best practices for self-managed migrations, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

You can execute a self-managed migration with downtime (without replication) or with near-zero downtime (with binary log replication).

Self-Managed Migration with Downtime

The high-level procedure for migrating to Amazon Aurora from a MySQL-compatible database is as follows (a multi-threaded example of the dump and import steps follows the list):

1. Stop all write activity against the source database. Application downtime begins here.
2. Perform a schema and data dump from the source database.
3. Import the dump into the target Aurora DB cluster.
4. Configure applications to connect to the newly created target Aurora DB cluster instead of the source database.
5. Resume write activity. Application downtime ends here.

For an in-depth discussion of performance best practices for self-managed migrations, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.
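For larger databases, the dump and import in steps 2 and 3 can be parallelized with mydumper and myloader, mentioned in the native tools list above (this is the multi-threaded approach covered later in this handbook). The sketch below assumes mydumper and myloader are installed on a client instance that can reach both servers; host names, credentials, thread counts, and the dump directory are placeholders.

# Step 2: multi-threaded, consistent dump of one schema from the source server.
mydumper --host=<source_server_address> --user=<source_user> --password=<source_user_password> \
  --database myschema --outputdir /backups/myschema_dump \
  --threads 8 --compress --triggers --routines --events

# Step 3: multi-threaded import into the target Aurora DB cluster.
myloader --host=<target_cluster_endpoint> --user=<target_user> --password=<target_user_password> \
  --directory /backups/myschema_dump --threads 8 --compress-protocol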
Self-Managed Migration with Near-Zero Downtime

The following is the high-level procedure for a near-zero downtime migration into Amazon Aurora from a MySQL-compatible database:

1. On the source database, enable binary logging and ensure that binary log files are retained for at least the amount of time that is required to complete the remaining migration steps.
2. Perform a schema and data export from the source database. Make sure that the export metadata contains the binary log coordinates that are required to establish replication at a later time.
3. Import the dump into the target Aurora DB cluster.
4. On the target Aurora DB cluster, configure binary log replication from the source database using the binary log coordinates that you obtained in step 2.
5. Wait for the replication to catch up, that is, for the replication lag to reach zero.
6. Stop all write activity against the source database instance. Application downtime begins here.
7. Double-check that there is no outstanding replication lag. Then configure applications to connect to the newly created target Aurora DB cluster instead of the source database.
8. Resume write activity. Application downtime ends here.
9. Terminate replication between the source database and the target Aurora DB cluster.

For an in-depth discussion of performance best practices for self-managed migrations, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

AWS Database Migration Service

AWS Database Migration Service (AWS DMS) is a managed database migration service that is available through the AWS Management Console. It can perform a range of tasks, from simple migrations with downtime to near-zero downtime migrations using CDC replication.

AWS Database Migration Service may be the preferred option if your source database can't be migrated using the methods described previously, such as RDS MySQL 5.6 DB snapshot migration, Percona XtraBackup migration, or native export/import tools. AWS Database Migration Service might also be advantageous if your migration project requires advanced data transformations, such as the following:

• Remapping schema or table names
• Advanced data filtering
• Migrating and replicating multiple database servers into a single Aurora DB cluster

Compared to the migration methods described previously, AWS DMS carries certain limitations:

• It does not migrate secondary schema objects such as indexes, foreign key definitions, triggers, or stored procedures. Such objects must be migrated or created manually prior to data migration.
• The DMS CDC replication uses plain SQL statements from the binlog to apply data changes in the target database. Therefore, it might be slower and more resource-intensive than the native master/slave binary log replication in MySQL.

For step-by-step instructions on how to migrate your database using AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Heterogeneous Migrations

If you are migrating a non-MySQL-compatible database to Amazon Aurora, several options can help you complete the project quickly and easily. A heterogeneous migration project can be split into two phases:

1. Schema migration to review and convert the source schema objects (e.g., tables, procedures, and triggers) into a MySQL-compatible representation.
2. Data migration to populate the newly created schema with data contained in the source database. Optionally, you can use CDC replication for near-zero downtime migration.
Schema Migration

You must convert database objects such as tables, views, functions, and stored procedures to a MySQL 5.6-compatible format before you can use them with Amazon Aurora. This section describes two main options for converting schema objects. Whichever migration method you choose, always make sure that the converted objects are not only compatible with Aurora but also follow MySQL's best practices for schema design.

AWS Schema Conversion Tool

The AWS Schema Conversion Tool (AWS SCT) can greatly reduce the engineering effort associated with migrations from Oracle, Microsoft SQL Server, Sybase, DB2, Azure SQL Database, Teradata, Greenplum, Vertica, Cassandra, PostgreSQL, and other sources. AWS SCT can automatically convert the source database schema and a majority of the custom code, including views, stored procedures, and functions, to a format compatible with Amazon Aurora. Any code that can't be automatically converted is clearly marked so that it can be processed manually. For more information, see the AWS Schema Conversion Tool User Guide.

For step-by-step instructions on how to convert a non-MySQL-compatible schema using the AWS Schema Conversion Tool, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Manual Schema Migration

If your source database is not in the scope of SCT-compatible databases, you can either manually rewrite your database object definitions or use available third-party tools to migrate the schema to a format compatible with Amazon Aurora. Many applications use data access layers that abstract schema design from business application code. In such cases, you can consider redesigning your schema objects specifically for Amazon Aurora and adapting the data access layer to the new schema. This might require a greater upfront engineering effort, but it allows the new schema to incorporate all the best practices for performance and scalability.

Data Migration

After the database objects are successfully converted and migrated to Amazon Aurora, it's time to migrate the data itself. The task of moving data from a non-MySQL-compatible database to Amazon Aurora is best done using AWS DMS. AWS DMS supports initial data migration as well as CDC replication. After the migration task starts, AWS DMS manages all the complexities of the process, including data type transformations, compression, and parallel data transfer. The CDC functionality automatically replicates any changes that are made to the source database during the migration process. For more information, see the AWS Database Migration Service User Guide.

For step-by-step instructions on how to migrate data from a non-MySQL-compatible database into an Amazon Aurora cluster using AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Example Migration Scenarios

There are several approaches for performing both self-managed homogeneous migrations and heterogeneous migrations.

Self-Managed Homogeneous Migrations

This section provides examples of migration scenarios from self-managed MySQL-compatible databases to Amazon Aurora.
For an in-depth discussion of homogeneous migration best practices, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

Note: If you are migrating from an Amazon RDS MySQL DB instance, you can use the RDS snapshot migration feature instead of doing a self-managed migration. See the Migrating from Amazon RDS for MySQL section for more details.

Migrating Using Percona XtraBackup

One option for migrating data from MySQL to Amazon Aurora is to use the Percona XtraBackup utility. For more information about using the Percona XtraBackup utility, see Migrating Data from an External MySQL Database in the Amazon RDS User Guide.

Approach

This scenario uses the Percona XtraBackup utility to take a binary backup of the source MySQL database. The backup files are then uploaded to an Amazon S3 bucket and restored into a new Amazon Aurora DB cluster.

When to Use

You can adopt this approach for small- to large-scale migrations when the following conditions are met:

• The source database is a MySQL 5.5 or 5.6 database.
• You have administrative, system-level access to the source database.
• You are migrating database servers in a 1-to-1 fashion: one source MySQL server becomes one new Aurora DB cluster.

When to Consider Other Options

This approach is not currently supported in the following scenarios:

• Migrating into existing Aurora DB clusters
• Migrating multiple source MySQL servers into a single Aurora DB cluster

Examples

For a step-by-step example, see Migrating Data from an External MySQL Database in the Amazon RDS User Guide.
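The backup and upload half of this scenario (the restore is then driven from the AWS Management Console) might look like the following sketch. It assumes Percona XtraBackup 2.x and the AWS CLI are installed on the source host and that a target S3 bucket already exists; paths, bucket names, and credentials are placeholders, and the exact innobackupex options should be checked against the Percona documentation for your version.

# Create a compressed binary backup of the source database with Percona XtraBackup.
innobackupex --user=<source_user> --password=<source_user_password> \
  --stream=tar /tmp | gzip > /backups/full-backup.tar.gz

# Upload the backup file to the S3 bucket that the Aurora restore will read from.
aws s3 cp /backups/full-backup.tar.gz s3://my-migration-bucket/backups/full-backup.tar.gz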
One-Step Migration Using mysqldump

Another migration option uses the mysqldump utility to migrate data from MySQL to Amazon Aurora.

Approach

This scenario uses the mysqldump utility to export schema and data definitions from the source server and import them into the target Aurora DB cluster in a single step, without creating any intermediate dump files.

When to Use

You can adopt this approach for many small-scale migrations when the following conditions are met:

• The data set is very small (up to 1-2 GB).
• The network connection between the source and target databases is fast and stable.
• Migration performance is not critically important, and the cost of retrying the migration is very low.
• There is no need to do any intermediate schema or data transformations.

When to Consider Other Options

This approach might not be an optimal choice if any of the following conditions are true:

• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.5 or 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. For more details, see the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections.
• It is impossible to establish a network connection from a single client instance to the source and target databases due to network architecture or security considerations.
• The network connection between the source and target databases is unstable or very slow.
• The data set is larger than 10 GB.
• Migration performance is critically important.
• An intermediate dump file is required in order to perform schema or data manipulations before you can import the schema/data.

Notes

For the sake of simplicity, this scenario assumes the following:

1. Migration commands are executed from a client instance running a Linux operating system.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on-premises) that is configured to allow connections from the client instance.
3. The target Aurora DB cluster already exists and is configured to allow connections from the client instance. If you don't yet have an Aurora DB cluster, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Export from the source database is performed using a privileged, super-user MySQL account. For simplicity, this scenario assumes that the user holds all permissions available in MySQL.
5. Import into Amazon Aurora is performed using the Aurora master user account, that is, the account whose name and password were specified during the cluster launch process.

Examples

The following command, when filled with the source and target server and user information, migrates data and all objects in the named schema(s) between the source and target servers.

mysqldump --host=<source_server_address> \
          --user=<source_user> \
          --password=<source_user_password> \
          --databases <schema(s)> \
          --single-transaction \
          --compress \
  | mysql --host=<target_cluster_endpoint> \
          --user=<target_user> \
          --password=<target_user_password>

Descriptions of the options and option values for the mysqldump command are as follows:

• <source_server_address>: DNS name or IP address of the source server
• <source_user>: MySQL user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <schema(s)>: One or more schema names
• <target_cluster_endpoint>: Cluster DNS endpoint of the target Aurora cluster
• <target_user>: Aurora master user name
• <target_user_password>: Aurora master user password
• --single-transaction: Enforces a consistent dump from the source database. Can be skipped if the source database is not receiving any write traffic.
• --compress: Enables network data compression.

See the mysqldump documentation for more details.

Example:

mysqldump --host=source-mysql.example.com \
          --user=mysql_admin_user \
          --password=mysql_user_password \
          --databases schema1 \
          --single-transaction \
          --compress \
  | mysql --host=aurora-cluster.xxxxx.amazonaws.com \
          --user=aurora_master_user \
          --password=aurora_user_password

Note: This migration approach requires application downtime while the dump and import are in progress. You can avoid application downtime by extending the scenario with MySQL binary log replication. See the Self-Managed Migration with Near-Zero Downtime section for more details.
Flat-File Migration Using Files in CSV Format

This scenario demonstrates a schema and data migration using flat-file dumps, that is, dumps that do not encapsulate data in SQL statements. Many database administrators prefer to use flat files over SQL-format files for the following reasons:

• Lack of SQL encapsulation results in smaller dump files and reduces processing overhead during import.
• Flat-file dumps are easier to process using OS-level tools; they are also easier to manage (e.g., split or combine).
• Flat-file formats are compatible with a wide range of database engines, both SQL and NoSQL.

Approach

The scenario uses a hybrid migration approach:

• Use the mysqldump utility to create a schema-only dump in SQL format. The dump describes the structure of schema objects (e.g., tables, views, and functions) but does not contain data.
• Use SELECT INTO OUTFILE SQL commands to create data-only dumps in CSV format. The dumps are created in a one-file-per-table fashion and contain table data only (no schema definitions).

The import phase can be executed in two ways:

• Traditional approach: Transfer all dump files to an Amazon EC2 instance located in the same AWS Region and Availability Zone as the target Aurora DB cluster. After transferring the dump files, you can import them into Amazon Aurora using the mysql command-line client for the SQL-format schema dumps and LOAD DATA LOCAL INFILE SQL commands for the flat-file data dumps, respectively. This is the approach that is demonstrated later in this section.
• Alternative approach: Transfer the SQL-format schema dumps to an Amazon EC2 client instance and import them using the mysql command-line client. You can transfer the flat-file data dumps to an Amazon S3 bucket and then import them into Amazon Aurora using LOAD DATA FROM S3 SQL commands (a short sketch appears later in this section). For more information, including an example of loading data from Amazon S3, see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide.

When to Use

You can adopt this approach for most migration projects where performance and flexibility are important:

• You can dump small data sets and import them one table at a time. You can also run multiple SELECT INTO OUTFILE and LOAD DATA INFILE operations in parallel for best performance.
• Data that is stored in flat-file dumps is not encapsulated in database-specific SQL statements. Therefore, it can be handled and processed easily by the systems participating in the data exchange.

When to Consider Other Options

You might choose not to use this approach if any of the following conditions are true:

• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details.
• The data set is very small and does not require a high-performance migration approach.
• You want the migration process to be as simple as possible, and you don't require any of the performance and flexibility benefits listed earlier.
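Before the step-by-step example below, here is a brief sketch of the alternative, S3-based import path mentioned in the Approach subsection. It assumes the Aurora cluster has been granted an IAM role that allows reading from the bucket (for example, through the aurora_load_from_s3_role or aws_default_s3_role cluster parameter); the bucket, path, and table names are placeholders.

-- Load one table's CSV dump directly from Amazon S3 into Aurora (alternative approach).
LOAD DATA FROM S3 's3://my-migration-bucket/dumps/myschema_dump_t1.csv'
INTO TABLE myschema.t1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';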
Notes

To simplify the demonstration, this scenario assumes the following:
1. Migration commands are executed from client instances running a Linux operating system:
  o Client instance A is located in the source server's network.
  o Client instance B is located in the same Amazon VPC, Availability Zone, and Subnet as the target Aurora DB cluster.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on premises) configured to allow connections from client instance A.
3. The target Aurora DB cluster already exists and is configured to allow connections from client instance B. If you don't have an Aurora DB cluster yet, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Communication is allowed between both client instances.
5. Export from the source database is performed using a privileged, super-user MySQL account. For simplicity, this scenario assumes that the user holds all permissions available in MySQL.
6. Import into Amazon Aurora is performed using the master user account, that is, the account whose name and password were specified during the cluster launch process.

Note that this migration approach requires application downtime while the dump and import are in progress. You can avoid application downtime by extending the scenario with MySQL binary log replication. See the Self-Managed Migration with Near-Zero Downtime section for more details.

Examples

In this scenario, you migrate a MySQL schema named myschema. The first step of the migration is to create a schema-only dump of all objects:

mysqldump --host=<source_server_address> \
  --user=<source_user> \
  --password=<source_user_password> \
  --databases <schema(s)> \
  --single-transaction \
  --no-data > myschema_dump.sql

Descriptions of the options and option values for the mysqldump command are as follows:
• <source_server_address>: DNS name or IP address of the source server
• <source_user>: MySQL user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <schema(s)>: One or more schema names
• <target_cluster_endpoint>: Cluster DNS endpoint of the target Aurora cluster
• <target_user>: Aurora master user name
• <target_user_password>: Aurora master user password
• --single-transaction: Enforces a consistent dump from the source database. Can be skipped if the source database is not receiving any write traffic.
• --no-data: Creates a schema-only dump without row data.

For more details, see mysqldump in the MySQL 5.6 Reference Manual.

Example:

admin@clientA:~$ mysqldump --host=11.22.33.44 --user=root \
  --password=pAssw0rd --databases myschema \
  --single-transaction --no-data > myschema_dump_schema_only.sql

After you complete the schema-only dump, you can obtain data dumps for each table. After logging in to the source MySQL server, use the SELECT INTO OUTFILE statement to dump each table's data into a separate CSV file.

admin@clientA:~$ mysql --host=11.22.33.44 --user=root --password=pAssw0rd

mysql> show tables from myschema;
+--------------------+
| Tables_in_myschema |
+--------------------+
| t1                 |
| t2                 |
| t3                 |
| t4                 |
+--------------------+
4 rows in set (0.00 sec)

mysql> SELECT * INTO OUTFILE '/home/admin/dump/myschema_dump_t1.csv'
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    -> LINES TERMINATED BY '\n'
    -> FROM myschema.t1;
Query OK, 4194304 rows affected (2.35 sec)

(repeat for all remaining tables)
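The When to Use section noted that you can run multiple SELECT INTO OUTFILE operations in parallel. The following bash sketch automates that idea and is illustrative rather than part of the original procedure. It assumes the dump user holds the FILE privilege, that the server's secure_file_priv setting allows writing to /home/admin/dump on the source host, and that four concurrent dumps are a reasonable load for the source server; adjust -P to match what the source host can sustain.

# List all tables in the schema, then dump them four at a time in parallel
admin@clientA:~$ mysql --host=11.22.33.44 --user=root --password=pAssw0rd \
    --batch --skip-column-names -e "SHOW TABLES FROM myschema" | \
  xargs -I{} -P 4 mysql --host=11.22.33.44 --user=root --password=pAssw0rd -e \
    "SELECT * INTO OUTFILE '/home/admin/dump/myschema_dump_{}.csv' \
     FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' \
     LINES TERMINATED BY '\n' FROM myschema.{}"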
For more information about SELECT INTO statement syntax, see SELECT ... INTO Syntax in the MySQL 5.6 Reference Manual.

After you complete all dump operations, the /home/admin/dump directory contains five files: one schema-only dump and four data dumps, one per table.

admin@clientA:~/dump$ ls -sh1
total 685M
 40K myschema_dump_schema_only.sql
172M myschema_dump_t1.csv
172M myschema_dump_t2.csv
172M myschema_dump_t3.csv
172M myschema_dump_t4.csv

Next, you compress and transfer the files to client instance B, located in the same AWS Region and Availability Zone as the target Aurora DB cluster. You can use any file transfer method available to you (e.g., FTP or Amazon S3). This example uses SCP with SSH private key authentication.

admin@clientA:~/dump$ gzip myschema_dump_*.csv
admin@clientA:~/dump$ scp -i sshkey.pem myschema_dump_* \
  <clientB_ssh_user>@<clientB_address>:/home/ec2-user/

After transferring all the files, you can decompress them and import the schema and data. Import the schema dump first, because all relevant tables must exist before any data can be inserted into them.

admin@clientB:~/dump$ gunzip myschema_dump_*.csv.gz
admin@clientB:~$ mysql --host=<cluster_endpoint> --user=master \
  --password=pAssw0rd < myschema_dump_schema_only.sql

With the schema objects created, the next step is to connect to the Aurora DB cluster endpoint and import the data files. Note the following:
• The mysql client invocation includes a --local-infile parameter, which is required to enable support for LOAD DATA LOCAL INFILE commands.
• Before importing data from dump files, use a SET command to disable foreign key constraint checks for the duration of the database session. Disabling foreign key checks not only improves import performance, but it also lets you import data files in arbitrary order.

admin@clientB:~$ mysql --local-infile --host=<cluster_endpoint> \
  --user=master --password=pAssw0rd

mysql> SET foreign_key_checks = 0;
Query OK, 0 rows affected (0.00 sec)

mysql> LOAD DATA LOCAL INFILE '/home/ec2-user/myschema_dump_t1.csv'
    -> INTO TABLE myschema.t1
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    -> LINES TERMINATED BY '\n';
Query OK, 4194304 rows affected (1 min 26.6 sec)
Records: 4194304  Deleted: 0  Skipped: 0  Warnings: 0

(repeat for all remaining CSV files)

mysql> SET foreign_key_checks = 1;
Query OK, 0 rows affected (0.00 sec)

That's it, you have imported the schema and data dumps into the Aurora DB cluster.
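Before handing the new cluster over for testing, it is a good idea to confirm that the imported row counts match the source. The query below is an illustrative spot check, not part of the original procedure: run it once against the source server and once against the Aurora cluster endpoint, then compare the two result sets. For very large tables, exact counts are expensive, so you may prefer to run them during a quiet period or compare table checksums instead.

mysql> SELECT 't1' AS table_name, COUNT(*) AS row_count FROM myschema.t1
    -> UNION ALL SELECT 't2', COUNT(*) FROM myschema.t2
    -> UNION ALL SELECT 't3', COUNT(*) FROM myschema.t3
    -> UNION ALL SELECT 't4', COUNT(*) FROM myschema.t4;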
You can find more tips and best practices for self-managed migrations in the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

Multi-Threaded Migration Using mydumper and myloader

Mydumper and myloader are popular open-source MySQL export/import tools designed to address performance issues associated with the legacy mysqldump program. They operate on SQL-format dumps and offer advanced features such as the following:
• Dumping and loading data using multiple parallel threads
• Creating dump files in a file-per-table fashion
• Creating chunked dumps in a multiple-files-per-table fashion
• Dumping data and metadata into separate files for easier parsing and management
• Configurable transaction size during import
• Ability to schedule dumps in regular intervals

For more details, see the MySQL Data Dumper project page.

Approach

The scenario uses the mydumper and myloader tools to perform a multi-threaded schema and data migration without the need to manually invoke any SQL commands or design custom migration scripts. The migration is performed in two steps:
1. Use the mydumper tool to create a schema and data dump, using multiple parallel threads.
2. Use the myloader tool to process the dump files and import them into an Aurora DB cluster, also in multi-threaded fashion.

Note that mydumper and myloader might not be readily available in the package repository of your Linux/Unix distribution. For your convenience, the scenario also shows how to build the tools from source code.

When to Use

You can adopt this approach in most migration projects:
• The utilities are easy to use and enable database users to perform multi-threaded dumps and imports without the need to develop custom migration scripts.
• Both tools are highly flexible and have reasonable configuration defaults. You can adjust the default configuration to satisfy the requirements of both small- and large-scale migrations.

When to Consider Other Options

You might decide not to use this approach if any of the following conditions are true:
• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.5 or 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details.
• You can't use third-party software because of operating system limitations.
• Your data transformation processes require intermediate dump files in a flat-file format and not an SQL format.

Notes

To simplify the demonstration, this scenario assumes the following:
1. You execute the migration commands from client instances running a Linux operating system:
  a. Client instance A is located in the source server's network.
  b. Client instance B is located in the same Amazon VPC, Availability Zone, and Subnet as the target Aurora cluster.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on premises) configured to allow connections from client instance A.
3. The target Aurora DB cluster already exists and is configured to allow connections from client instance B. If you don't have an Aurora DB cluster yet, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Communication is allowed between both client instances.
5. You perform the export from the source database using a privileged, super-user MySQL account. For simplicity, the example assumes that the user holds all permissions available in MySQL.
6. You perform the import into Amazon Aurora using the master user account, that is, the account whose name and password were specified during the cluster launch process.
7. The Amazon Linux 2016.03.3 operating system is used to
demonstrate the configuration and compilation steps for mydumper and myloader Note : This migration approach requires application down time while the dump and import are in progress You can avoid application downtime by extending the scenario with MySQL binary log replication See the Self Managed Migration with Near Zero Dow ntime section for more details Examples (Preparing Tools) The first step is to obtain and build the mydumper and myloader tools See the MySQL Data Dumper project page for up todate download links and to ensure that tools are prepared on both client instances This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 42 The utilities depend on several packages that you should install first [ec2user@clientA ~]$ sudo yum install glib2 devel mysql56 \ mysql56devel zlib devel pcre devel openssl devel g++ gcc c++ cmake The next steps involve creating a directory to hold the program sources and then fetching and unpacking the source archive [ec2user@clientA ~]$ mkdir mydumper [ec2 user@clientA ~]$ cd mydumper/ [ec2user@clientA mydumper]$ wget https://launchp adnet/mydumper/09/091/+download/mydumper 091targz 20160629 21:39:03 (153 KB/s) ‘mydumper 091targz’ saved [44463/44463] [ec2user@clientA mydumper]$ tar zxf mydumper 091targz [ec2user@clientA mydumper]$ cd mydumper 091 Next you b uild the binary executables [ec2user@clientA mydumper 091]$ cmake (…) [ec2user@clientA mydumper 091]$ make Scanning dependencies of target mydumper [ 25%] Building C object CMakeFiles/mydumperdir/mydumperco [ 50%] Building C object CMakeFiles/mydumperdir/server_detectco [ 75%] Building C object CMakeFiles/mydumperdir/g_unix_signalco Linking C executable mydumper [ 75%] Built target mydumper Scanning dependencies of target myloader [100%] Building C object CMakeFiles/myloaderdi r/myloaderco Linking C executable myloader [100%] Built target myloader This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 43 Optionally you can move the binaries to a location defined in the operating system $PATH so that they can be executed more conveniently [ec2user@clientA mydumper 091]$ sudo mv mydumper /usr/local/bin/mydumper [ec2user@clientA mydumper 091]$ sudo mv myloader /usr/local/bin/myloader As a final step confirm that both utilities are available in the system [ec2user@clientA ~]$ mydumper V mydumper 091 built against MySQL 5631 [ec2user@clientA ~]$ myloader V myloader 091 built against MySQL 5631 Examples (Migration) After completing the preparation steps you can perform the migration The mydumper command uses the following basic syntax mydumper h <source_serve r_address> u <source_user> \ p <source_user_password> B <source_schema> \ t <thread_count> o <output_directory> Descriptions of the parameter values are as follows: • <source_server_address> : DNS name or IP address of the source server • <source_user> : MySQL user account name on the source server • <source_user_password> : MySQL user account password on the source server • <source_schema> : Name of the schema to dump • <thread_count> : Number of parallel threads used to dump the data This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating 
your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 44 • <output_directory> : Name of the directory where dump files should be placed Note : mydumper is a highly customizable data dumping tool For a complete list of supported parameters and their default values use the builtin help mydumper help The example dump is executed as follows [ec2user@clientA ~]$ mydumper h 11223344 u root \ p pAssw0rd B myschema t 4 o myschema_dump/ The operation results in the following files being created in the dump directory [ec2user@clientA ~]$ ls sh1 myschema_dum p/ total 733M 40K metadata 40K myschema schemacreatesql 40K myschemat1 schemasql 184M myschemat1sql 40K myschemat2 schemasql 184M myschemat2sql 40K myschemat3 schemasql 184M myschemat3sql 40K myschemat4 schemasql 184M myschemat4sql The directory contains a collection of metadata files in addition to schema and data dumps You don’t have to manipulate these files directly It’s enough that the directory structure is understood by the myloader tool Compress the entire directory and transfer it to client instance B [ec2user@clientA ~]$ tar czf myschema_dumptargz myschema_dump [ec2user@clientA ~]$ scp i sshkeypem myschema_dumptargz \ <clientB_ssh_user>@<clientB_address>:/home/ec2 user/ This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 45 When the transfer is complete connect to client instance B and verify that the myloader utility is available [ec2user@clientB ~]$ myloader V myloader 091 built against MySQL 5631 Now you can u npack the dump and import it The syntax used for the myloader command is very similar to what you already used for mydumper The only difference is the d (source directory) parameter replacing the o (target directory) parameter [ec2user@clientB ~]$ tar zxf myschema_dumptargz [ec2user@clientB ~]$ myloader h <cluster_dns_endpoint> \ u master p pAssw0rd B myschema t 4 d myschema_dump/ Useful Tips • The concurrency level (thread count) does not have to be the same for export and import operations A good rule of thumb is to use one thread per server CPU core (for dumps) and one thread per two CPU cores (for imports) • The schema and data dumps produced by mydumper use an SQL format and are compatible with MySQL 56 Although you will typically use the pair of mydumper and myloader tools together for best results technically you can import the dump files from myloader by using any other MySQL compatible client tool You can find more tips and best practices for self managed migrations in t he AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora Heterogeneous Migrations For detailed step bystep instructions on how to migrate schema and data from a non MySQL compatib le database into an Aurora DB cluster using AWS SCT and AWS DMS see the AWS whitepaper Migrating Your Databases to Amazon Aurora Prior to running migration we suggest you to review Proof of Concept with Aurora to This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 46 understand the volume of data and representative of your production environment as a blueprint Testing and Cutover Once the schema and data have been successfully migrated from the source database to Amazon Aurora you 
are no w ready to perform end toend testing of your migration process The testing approach should be refined after each test migration and the final migration plan should include a test plan that ensures adequate testing of the migrated database Migration T esting Test Category Purpose Basic acceptance tests These pre cutover tests should be automatically executed upon completion of the data migration process Their primary purpose is to verify whether the data migration was successful Following are some common outputs from these tests: • Total number of items processed • Total number of items imported • Total number of items skipped • Total number of warnings • Total number of errors If any of these totals reported by the tests deviate from the expec ted values then it means the migration was not successful and the issues need to be resolved before moving to the next step in the process or the next round of testing This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 47 Test Category Purpose Functional tests These post cutover tests exercise the functionality of the applicat ion(s) using Aurora for data storage They include a combination of automated and manual tests The primary purpose of the functional tests is to identify problems in the application caused by the migration of the data to Aurora Nonfunctional tests Thes e post cutover tests assess the nonfunctional characteristics of the application such as performance under varying levels of load User acceptance tests These post cutover tests should be executed by the end users of the application once the final data migration and cutover is complete The purpose of these tests is for the end users to decide if the application is sufficiently usable to meet its primary function in the organization Cutover Once you have completed the final migration and testing it is time to point your application to the Amazon Aurora database This phase of migration is known as cutover If the planning and testing phase has been executed properly cutover should not lead to unexpected issues Precutover Actions • Choose a cutover window: Identify a block of time when you can accomplish cutover to the new database with minimum disruption to the business Normally you would select a low activity period for the database (typically nights and/or weekends) This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 48 • Make sure changes are caught up: If a near zero downtime migration approach was used to replicate database changes from the source to the target database make sure that all database changes are caught up and your target database is not significantly lagging behind the sour ce database • Prepare scripts to make the application configuration changes: In order to accomplish the cutover you need to modify database connection details in your application configuration files Large and complex applications may require updates to co nnection details in multiple places Make sure you have the necessary scripts ready to update the connection configuration quickly and reliably • Stop the application: Stop the application processes on the source database and put the source database in read only mode so that no further writes can be 
made to the source database If the source database changes aren’t fully caught up with the target database wait for some time while these changes are fully propagated to the target database • Execute pre cutove r tests: Run automated pre cutover tests to make sure that the data migration was successful Cutover • Execute cutover: If pre cutover checks were completed successfully you can now point your application to Amazon Aurora Execute scripts created in the p re cutover phase to change the application configuration to point to the new Aurora database • Start your application: At this point you may start your application If you have an ability to stop users from accessing the application while the application is running exercise that option until you have executed your post cutover checks Post cutover Checks • Execute post cutover tests: Execute predefined automated or manual test cases to make sure your application works as expected with the new database It ’s a good strategy to start testing read only functionality of the database first before executing tests that write to the database This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 49 Enable user access and closely monitor: If your test cases were executed successfully you may give user access to the app lication to complete the migration process Both application and database should be closely monitored at this time Troubleshooting The following sections provide examples of common issues and error messages to help you troubleshoot heterogenous DMS migrat ions Troubleshooting MyS QL Specific Issues The following issues are specific to using AWS DMS with MySQL databases Topics • CDC Task Failing for Amazon RDS DB Instance Endpoint Because Binary Logging Disabled • Connections to a target MySQL instance are disconnected during a task • Adding Autocommit to a MySQL compatible Endpoint • Disable Foreign Keys on a Target MySQL compatible Endpoint • Characters Replaced with Question Mark • "Bad event" Log Entries • Change Data Capture with MySQL 55 • Increasing Binary Log Retention for Amazon RDS DB Instances • Log Message: Some changes from the source database had no impact when applied to the target database • Error: Identifier too long • Error: Unsupported Character Set Causes Field Data Conversion to Fail • Error: Codepage 1252 to UTF8 [120112] A field data conversion failed This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 50 CDC Task Failing for Amazon RDS DB Instance E ndpoint Because Binary Logging Disabled This issue occurs with Amazon RDS DB instances because automated backups are disabled Enable automatic backups by setting the backup retention period to a non zero value Connections to a target MySQL instance are disconnected during a task If you have a task with LOBs that is getting disconnected from a MySQL target with the following type of errors in the task log you might need to adjust some of your task settings [TARGET_LOAD ]E: RetCode: SQL_ ERROR SqlState : 08S01 NativeError: 2013 Message: [ MySQL][ODBC 53(w) Driver ][mysqld5716log]Lost connection to MySQL server during query [122502] ODBC general error To solve the issue where a task is being disconnected from a MySQL target do the 
following:
• Check that you have your database variable max_allowed_packet set large enough to hold your largest LOB.
• Check that you have the following variables set to have a large timeout value. We suggest you use a value of at least 5 minutes for each of these variables:
  o net_read_timeout
  o net_write_timeout
  o wait_timeout
  o interactive_timeout

Adding Autocommit to a MySQL-compatible Endpoint

To add autocommit to a target MySQL-compatible endpoint, use the following procedure:
1. Sign in to the AWS Management Console and select DMS.
2. Select Endpoints.
3. Select the MySQL-compatible target endpoint that you want to add autocommit to.
4. Select Modify.
5. Select Advanced, and then add the following code to the Extra connection attributes text box:
Initstmt=SET AUTOCOMMIT=1
6. Choose Modify.

Disable Foreign Keys on a Target MySQL-compatible Endpoint

You can disable foreign key checks on MySQL by adding the following to the Extra Connection Attributes in the Advanced section of the target MySQL, Amazon Aurora with MySQL compatibility, or MariaDB endpoint.

To disable foreign keys on a target MySQL-compatible endpoint, use the following procedure:
1. Sign in to the AWS Management Console and select DMS.
2. Select Endpoints.
3. Select the MySQL, Aurora MySQL, or MariaDB target endpoint for which you want to disable foreign keys.
4. Select Modify.
5. Select Advanced, and then add the following code to the Extra connection attributes text box:
Initstmt=SET FOREIGN_KEY_CHECKS=0
6. Choose Modify.

Characters Replaced with Question Mark

The most common situation that causes this issue is when the source endpoint characters have been encoded by a character set that AWS DMS doesn't support. For example, AWS DMS engine versions prior to version 3.1.1 don't support the UTF8MB4 character set.

"Bad event" Log Entries

Bad event entries in the migration logs usually indicate that an unsupported DDL operation was attempted on the source database endpoint. Unsupported DDL operations cause an event that the replication instance cannot skip, so a bad event is logged. To fix this issue, restart the task from the beginning, which will reload the tables and will start capturing changes at a point after the unsupported DDL operation was issued.

Change Data Capture with MySQL 5.5

AWS DMS change data capture (CDC) for Amazon RDS MySQL-compatible databases requires full-image, row-based binary logging, which is not supported in MySQL version 5.5 or lower. To use AWS DMS CDC, you must upgrade your Amazon RDS DB instance to MySQL version 5.6.

Increasing Binary Log Retention for Amazon RDS DB Instances

AWS DMS requires the retention of binary log files for change data capture. To increase log retention on an Amazon RDS DB instance, use the following procedure. The following example increases the binary log retention to 24 hours.

call mysql.rds_set_configuration('binlog retention hours', 24);

Log
Message: Some changes from the source database had no impact when applied to the target database When AWS DMS updates a MySQL database column’s value to its existing value a message of zero rows a ffected is returned from MySQL This behavior is unlike other database engines such as Oracle and SQL Server that perform an update of one row even when the replacing value is the same as the current one Error: Identifier too long The following error oc curs when an identifier is too long: TARGET_LOAD E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 1059 Message: MySQLhttp://ODBC 53(w) Driverhttp://mysqld 5610Identifier name '<name>' is too long 122502 ODBC general error (ar_odbc_stmtc: 4054) When AWS DMS is set to create the tables and primary keys in the target database it currently does not use the same names for the Primary Keys that were used in the source database Instead AWS DMS creates the Primary Key na me based on the tables name When the table name is long the auto generated identifier created can be longer than the allowed limits for MySQL The solve this issue currently pre create the tables and Primary Keys in the target database and use a task w ith the task setting Target table preparation mode set to Do nothing or Truncate to populate the target tables Error: Unsupported Character Set Causes Field Data Conversion to Fail The following error occurs when an unsupported character set causes a fi eld data conversion to fail: "[SOURCE_CAPTURE ]E: Column '<column name>' uses an unsupported character set [120112] A field data conversion failed (mysql_endpoint_capturec: 2154) This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 54 This error often occurs because of tables or databases using U TF8MB4 encoding AWS DMS engine versions prior to 311 don't support the UTF8MB4 character set In addition check your database's parameters related to connections The following command can be used to see these parameters: SHOW VARIABLES LIKE '%char%' ; Error: Codepage 1252 to UTF8 [120112] A field data conversion failed The following error can occur during a migration if you have non codepage 1252 characters in the source MySQL database [SOURCE_CAPTURE ]E: Error converting column 'column_xyz' in tabl e 'table_xyz with codepage 1252 to UTF8 [120112] A field data conversion failed (mysql_endpoint_capturec: 2248) As a workaround you can use the CharsetMapping extra connection attribute with your source MySQL endpoint to specify character set mapping You might need to restart the AWS DMS migration task from the beginning if you add this extra connection attribute For example the following extra connection a ttribute could be used for a MySQL source endpoint where the source character set is utf8 or latin1 65001 is the UTF8 code page identifier CharsetMapping =utf865001 CharsetMapping =latin165001 Conclusion Amazon Aurora is a high performance highly available and enterprise grade database built for the cloud Leveraging Amazon Aurora can result in better performance and greater availability than other open source databases and lower costs than most commercial grade databases This paper proposes stra tegies for identifying the best method to migrate databases to Amazon Aurora and details the procedures for planning This paper has been archived For the latest Ama zon Aurora Migration content refer to: 
https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 55 and executing those migrations In particular AWS Database Migration Service (AWS DMS) as well as the AWS Schema Conversion Tool are the r ecommended tools for heterogeneous migration scenarios These powerful tools can greatly reduce the cost and complexity of database migrations Multiple factors contribute to a successful database migration: • The choice of the database product • A migration approach (eg methods tools) that meets performance and uptime requirements • Welldefined migration procedures that enable database administrators to prepare test and complete all migration steps with confidence • The ability to identify diagnose and deal with issues with little or no interruption to the migration process We hope that the guidance provided in this document will help you introduce meaningful improvements in all of these areas and that it will ultimately contribute to creating a bette r overall experience for your database migrations into Amazon Aurora Contributors Contributors to this document include : • Bala Mugunthan Sr Partner Solution Architect Amazon Web Services • Ashar Abbas Database Specialty Architect • Sijie Han SA Manager A mazon Web Services • Szymon Komendera Database Engineer Amazon Web Services This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 56 Further Reading For additional information see: • Aurora on Amazon RDS User Guide • Migrating Your Databases t o Amazon Aurora AWS whitepaper • Best Practices for Migrating MySQL Databases to Amazon Aurora AWS whitepaper Document Revisions Date Description July 2020 Added information for the large databases migrations on Amazon Aurora and functional p artition and data shard consolidation strategies are discussed in homogenous migration s ection s Multi threaded migration using mydumper and myload er open source tools are introduced Overall basic acceptance testing functional test non functional test and user acceptance tests are explained in the testing phase and pre cutover and post cut overs phase scenarios are further explained September 2019 First publication
General
A_Practical_Guide_to_Cloud_Migration_Migrating_Services_to_AWS
Archived A Practical Gui de to Cl oud Migration Migratin g Service s to AWS December 2015 This paper has been archived For the latest technical content see: https://docsawsamazoncom/prescriptiveguidance/latest/mrpsolution/mrpsolutionpdfArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 2 of 13 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice C ustomers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document do es not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this docum ent is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 3 of 13 Contents Abstract 3 Introduction 4 AWS Cloud Adoption Framework 4 Manageable Areas of Focus 4 Successful Migrations 5 Breaking Down the Economics 6 Understand OnPremises Costs 6 Migration Cost Considerations 8 Migration Options 10 Conclusion 12 Further Reading 13 Contributors 13 Abstract To achieve full benefits of moving applications to the Amazon Web Services (AWS) platform it is critical to design a cloud migration model that delivers optimal cost efficiency This includes establishing a compelling business case acquiring new skills within the IT organization implemen ting new business processes and defining the application migration methodology to transform your business model from a traditional on premises computing platform to a cloud infrastructure ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 4 of 13 Perspective Areas of Focus Introduction Cloudbased computing introduces a radical shift in how technology is obtained used and managed as well as how organizations budget and pay for technology services With the AWS cloud platform project teams can easily configure the virtual network using t heir AWS account to launch new computing environments in a matter of minutes Organizations can optimize spending with the ability to quickly reconfigure the computing environment to adapt to changing business requirements Capacity can be automatically sc aled —up or down —to meet fluctuating usage patterns Services can be temporarily taken offline or shut down permanently as business demands dictate In addition with pay peruse billing AWS services become an operational expense rather than a capital expense AWS Cloud Adoption Framework Each organization will experience a unique cloud adoption journey but benefit from a structured framework that guides them through the process of transforming their people processes and technology The AWS Cloud Adopt ion Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey Guidance and best practices prescribed within the framework can help you build a comprehensive approach to cloud comput ing across your organization throughout your IT lifecycle Manageable Areas of Focus The AWS CAF 
breaks down the complicated planning process into manageable areas of focus Perspectives represent top level areas of focus spanning people process and te chnology Components identify specific aspects within each Perspective that require attention while Activities provide prescriptive guidance to help build actionable plans The AWS Cloud Adoption Framework is flexible and adaptable allowing organizations to use Perspectives Components and Activities as building blocks for their unique journey Business Perspective Focuses on identifying measuring and creating business value using technology services The Components and Activities within the Business Perspective can help you develop a business case for cloud align ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 5 of 13 business and technology strategy and support stakeholder engagement Platform Perspective Focuses on describing the structure and relationship of technology elements and services in complex IT environments Components and Activities within the Perspective can help you develop conceptual and functional models of your IT environment Maturity Perspective Focuses on defining the target state of an organization's capabilities measuring maturity and optimizing resources Components within Maturity Perspective can help assess the organization's maturity level develop a heat map to prioritize initiatives and sequence initiatives to develop the roadm ap for execution People Perspective Focuses on organizational capacity capability and change management functions required to implement change throughout the organization Components and Activities in the Perspective assist with defining capability and skill requirements assessing current organizational state acquiring necessary skills and organizational re alignment Process Perspective Focuses on managing portfolios programs and proj ects to deliver expected business outcome on time and within budget while keeping risks at acceptable levels Operations Perspective Focuses on enabling the ongoing operation of IT environments Components and Activities guide operating procedures service management change management and recovery Security Perspective Focuse s on helping organizations achieve risk management and compliance goals with guidance enabling rigorous methods to describe structure of security and compliance processes systems and personnel Components and Activities assist with assessment control selection and compliance validation with DevSecOps principles and automation Successful Migrations The path to the cloud is a journey to business results AWS has helped hundreds of customers achieve their business goals at every stage of their journey While every organization’s path will be unique there are common patterns approaches and best pract ices that can be implemented to streamline the process 1 Define your approach to cloud computing from business case to strategy to change management to technology 2 Build a solid foundation for your enterprise workloads on AWS by assessing and validating yo ur application portfolio and integrating your unique IT environment with solutions based on AWS cloud services Perspective Areas of Focus ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 6 of 13 3 Design and optimize your business applications to be cloud aware taking direct advantage of the benefits of AWS services 4 Meet your internal and external compliance requirements by developing and implementing automated security policies 
and controls based on proven validated designs Early planning communication and buy in are essential Understanding the forcing function (tim e cost availability etc) is key and will be different for each organization When defining the migration model organizations must have a clear strategy map out a realistic project timeline and limit the number of variables and dependencies for trans itioning on premises applications to the cloud Throughout the project build momentum with key constituents with regular meetings and reporting to review progress and status of the migration project to keep people enthused while also setting realistic ex pectations about the availability timeframe Breaking Down the Economics Understand On Premises Costs Having a clear understanding of your current costs is an important first step of your journey This provides the baseline for defining the migration model that delivers optimal cost efficiency Onpremises data centers have costs associated with the servers storage networking power cooling physical space and IT labor required to support applications and services running in the production environment Although many of these costs will be eliminated or reduced after applications and infrastructure are moved to the AWS platform knowing your current run rate will help determine which applications are good candidates to move to AWS which applications need to be rewrit ten to benefit from cloud efficiencies and which applications should be retired The following questions should be evaluated when calculating the cost of on premises computing: Understanding Costs To build a migration model for optimal efficiency it is important to accurately understand the current costs of running onpremises applications as well as the interim costs incurred during the transition ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 7 of 13 “Georgetown’s modernization strategy is not just about upgrading old systems; it is about changing the way we do business building new partnerships with the community and working to embrace innovation Cloud has been an important component of this Although we thought the primary driver would be cost savings we have found that agility innovation and the opportuni ty to change paths is where the true value of the cloud has impacted our environment “Traditional IT models with heavy customization and sunk costs in capital infrastructures —where 90% of spend is just to keep the trains running —does not give you the opp ortunity to keep up and grow” Beth Ann Bergsmark Interim Deputy CIO and AVP Chief Enterprise Architect Georgetown University  Labor How much do you spend on maintaining your environment (broken disks patching hosts servers going offline etc)?  Network How much bandwidth do you need? What is your bandwidth peak to average ratio? What are you assuming for network gear? What if you need to scale beyond a single rack?  Capacity What is the cost of over provisioning for peak capacity? How do you plan for capacity? How much buffer capacity are you planning on carrying? If small what is your plan if you need to add more? What if you need less capacity? What is your plan to be abl e to scale down costs? How many servers have you added in the past year? Anticipating next year?  Availability / Power Do you have a disaster recovery (DR) facility? What was your power utility bill for your data center(s) last year? Have you budgeted for both average and peak power requirements? 
Do you have separate costs for cooling/ HVAC? Are you accounting for 2N power? If not what happens when you have a power issue to your rack?  Servers What is your average server utilization? How much do you overpr ovision for peak load? What is the cost of over provisioning?  Space Will you run out of data center space? When is your lease up? ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 8 of 13 Migration Cost Considerations To achieve the maximum benefits of adopting the AWS cloud platform new work pract ices that drive efficiency and agility will need to be implemented:  IT staff will need to acquire new skills  New business processes will need to be defined  Existing business processes will need to be modified Migration Bubble AWS uses the term “migration bubble” to describe the time and cost of moving applications and infrastructure from on premises data centers to the AWS platform Although the cloud can provide significant savings costs may increase as you move into the migration bubble It i s important to plan the migration to coincide with hardware retirement license and maintenance expiration and other opportunities to reduce cost The savings and cost avoidance associated with a full all in migration to AWS will allow you to fund the mig ration bubble and even shorten the duration by applying more resources when appropriate Time Figure 1: Migration Bubble Level of Effort The cost of migration has many levers that can be pulled in order to speed up or slow down the process including labor process tooling consulting and technology Each of these has a corresponding cost associated with it based on the level of effort required to move the application to the AWS platform Migration Bubble Planning • • • • • • Planning and Assessment Duplicate Environments Staff Training Migration Consulting 3rd Party Tooling Lease Penalties Operation and Optimization Cost of Migration $ ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 9 of 13 To calculate a realistic total cost of ownership (TCO) you need to understand what these costs are and plan for them Cost considerations include items such as:  Labor During the transition existing staff will need to continue to maintain the production environment learn new skills and decommission the old infrastructure once the migration is complete Additional labor costs in the migration bubble include:  Staff time to plan and assess project scope and project plan to migrate applications and infrastructure  Retaining consulting partners with the expertise to streamline migration of applications and infrastructure as well as training staff with new skills  Due to the general lack of cloud experience for most organization s it is necessary to bring in outside consulting support to help guide the process  Process Penalty fees associated with early termination of contracts may be incurred (facilities software licenses etc) once applications or infrastructure are decommissioned  The cost of tooling to automate the migration of data and virtual machines from on premises to AWS  Technology Duplicate environments will be required to keep production applications/infrastructure available while transitioning to the AWS platform Cost considerations include:  Cost to maintain production environment during migration  Cost of AWS platform comp onents to run new cloud based applications  Licensing of automated migration tools license to accelerate the migration process 
ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 10 of 13 “I wanted to move to a model where we can deliver more to our citizens and r educe the cost of delivering those services to them I wanted a product line that has the ability to scale and grow with my department AWS was an easy fit for us and the way we do business” Chris Chiancone CIO City of McKinney City of McKinney City of McKinney Texas Turns to AWS to Deliver More Advanced Services for Less Money The City of McKinney Texas about 15 miles north of Dallas and home to 155000 people was ranked the No 1 Best Place to live in 2014 by Money Magazine The city’s IT department is going all in on AWS and uses the platform to run a wide range of services and applications such as its land management and records management systems By using AWS the city’s IT department can focus on delivering new and better services for its fast growing population and city employees instead of spending resources buying and maintaining IT infrastructure City of McKinney chose AWS for our ability to scale and grow with the needs of the city’s IT department AWS provides an easy fit for the way the city does business Without having to own the infrastructure the C ity of McKinney has the ability to use cloud resources to address business needs By moving from a CapEx to an OpEx model they can now return funds to critical city projects Migration Options Once y ou understand the current costs of an on premises production system the next step is to identify applications that will benefit from cloud cost and efficiencies Applications are either critical or strategic If they do not fit into either category they should be taken off the priority list Instead categorize these as legacy applications and determine if they need to be replaced or in some cases eliminated Figure 2 illustrates decision points that should be considered in ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 11 of 13 “A university is really a small city with departments running about 1000 diverse small services across at the university We made the decision to go down the cloud journey and have been working with AWS for the past 4 years In building our business case we wanted the ability to give our customers flexible IT services th at were cost neutral “We embraced a cloud first strategy with all new services a built in the cloud In parallel we are migrating legacy services to the AWS platform with the goal of moving 80% of these applications by the end of 2017” Mike Chapple P hD Senior Director IT Services Delivery University of Notre Dame selecting applications to move to the AWS platform focusing on the “6 Rs” — retire retain re host re platform re purchase and re factor Decommission Refactor for AWS Rebuild Application Architecture AWS VM Import Org/Ops Change Do Not Move Move the App Infrastructure Design Build AWS Lift and Shift (Minimal Change) Determine Migration 3rd Party Tools Impact Analysis Management Plan Identify Environment Process Manually Move App and Data Ops Changes Migration and UAT Testing Signoff Operate Discover Assess (Enterprise Architecture and Determine Migration Path Application Lift and Shift Determine Migration Process Plan Migration and Sequencing 3rd Party Migration Tool Tuning Cutover Applications) Vendor S/PaaS (if available) Move the Application Refactor for AWS Recode App Components Manually Move App and Data Architect AWS Environment Replatform (typically legacy applications) Rearchitect 
Application Recode Application and Deploy App Migrate Data Figure 2: Migration Options Applications that deliver increased ROI through reduced operation costs or deliver increased business results should be at the top of the priority list Then you can determine the best migration path for each workload to optimize cost in the migration process ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 12 of 13 Conclusion Many organizations are extending or moving their business applications to AWS to simplify infrastructure management deploy quicker provide greater availability increase agility allow for faster innovation and lower cost Having a clear understanding of existing infrastructure costs the components of your migration bubble and their corresponding costs and projected savings will help you calculate payback time and projected ROI With a long history in enabling enterprises to successfully adopt cloud computing Amazon Web Services delivers a mature set of services specifically designed for the unique security compliance privacy and governance requirements of large organizations With a technology platform that is both broad and deep Professional Services and Support organizations robust training programs and an ecosystem tens ofthousands strong AWS can help you move faster and do more With AWS you can:  Take advantage of more services storage options and security controls than any other cloud platform  Deliver on stringent standards with the broadest set of certifications accreditations and controls in the industry  Get deep assistance with our global cloud focused enterprise professional services support and training teams ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 13 of 13 Further Reading For additional help please consult the following sources:  The AWS Cloud Adoption Framework http://d0awsstaticcom/whitepapers/aws_cloud_adoption_frameworkp df Contributors The following individuals and organizations contributed to this document:  Blake Chism Practice Manager AWS Public Sector Sales Var  Carina Veksler Public Sector Solutions AWS Public Sector Sales Var
General
Amazon_Aurora_MySQL_Database_Administrators_Handbook_Connection_Management
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Aurora MySQL Database Administrato r’s Handbook Connection Management First Published January 2018 Updated October 20 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlContents Introduction 1 DNS endpoints 2 Connection handling in Aurora MySQL and MySQL 2 Common misconceptions 4 Best practices 5 Using smart drivers 5 DNS caching 7 Connection management and pooling 7 Connection scaling 9 Transaction management and autocommit 10 Connection handshakes 12 Load balancing with the reader endpoint 12 Designing for fault tolerance and quick recovery 13 Server configuration 14 Conclusion 16 Contributors 16 Further reading 16 Document revisions 17 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAbstract This paper outlines the best practices for managing database connections setting server connection parameters and configuring client programs drivers and connectors It’s a recommended read for Amazon Aurora MySQL Database Administrators (DBAs) and application developers This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 1 Introduction Amazon Aurora MySQL (Aurora MySQL) is a managed relational database engine wirecompatible with MySQL 56 and 57 Most of the drivers connectors and tools that you currently use with MySQL can be used with Aurora MySQL with little or no change Aurora MySQL database (DB) clusters provide advanced fe atures such as: • One primary instance that supports read/write operations and up to 15 Aurora Replicas that support read only operations Each of the Replicas can be automatically promoted to the primary role if the current primary instance fails • A cluster endpoint that automatically follows the primary instance in case of failover • A reader endpoint that includes all Aurora Replicas and is automatically updated when Aurora Replicas are added or removed • Ability to create custom DNS endpoints contain ing a user configured group of 
• Internal server connection pooling and thread multiplexing for improved scalability.
• Near-instantaneous database restarts and crash recovery.
• Access to near-real-time cluster metadata that enables application developers to build smart drivers, connecting directly to individual instances based on their read/write or read-only role.

Client-side components (applications, drivers, connectors, and proxies) that use suboptimal configuration might not be able to react to recovery actions and DB cluster topology changes, or the reaction might be delayed. This can contribute to unexpected downtime and performance issues. To prevent that and make the most of Aurora MySQL features, AWS encourages Database Administrators (DBAs) and application developers to implement the best practices outlined in this whitepaper.

DNS endpoints
An Aurora DB cluster consists of one or more instances and a cluster volume that manages the data for those instances. There are two types of instances:
• Primary instance – Supports read and write statements. Currently, there can be one primary instance per DB cluster.
• Aurora Replica – Supports read-only statements. A DB cluster can have up to 15 Aurora Replicas. The Aurora Replicas can be used for read scaling and are automatically used as failover targets in case of a primary instance failure.

Amazon Aurora supports the following types of Domain Name System (DNS) endpoints:
• Cluster endpoint – Connects you to the primary instance and automatically follows the primary instance in case of failover, that is, when the current primary instance is demoted and one of the Aurora Replicas is promoted in its place.
• Reader endpoint – Includes all Aurora Replicas in the DB cluster under a single DNS CNAME. You can use the reader endpoint to implement DNS round-robin load balancing for read-only connections.
• Instance endpoint – Each instance in the DB cluster has its own individual endpoint. You can use this endpoint to connect directly to a specific instance.
• Custom endpoints – User-defined DNS endpoints containing a selected group of instances from a given cluster.

For more information, refer to the Overview of Amazon Aurora page.

Connection handling in Aurora MySQL and MySQL
MySQL Community Edition manages connections in a one-thread-per-connection fashion. This means that each individual user connection receives a dedicated operating system thread in the mysqld process. Issues with this type of connection handling include:
• Relatively high memory use when there is a large number of user connections, even if the connections are completely idle
• Higher internal server contention and context-switching overhead when working with thousands of user connections

Aurora MySQL supports a thread pool approach that addresses these issues. You can characterize the thread pool approach as follows:
• It uses thread multiplexing, where a number of worker threads can switch between user sessions (connections). A worker thread is not fixed or dedicated to a single user session.
Whenever a connection is not active (for example, is idle, waiting for user input, waiting for I/O, and so on), the worker thread can switch to another connection and do useful work. You can think of worker threads as CPU cores in a multi-core system. Even though you only have a few cores, you can easily run hundreds of programs simultaneously because they're not all active at the same time. This highly efficient approach means that Aurora MySQL can handle thousands of concurrent clients with just a handful of worker threads.
• The thread pool automatically scales itself. The Aurora MySQL database process continuously monitors its thread pool state and launches new workers or destroys existing ones as needed. This is transparent to the user and doesn't need any manual configuration.

Server thread pooling reduces the server-side cost of maintaining connections. However, it doesn't eliminate the cost of setting up these connections in the first place. Opening and closing connections isn't as simple as sending a single TCP packet. For busy workloads with short-lived connections (for example, key-value or online transaction processing (OLTP)), consider using an application-side connection pool.

The following is a network packet trace for a MySQL connection handshake taking place between a client and a MySQL-compatible server located in the same Availability Zone:

04:23:29.547316 IP client.32918 > server.mysql: tcp 0
04:23:29.547478 IP server.mysql > client.32918: tcp 0
04:23:29.547496 IP client.32918 > server.mysql: tcp 0
04:23:29.547823 IP server.mysql > client.32918: tcp 78
04:23:29.547839 IP client.32918 > server.mysql: tcp 0
04:23:29.547865 IP client.32918 > server.mysql: tcp 191
04:23:29.547993 IP server.mysql > client.32918: tcp 0
04:23:29.548047 IP server.mysql > client.32918: tcp 11
04:23:29.548091 IP client.32918 > server.mysql: tcp 37
04:23:29.548361 IP server.mysql > client.32918: tcp 99
04:23:29.587272 IP client.32918 > server.mysql: tcp 0

This is a packet trace for closing the connection:

04:23:37.117523 IP client.32918 > server.mysql: tcp 13
04:23:37.117818 IP server.mysql > client.32918: tcp 56
04:23:37.117842 IP client.32918 > server.mysql: tcp 0

As you can see, even the simple act of opening and closing a single connection involves an exchange of several network packets. The connection overhead becomes more pronounced when you consider SQL statements issued by drivers as part of connection setup (for example, SET variable_name = value commands used to set session-level configuration). Server-side thread pooling doesn't eliminate this type of overhead.

Common misconceptions
The following are common misconceptions for database connection management.

• If the server uses connection pooling, you don't need a pool on the application side. As explained previously, this isn't true for workloads where connections are opened and torn down very frequently and clients run relatively few statements per connection. You might not need a connection pool if your connections are long-lived, meaning that connection activity time is much longer than the time required to open and close the connection. You can run a packet trace with tcpdump and see how many packets you need to open or close connections versus how many packets you need to run your queries within those connections. Even if the connections are long-lived, you can still benefit from using a connection pool to protect the database against connection surges, that is, large bursts of new connection attempts.
• Idle connections don't use memory. This isn't true because the operating system and the database process both allocate an in-memory descriptor for each user connection. What is typically true is that Aurora MySQL uses less memory than MySQL Community Edition to maintain the same number of connections. However, memory usage for idle connections is still not zero, even with Aurora MySQL. The general best practice is to avoid opening significantly more connections than you need.
• Downtime depends entirely on database stability and database features. This isn't true because the application design and configuration play an important role in determining how fast user traffic can recover following a database event. For more details, refer to the Best practices section of this whitepaper.

Best practices
The following are best practices for managing database connections and configuring connection drivers and pools.

Using smart drivers
The cluster and reader endpoints abstract the role changes (primary instance promotion and demotion) and topology changes (addition and removal of instances) occurring in the DB cluster. However, DNS updates are not instantaneous. In addition, they can sometimes contribute to a slightly longer delay between the time a database event occurs and the time it's noticed and handled by the application.

Aurora MySQL exposes near-real-time metadata about DB instances in the INFORMATION_SCHEMA.REPLICA_HOST_STATUS table. Here is an example of a query against the metadata table:

mysql> select server_id,
              if(session_id = 'MASTER_SESSION_ID', 'writer', 'reader') as role,
              replica_lag_in_milliseconds
       from information_schema.replica_host_status;
+-------------------+--------+-----------------------------+
| server_id         | role   | replica_lag_in_milliseconds |
+-------------------+--------+-----------------------------+
| aurora-node-usw2a | writer |                           0 |
| aurora-node-usw2b | reader |          19.253999710083008 |
+-------------------+--------+-----------------------------+
2 rows in set (0.00 sec)

Notice that the table contains cluster-wide metadata. You can query the table on any instance in the DB cluster.
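A smart driver or custom routing layer typically filters this metadata to pick healthy, read-only instances. The following query is a minimal sketch of that idea; it assumes the REPLICA_HOST_STATUS columns shown above plus a LAST_UPDATE_TIMESTAMP column, and the lag and freshness thresholds are arbitrary example values.

-- Example only: list Aurora Replicas that reported metadata recently
-- and whose replication lag is below an arbitrary threshold
SELECT server_id,
       replica_lag_in_milliseconds
FROM information_schema.replica_host_status
WHERE session_id <> 'MASTER_SESSION_ID'                   -- readers only
  AND last_update_timestamp > NOW() - INTERVAL 3 MINUTE   -- recently reporting
  AND replica_lag_in_milliseconds < 100                    -- example lag threshold
ORDER BY replica_lag_in_milliseconds;

Filtering on how recently an instance reported its metadata helps avoid routing new connections to an instance that is being removed or is otherwise unhealthy.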
For the purpose of this whitepaper, a smart driver is a database driver or connector with the ability to read DB cluster topology from the metadata table. It can route new connections to individual instance endpoints without relying on high-level cluster endpoints. A smart driver is also typically capable of load balancing read-only connections across the available Aurora Replicas in a round-robin fashion.

The MariaDB Connector/J is an example of a third-party Java Database Connectivity (JDBC) smart driver with native support for Aurora MySQL DB clusters. Application developers can draw inspiration from the MariaDB driver to build drivers and connectors for languages other than Java. Refer to the MariaDB Connector/J page for details.

The AWS JDBC Driver for MySQL (preview) is a client driver designed for the high availability of Aurora MySQL. It is drop-in compatible with the MySQL Connector/J driver and takes full advantage of the failover capabilities of Aurora MySQL. The driver maintains a cache of the DB cluster topology and each DB instance's role, either primary DB instance or Aurora Replica. It uses this topology to bypass the delays caused by DNS resolution so that a connection to the new primary DB instance is established as fast as possible. Refer to the AWS JDBC Driver for MySQL GitHub repository for details.

If you're using a smart driver, the recommendations listed in the following sections still apply. A smart driver can automate and abstract certain layers of database connectivity. However, it doesn't automatically configure itself with optimal settings or automatically make the application resilient to failures. For example, when using a smart driver, you still need to ensure that the connection validation and recycling functions are configured correctly, there's no excessive DNS caching in the underlying system and network layers, transactions are managed correctly, and so on.

It's a good idea to evaluate the use of smart drivers in your setup. Note that if a third-party driver contains Aurora MySQL–specific functionality, it doesn't mean that it has been officially tested, validated, or certified by AWS. Also note that, due to the advanced built-in features and higher overall complexity, smart drivers are likely to receive updates and bug fixes more frequently than traditional (bare-bones) drivers. You should regularly review the driver's release notes and use the latest available version whenever possible.

DNS caching
Unless you use a smart database driver, you depend on DNS record updates and DNS propagation for failovers, instance scaling, and load balancing across Aurora Replicas. Currently, Aurora DNS zones use a short Time-To-Live (TTL) of five seconds. Ensure that your network and client configurations don't further increase the DNS cache TTL. Remember that DNS caching can occur anywhere from your network layer, through the operating system, to the application container. For example, Java virtual machines (JVMs) are notorious for caching DNS indefinitely unless configured otherwise.

Here are some examples of issues that can occur if you don't follow DNS caching best practices:
• After a new primary instance is promoted during a failover, applications continue to send write traffic to the old instance. Data-modifying statements will fail because that instance is no longer the primary instance.
• After a DB instance is scaled up or down, applications are unable to connect to it. Due to DNS caching, applications continue to use the old IP address of that instance, which is no longer valid.
• Aurora Replicas can experience unequal utilization, for example, one DB instance receiving significantly more traffic than the others.

Connection management and pooling
Always close database connections explicitly, instead of relying on the development framework or language destructors to do it. There are situations, especially in container-based or code-as-a-service scenarios, when the underlying code container isn't immediately destroyed after the code completes. In such cases, you might experience database connection leaks, where connections are left open and continue to hold resources (for example, memory and locks).

If you can't rely on client applications (or interactive clients) to close idle connections, use the server's wait_timeout and interactive_timeout parameters to configure an idle connection timeout. The default timeout value is fairly high at 28,800 seconds (8 hours). You should tune it down to a value that's acceptable in your environment. Refer to the MySQL Reference Manual for details.
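As a minimal illustration, the statements below show how to check the current idle-timeout settings and lower them for the current session. The 300-second value is only an example; in Aurora MySQL, the cluster-wide defaults are normally changed through the DB cluster or DB instance parameter group rather than per session.

-- Inspect the current idle-timeout settings
SHOW SESSION VARIABLES WHERE Variable_name IN ('wait_timeout', 'interactive_timeout');

-- Lower the timeouts for this session only (example value: 5 minutes)
SET SESSION wait_timeout = 300;
SET SESSION interactive_timeout = 300;

Lowering these values helps the server clean up connections that clients abandoned without closing; it does not replace explicit connection closing in application code.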
Consider using connection pooling to protect the database against connection surges. Also consider connection pooling if the application opens large numbers of connections (for example, thousands or more per second) and the connections are short-lived, that is, the time required for connection setup and teardown is significant compared to the total connection lifetime. If your development framework or language doesn't support connection pooling, you can use a connection proxy instead.

Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (Amazon RDS) that makes applications more scalable, more resilient to database failures, and more secure. ProxySQL, MaxScale, and ScaleArc are examples of third-party proxies compatible with the MySQL protocol. Refer to the Connection scaling section of this document for more notes on connection pools versus proxies.

By using Amazon RDS Proxy, you can allow your applications to pool and share database connections to improve their ability to scale. Amazon RDS Proxy makes applications more resilient to database failures by automatically connecting to a standby DB instance while preserving application connections.

AWS recommends the following for configuring connection pools and proxies:
• Check and validate connection health when the connection is borrowed from the pool. The validation query can be as simple as SELECT 1. However, in Amazon Aurora you can also use connection checks that return a different value depending on whether the instance is a primary instance (read/write) or an Aurora Replica (read-only). For example, you can use the @@innodb_read_only variable to determine the instance role. If the variable value is TRUE, you're on an Aurora Replica. See the example after this list.
• Check and validate connections periodically, even when they're not borrowed. This helps detect and clean up broken or unhealthy connections before an application thread attempts to use them.
• Don't let connections remain in the pool indefinitely. Recycle connections by closing and reopening them periodically (for example, every 15 minutes), which frees the resources associated with these connections. It also helps prevent dangerous situations, such as runaway queries or zombie connections that clients have abandoned. This recommendation applies to all connections, not just idle ones.
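The following is a minimal sketch of such a validation check. It assumes the pool runs one lightweight statement each time a connection is borrowed; the exact hook depends on your pooling library.

-- Basic liveness check
SELECT 1;

-- Role-aware check: returns 0 on the primary (writer), 1 on an Aurora Replica (reader)
SELECT @@innodb_read_only;

A read/write pool can discard any borrowed connection for which @@innodb_read_only returns 1, which protects the application from writing on a connection that now points at a reader after a failover.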
Connection scaling
The most common technique for scaling web service capacity is to add or remove application servers (instances) in response to changes in user traffic. Each application server can use a database connection pool. This approach causes the total number of database connections to grow proportionally with the number of application instances. For example, 20 application servers configured with 200 database connections each would require a total of 4,000 database connections. If the application pool scales up to 200 instances (for example, during peak hours), the total connection count will reach 40,000.

Under a typical web application workload, most of these connections are likely idle. In extreme cases, this can limit database scalability: idle connections do take server resources, and you're opening significantly more of them than you need. Also, the total number of connections is not easy to control because it's not something you configure directly, but rather depends on the number of application servers.

You have two options in this situation:
• Tune the connection pools on application instances. Reduce the number of connections in the pool to the acceptable minimum. This can be a stop-gap solution, but it might not be a long-term solution as your application server fleet continues to grow.
• Introduce a connection proxy between the database and the application. On one side, the proxy connects to the database with a fixed number of connections. On the other side, the proxy accepts application connections and can provide additional features such as query caching, connection buffering, query rewriting/routing, and load balancing.

Connection proxies:
• Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon RDS that makes applications more scalable, more resilient to database failures, and more secure. Amazon RDS Proxy reduces the memory and CPU overhead for connection management on the database.
• Using Amazon RDS Proxy, you can handle unpredictable surges in database traffic that otherwise might cause issues due to oversubscribing connections or creating new connections at a fast rate. To protect the database against oversubscription, you can control the number of database connections that are created.
• Each RDS proxy performs connection pooling for the writer instance of its associated Amazon RDS or Aurora database. Connection pooling is an optimization that reduces the overhead associated with opening and closing connections and with keeping many connections open simultaneously. This overhead includes memory needed to handle each new connection. It also involves CPU overhead to close each connection and open a new one, such as Transport Layer Security/Secure Sockets Layer (TLS/SSL) handshaking, authentication, negotiating capabilities, and so on. Connection pooling simplifies your application logic: you don't need to write application code to minimize the number of simultaneous open connections. Connection pooling also cuts down on the amount of time a user must wait to establish a connection to the database.
• To perform load balancing for read-intensive workloads, you can create a read-only endpoint for RDS Proxy. That endpoint passes connections to the reader endpoint of the cluster. That way, your proxy connections can take advantage of Aurora read scalability.
• ProxySQL, MaxScale, and ScaleArc are examples of third-party proxies compatible with the MySQL protocol. For even greater scalability and availability, you can use multiple proxy instances behind a single DNS endpoint.

Transaction management and autocommit
With autocommit enabled, each SQL statement runs within its own transaction. When the statement ends, the transaction ends as well. Between statements, the client connection is not in transaction. If you need a transaction to remain open for more than one statement, you explicitly begin the transaction, run the statements, and then commit or roll back the transaction.

With autocommit disabled, the connection is always in transaction. You can commit or roll back the current transaction, at which point the server immediately opens a new one. Refer to the MySQL Reference Manual for details.

Running with autocommit disabled is not recommended because it encourages long-running transactions where they're not needed. Open transactions block a server's internal garbage collection mechanisms, which are essential to maintaining optimal performance. In extreme cases, garbage collection backlog leads to excessive storage consumption, elevated CPU utilization, and query slowness.

Recommendations:
• Always run with autocommit mode enabled. Set the autocommit parameter to 1 on the database side (which is the default) and on the application side (which might not be the default).
• Always double-check the autocommit settings on the application side. For example, Python drivers such as MySQLdb and PyMySQL disable autocommit by default.
• Manage transactions explicitly by using BEGIN/START TRANSACTION and COMMIT/ROLLBACK statements. You should start transactions when you need them and commit as soon as the transactional work is done. A short example follows this list.
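The statements below are a minimal sketch of this pattern. The table and column names are made up purely for illustration; the point is that autocommit stays enabled and the multi-statement transaction is opened and committed explicitly.

-- Confirm autocommit is enabled (expect 1)
SELECT @@autocommit;

-- Explicit, short-lived transaction (hypothetical tables)
START TRANSACTION;
UPDATE orders SET status = 'SHIPPED' WHERE order_id = 1001;
INSERT INTO order_events (order_id, event) VALUES (1001, 'SHIPPED');
COMMIT;   -- commit as soon as the transactional work is done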
Note that these recommendations are not specific to Aurora MySQL. They apply to MySQL and other databases that use the InnoDB storage engine.

Long transactions and garbage collection backlog are easy to monitor:
• You can obtain the metadata of currently running transactions from the INFORMATION_SCHEMA.INNODB_TRX table. The TRX_STARTED column contains the transaction start time, and you can use it to calculate transaction age. A transaction is worth investigating if it has been running for several minutes or more. Refer to the MySQL Reference Manual for details about the table.
• You can read the size of the garbage collection backlog from InnoDB's trx_rseg_history_len counter in the INFORMATION_SCHEMA.INNODB_METRICS table. Refer to the MySQL Reference Manual for details about the table. The larger the counter value is, the more severe the impact might be in terms of query performance, CPU usage, and storage consumption. Values in the range of tens of thousands indicate that the garbage collection is somewhat delayed. Values in the range of millions or tens of millions might be dangerous and should be investigated. The queries at the end of this section show both checks.

Note – In Amazon Aurora, all DB instances use the same storage volume, which means that the garbage collection is cluster-wide and not specific to each instance. Consequently, a runaway transaction on one instance can impact all instances. Therefore, you should monitor long transactions on all DB instances.
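The following queries are a minimal sketch of both checks and can be run on any instance in the cluster; the five-minute threshold is an arbitrary example.

-- Transactions that have been open for more than five minutes (example threshold)
SELECT trx_id,
       trx_mysql_thread_id,
       trx_started,
       TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_seconds
FROM information_schema.innodb_trx
WHERE trx_started < NOW() - INTERVAL 5 MINUTE
ORDER BY trx_started;

-- Current garbage collection (purge) backlog
SELECT name, count
FROM information_schema.innodb_metrics
WHERE name = 'trx_rseg_history_len';

If the transaction age or the counter value looks suspicious, you can map trx_mysql_thread_id to a session in SHOW PROCESSLIST to identify the offending client.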
Connection handshakes
A lot of work can happen behind the scenes when an application connector or a graphical user interface (GUI) tool opens a new database session. Drivers and client tools commonly run a series of statements to set up session configuration (for example, SET SESSION variable = value). This increases the cost of creating new connections and delays when your application can start issuing queries.

The cost of connection handshakes becomes even more important if your applications are very sensitive to latency. OLTP or key-value workloads that expect single-digit millisecond latency can be visibly impacted if each connection is expensive to open. For example, if the driver runs six statements to set up a connection and each statement takes just one millisecond to run, your application will be delayed by six milliseconds before it issues its first query.

Recommendations:
• Use the Aurora MySQL Advanced Audit, the General Query Log, or network-level packet traces (for example, with tcpdump) to obtain a record of statements run during a connection handshake. Whether or not you're experiencing connection or latency issues, you should be familiar with the internal operations of your database driver.
• For each handshake statement, you should be able to explain its purpose and describe its impact on queries you'll subsequently run on that connection.
• Each handshake statement requires at least one network round trip and will contribute to higher overall session latency. If the number of handshake statements appears to be significant relative to the number of statements doing actual work, determine if you can disable any of the handshake statements. Consider using connection pooling to reduce the number of connection handshakes.

Load balancing with the reader endpoint
Because the reader endpoint contains all Aurora Replicas, it can provide DNS-based round-robin load balancing for new connections. Every time you resolve the reader endpoint, you get an instance IP that you can connect to, chosen in round-robin fashion. DNS load balancing works at the connection level (not the individual query level). You must keep resolving the endpoint without caching DNS to get a different instance IP on each resolution. If you only resolve the endpoint once and then keep the connection in your pool, every query on that connection goes to the same instance. If you cache DNS, you receive the same instance IP each time you resolve the endpoint.

You can use Amazon RDS Proxy to create additional read-only endpoints for an Aurora cluster. These endpoints perform the same kind of load balancing as the Aurora reader endpoint. Applications can reconnect more quickly to the proxy endpoints than to the Aurora reader endpoint if reader instances become unavailable.

If you don't follow best practices, these are examples of issues that can occur:
• Unequal use of Aurora Replicas, for example, one of the Aurora Replicas is receiving most or all of the traffic while the other Aurora Replicas sit idle.
• After you add or scale an Aurora Replica, it doesn't receive traffic, or it begins to receive traffic after an unexpectedly long delay.
• After you remove an Aurora Replica, applications continue to send traffic to that instance.

For more information, refer to the DNS endpoints and DNS caching sections of this document. A simple way to spot these issues is shown below.
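One quick way to verify how new connections are being distributed is to open several fresh connections through the reader endpoint and record which instance each one landed on. The statement below is a minimal sketch; @@aurora_server_id reports the identifier of the instance that served the connection.

-- Run on each newly opened connection through the reader endpoint
SELECT @@aurora_server_id AS instance, @@innodb_read_only AS is_reader;

Repeating this from several new connections (not from the same pooled connection) should show the instance name changing across the available Aurora Replicas. If every new connection reports the same instance, DNS caching or connection reuse is likely interfering with round-robin balancing.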
Designing for fault tolerance and quick recovery
In large-scale database operations, you're statistically more likely to experience issues such as connection interruptions or hardware failures. You must also take operational actions more frequently, such as scaling, adding, or removing DB instances and performing software upgrades. The only scalable way of addressing this challenge is to assume that issues and changes will occur and design your applications accordingly.

Examples:
• If Aurora MySQL detects that the primary instance has failed, it can promote a new primary instance and fail over to it, which typically happens within 30 seconds. Your application should be designed to recognize the change quickly and without manual intervention.
• If you create additional Aurora Replicas in an Aurora DB cluster, your application should automatically recognize the new Aurora Replicas and send traffic to them.
• If you remove instances from a DB cluster, your application should not try to connect to them.

Test your applications extensively and prepare a list of assumptions about how the application should react to database events. Then experimentally validate the assumptions. If you don't follow best practices, database events (for example, failovers, scaling, and software upgrades) might result in longer-than-expected downtime. For example, you might notice that a failover took 30 seconds (per the DB cluster's event notifications), but the application remained down for much longer.

Server configuration
There are two major server configuration variables worth mentioning in the context of this whitepaper: max_connections and max_connect_errors.

Configuration variable max_connections
The configuration variable max_connections limits the number of database connections per Aurora DB instance. The best practice is to set it slightly higher than the maximum number of connections you expect to open on each instance.

If you also enabled performance_schema, be extra careful with the setting. The Performance Schema memory structures are sized automatically based on server configuration variables, including max_connections. The higher you set the variable, the more memory Performance Schema uses. In extreme cases, this can lead to out-of-memory issues on smaller instance types.

Note for T2 and T3 instance families: Using Performance Schema on T2 and T3 Aurora DB instances with less than 8 GB of memory isn't recommended. To reduce the risk of out-of-memory issues on T2 and T3 instances:
• Don't enable Performance Schema.
• If you must use Performance Schema, leave max_connections at the default value.
• Disable Performance Schema if you plan to increase max_connections to a value significantly greater than the default value.

Refer to the MySQL Reference Manual for details about the max_connections variable.
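As a quick sanity check, you can compare the configured limit with the current and peak connection counts on each instance. This is a minimal sketch; on Aurora, the limit itself is changed through the DB parameter group rather than with SET GLOBAL.

-- Configured limit on this instance
SHOW GLOBAL VARIABLES LIKE 'max_connections';

-- Current and historical peak connection counts
SHOW GLOBAL STATUS
WHERE Variable_name IN ('Threads_connected', 'Max_used_connections');

If Max_used_connections regularly approaches max_connections, either reduce the number of connections your fleet opens (for example, with pooling or a proxy) or raise the limit, keeping the Performance Schema memory note above in mind.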
Configuration variable max_connect_errors
The configuration variable max_connect_errors determines how many successive interrupted connection requests are permitted from a given client host. If the client host exceeds the number of successive failed connection attempts, the server blocks it. Further connection attempts from that client yield an error:

Host 'host_name' is blocked because of many connection errors.
Unblock with 'mysqladmin flush-hosts'

A common (but incorrect) practice is to set the parameter to a very high value to avoid client connectivity issues. This practice isn't recommended because it:
• Allows application owners to tolerate connection problems rather than identify and resolve the underlying cause. Connection issues can impact your application health, so they should be resolved rather than ignored.
• Can hide real threats, for example, someone actively trying to break into the server.

If you experience "Host is blocked" errors, increasing the value of the max_connect_errors variable isn't the correct response. Instead, investigate the server's diagnostic counters in the aborted_connects status variable and the host_cache table. Then use the information to identify and fix clients that run into connection issues. Also note that this parameter has no effect if skip_name_resolve is set to 1 (the default).

Refer to the MySQL Reference Manual for details on the following:
• max_connect_errors variable
• "Host is blocked" error
• aborted_connects status variable
• host_cache table

Conclusion
Understanding and implementing connection management best practices is critical to achieve scalability, reduce downtime, and ensure smooth integration between the application and database layers. You can apply most of the recommendations provided in this whitepaper with little to no engineering effort. The guidance provided in this whitepaper should help you introduce improvements in your current and future application deployments using Aurora MySQL DB clusters.

Contributors
Contributors to this document include:
• Szymon Komendera, Database Engineer, Amazon Aurora
• Samuel Selvan, Database Specialist Solutions Architect, Amazon Web Services

Further reading
For additional information, refer to:
• Aurora on Amazon RDS User Guide
• Communication Errors and Aborted Connections in the MySQL Reference Manual

Document revisions
October 20, 2021 – Minor content updates to follow the new style guide and hyperlinks.
July 2021 – Minor content updates to the following topics: Smart Drivers, Connection Management and Pooling, and Connection Scaling.
March 2019 – Minor content updates to the following topics: Introduction, DNS Endpoints, and Server Configuration.
January 2018 – First publication.
General
A_Platform_for_Computing_at_the_Mobile_Edge_Joint_Solution_with_HPE_Saguna_and_AWS
A Platform for Computing at the Mobile Edge: Joint Solution with HPE, Saguna, and AWS
February 2018

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2018 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
The Business Case for Multi-Access Edge Computing
MEC Addresses the Need for Localized Cloud Services
MEC Leverages the Capabilities Inherent in Mobile Networks
MEC Provides a Standards-Based Solution that Enables an Ecosystem of Edge Applications
Mobile Edge Solution Overview
Example Reference Architectures for Edge Applications
Smart City Surveillance
AR/VR Edge Applications
Connected Vehicle (V2X)
Conclusion
Contributors
Appendix
Infrastructure Layer
Application Enablement Layer

Abstract
This whitepaper is written for communication service providers with network infrastructure, as well as for application developers and technology suppliers who are exploring applications that can benefit from edge computing. In this paper, we establish the value of a standards-based computing platform at the mobile network edge, describe use cases that are well suited for this platform, and present a reference architecture based on the solutions offered by AWS, Saguna, and HPE. A subset of use cases is reviewed in detail to illustrate how the reference architecture can be adapted as a platform to serve use case–specific requirements.

Introduction
Imagine a world where cars can alert drivers about dangerous road conditions to help them take action to avoid collisions, and where devices can help fleets of cars drive autonomously and predict traffic patterns. Consider a new Industrial Revolution where Internet of Things (IoT) devices or sensors report data collected in real time from large and small machines, allowing for intelligent automation and orchestration in industries such as manufacturing, agriculture, healthcare, and logistics. Envision city and public services that provide intelligent parking, congestion management, pollution detection and mitigation, emergency response, and security. While this is happening, internet users access bandwidth of 10 times the current maximums and latencies at 1/100th of current averages, using a seamless combination of mobile, Wi-Fi, and fixed access. Fifth-generation mobile network (5G) applications are enabling these scenarios by providing 10 times the current bandwidth maximum and 1/100th of current latency averages.

This new generation of applications is fueling technological developments and creating new business opportunities for mobile operators. One such technological and business development, which is key to enabling many new-generation applications, is "edge computing."
Edge computing addresses the latency requirements of specialized 5G applications, helps manage the potentially exorbitant access cost and network load due to fast-growing data demand, and supports data localization where necessary. By providing a cloud-enabled platform for edge computing, mobile operators are well positioned to take a leading role in the 5G ecosystem while opening up completely new business cases and revenue streams.

This whitepaper presents a solution that allows you to leverage the infrastructure of your existing mobile networks and establish a platform to enable new revenue-generating applications and 5G use cases.

The Business Case for Multi-Access Edge Computing
Multi-Access Edge Computing (MEC) is a cloud-based IT service environment at the edge infrastructure of networks that serves multiple channels of telecommunications access, for example, mobile wide area networks, Wi-Fi or LTE-based local area networks, and wireline. In this section, we discuss the many benefits of a MEC platform that sits at the edge of the cellular mobile network.

MEC Addresses the Need for Localized Cloud Services
The agility, scalability, elasticity, and cost efficiencies of cloud computing have made it the platform of choice for application development and delivery. IoT applications need local cloud services that operate close to connected devices to improve the economics of telemetry data processing, to minimize latency for time-critical applications, and to ensure that sensitive information is protected locally.

MEC Leverages the Capabilities Inherent in Mobile Networks
Mobile networks have expanded to the point where they offer coverage in most countries around the world. These networks combine wireless access, broadband capacity, and security.

MEC Provides a Standards-Based Solution that Enables an Ecosystem of Edge Applications
MEC transforms mobile communication networks into distributed cloud computing platforms that operate at the mobile access network. Strategically located in proximity to end users and connected devices, MEC enables mobile operators to open their networks to new differentiated services while providing application developers and content providers access to Edge Cloud benefits.

The ETSI MEC Industry Specification Group (ISG) has defined the first set of standardized APIs and services for MEC. The standard is supported by a wide range of industry participants, including leading mobile operators and industry vendors. Both HPE and Saguna are active members in the ETSI ISG. In the following sections, we outline the key benefits provided by MEC.

Extremely Low Latency
Traditional internet-based cloud environments have physical limitations that prevent you from hosting applications that require extremely low latency. Alternatively, MEC provides a low-latency cloud computing environment for edge applications by operating close to end users and connected IoT devices.

Broadband Delivery
Video content is typically delivered using TCP streams. When network latency is compounded by congestion, users experience annoying delays due to the drop in bitrate. The MEC environment provides low latency and minimal jitter, which creates a broadband highway for streaming at high bitrates.
Economical and Scalable
In massive IoT use cases, many devices, such as sensors or cameras, send vast amounts of data upstream, which current backhaul networks cannot support. MEC provides a cloud computing environment at the network edge where IoT data can be aggregated and processed locally, thus significantly reducing upstream data. MEC infrastructure can scale as you grow, by expanding local capacity or by deploying additional edge clouds in new locations.

Privacy and Security
By deploying the MEC Edge Cloud locally, you can ensure that your private data stays on premises. However, unlike server-based on-premises installations, MEC is a fully automated edge cloud environment with centralized management.

Role of MEC in 5G
MEC enables the ultra-low-latency use cases specified as part of the 5G network goals. MEC also enables fast delivery of data and the connection of billions of devices, while allowing for cost economization related to transporting enormous volumes of data from user devices and IoT over the backhaul network. It is important to note that MEC is currently deployed in 4G networks. By deploying this standards-based technology in existing networks, communication service providers can benefit from MEC today while creating an evolutionary path to their next-generation 5G network.

Mobile Edge Solution Overview
Saguna has developed a MEC virtualized radio access network (vRAN) solution that runs on Hewlett Packard Enterprise (HPE) edge infrastructure. This solution lets application developers create mobile edge applications using AWS services, while allowing mobile operators to effectively deploy MEC and operate edge applications within their mobile network.

Figure 1: End-to-end MEC solution architecture

The proposed mobile edge solution consists of three main layers, as illustrated in Figure 1:
• Edge Infrastructure Layer – Based on the powerful x86 compute platform, this layer provides compute, storage, and networking resources at edge locations. It supports a wide range of deployment options, from RAN base station sites to backhaul aggregation sites and regional branch offices.
• MEC Layer – This layer lets you place an application within a mobile access network and provides a number of services, including mobile traffic breakout and steering, registration and certification services for applications deployed at the edge, and radio network information services. It also provides optional integration points with mobile core network services, such as charging and lawful intercept.
• Application Enablement Layer – This layer provides tools and frameworks to build, deploy, and maintain edge-assisted applications. It allows you to place certain application modules locally at the edge (e.g., latency-critical or bandwidth-hungry components) while keeping other application functions in the cloud.

The flexible design inherent in the MEC solution architecture allows you to scale the edge component to fit the needs of concrete use cases. You can deploy the edge component at the deepest edge of the mobile network (e.g., colocated with eNodeB equipment at a RAN site), which lets you deploy low-latency and bandwidth-demanding application components in close proximity to end devices. You can also deploy an edge component at any traffic aggregation point between a base station and the mobile core, which allows you to serve traffic from multiple base stations.

The proposed mobile edge platform provides a variety of tools to build,
deploy, and manage edge-assisted applications, such as:
• Development libraries and frameworks spanning edge to cloud, including function-as-a-service at the edge and in the cloud, AI frameworks for creating and training models in the cloud with seamless deployment and inference at the edge, and communication brokerage between edge application services and the cloud. These development libraries and frameworks expose well-defined APIs and have been widely adopted in the developer community, shortening the learning curve and accelerating time to market for edge-assisted applications and use cases.
• Tools to automate deployment and lifecycle management of edge application components throughout massively distributed edge infrastructure.
• Infrastructure services, such as virtual infrastructure services at the edge, traffic steering policies at the edge, DNS services, radio awareness services, and integration of the edge platform into the overall network function virtualization (NFV) framework of the mobile operator.
• Diverse compute resources fitted to the particular needs of edge applications, such as CPU, GPU for acceleration of graphics-intensive or AI workloads, FPGA accelerators, cryptographic and data compression accelerators, etc.

This unique combination of functionalities lets you quickly develop edge applications, deploy and manage edge infrastructure and applications at scale, and achieve a fast time to market with edge-enabled use cases.

Example Reference Architectures for Edge Applications
A mobile edge platform enables new application behaviors. By adding the ability to run certain components and application logic at the mobile network edge, in close proximity to the user devices/clients, the mobile edge platform allows you to reengineer the functional split between client and application servers and enables a new generation of application experiences.

The following list provides examples of possible mobile edge computing applications in the industrial, automotive, public, and consumer domains:
• Industrial
  o Next-generation augmented reality (AR) wearables (e.g., smart glasses)
  o IoT for automation, predictive maintenance
  o Asset tracking
• Automotive
  o Driverless cars
  o Connected vehicle-to-vehicle or vehicle-to-infrastructure (V2X)
• Smart Cities
  o Surveillance cameras
  o Smart parking
  o Emergency response management
• Consumer Enhanced Mobile Broadband
  o Next-generation Augmented Reality/Virtual Reality (AR/VR) and video analytics
  o Social media high-bandwidth media sharing
  o Live event streaming
  o Gaming

In the following sections, we provide examples of how the mobile edge solution can be implemented for smart city surveillance, AR/VR edge applications, and connected vehicles (V2X).

Smart City Surveillance
Cities can take advantage of IoT technologies to increase the safety, security, and overall quality of life for residents and keep operational costs down. For example, video recognition technology enables real-time situational analysis (also called "video as a sensor"), which allows you to detect a variety of objects from a video feed (e.g., people, vehicles, personal items), recognize the overall situation (e.g., a traffic jam, fight, trespassing, or abandoned objects), and classify recognized objects (e.g., faces, license plates).

The mobile edge solution enables new abilities in building robust and cost-efficient smart city surveillance systems:
• Efficient video processing at the edge –
Computer vision systems in general require high-quality video input (especially for extracting advanced attributes) and hardware acceleration of inference models. The mobile edge solution lets you host a computing environment at the network edge. This lets you offload backhaul networks and cloud connectivity from bandwidth-hungry, high-resolution video feeds, and allows low-latency actions based on recognition results (e.g., opening gates for recognized vehicles or people, or controlling traffic with adaptive traffic lights). The mobile edge platform provides industry-standard GPU resources to accelerate video recognition and any other artificial intelligence (AI) models deployed at the edge.
• Flexible access network – End-to-end smart city surveillance systems might leverage different means to generate video input, such as existing fixed surveillance cameras, mobile wearable cameras (e.g., for law enforcement services or first responders), and drone-mounted mobile surveillance. The diversity of endpoints generating video input requires a high degree of flexibility from the access network – leveraging fixed video networks and mobile cellular networks with native mobility support for wearable or unmanned aerial vehicle (UAV)–mounted cameras. Additionally, automated drone-mounted systems require low-latency access to control the flight of the drone, which might require end-to-end latencies on a millisecond scale. The mobile edge platform provides a means to use robust, low-latency cellular access with native mobility support for the latter cases, and it incorporates existing fixed video networks.
• Flexible video recognition models – Robust video recognition AI models usually require extensive training on sample sets of objects and events, as well as periodic tuning (or development of models for extracting new attributes). These compute-intensive tasks use highly scalable, lower-cost compute cloud resources. However, seamless deployment of the trained models to the edge for execution, and managing the lifecycle of the deployed models, is a complex operational task. The mobile edge platform provides a seamless development and operational experience, starting from creating, training, and tuning an AI model in the cloud, to deploying it at edge locations and managing the lifecycle of the deployed models.

The following diagram shows an example architecture of a smart city surveillance edge application:

Figure 2: Edge-assisted smart city surveillance application

A smart city surveillance solution has three main domains:
• Field domain – A diverse ecosystem of video-producing devices, e.g., body-worn cameras from first responder units, drones, fixed video surveillance systems, and wireless fixed cameras. Video feeds are ingested into the mobile edge platform via cellular connectivity and use existing video networks.
• Edge sites – Located in close proximity to the video-generating devices; they host latency-sensitive services (e.g., UAV flight control, local alerts processing), bandwidth-hungry, compute-intensive applications (edge inference), and gateway functionalities for video infrastructure control (camera management). Video services extract target attributes from the video streams and share metadata with local alerting services and cloud services. Video services at the edge can also produce a low-resolution video proxy or sampling videos for transferring only the videos of interest to the cloud.
• Cloud domain – Hosts centralized, non-latency-critical functions, such as device and service management functions, AAA and policies, and command and control center functions, as well as compute-intensive, non-latency-critical tasks of AI model training.

You can augment a MEC smart city surveillance application with machine learning (ML) and inference models via:
• Model training (for surveillance patterns of interest, e.g., facial recognition, person counts, dwell time analysis, heat maps, activity detection) using deep learning AMIs on the AWS Cloud
• Deployment of trained models to the MEC platform's application container using AWS Greengrass and Amazon SageMaker
• Application of inference logic (e.g., alerts or alarms based on select pattern detection) using AWS Greengrass ML inference

Figure 3: Detailed view of solution for smart city surveillance application

This design approach, based on the mobile edge platform, is a cost-efficient way of building and operating a smart city surveillance system with edge processing for bandwidth-hungry and latency-sensitive services.

AR/VR Edge Applications
AR/VR is one of the use cases that benefits most from a mobile edge platform. AR/VR edge applications can benefit from the mobile edge platform in the following ways:
• Next-generation AR wearables
Current immersive AR experiences require heavy processing on the client side (e.g., calculating head and eye position and motion information from tracking sensors, rendering high-quality 3D graphics for the AR experience, and running video recognition models). The requirement to run heavy computations on AR devices (e.g., head-mounted displays, smart glasses, smartphones) has influenced the characteristics of these devices: cost, size, weight, battery life, and overall aesthetic appeal.

Figure 4: Next-generation AR devices

You can avoid bulkiness, cost, weight, ergonomic, and aesthetic limitations on the devices by offloading the heaviest computational tasks from the devices to a remote server or cloud. However, a truly immersive AR experience requires keeping coherence between AR content and the surrounding physical world with an end-to-end latency below 10 ms, which is unachievable by offloading to a traditional centralized cloud. The mobile edge platform provides compute power at the network edge, which allows you to offload latency-critical functions from the AR device to the network and enables the next generation of lightweight, compact devices with longer battery life and native mobility.
• Mission-critical operations
AR experiences have been valuable in workforce enablement applications, with remote collaboration applications, AR-assisted maintenance in the industrial space, etc. In many cases, those AR experiences have become an important part of mission-critical operations, for example, AR-assisted maintenance of equipment in hazardous conditions (e.g., oil extraction sites, refineries, and mines) and AR-assisted healthcare. Those use cases require high reliability from the AR application, even when global connectivity from the client to the server side is degraded or broken. The mobile edge platform provides the capability to re-engineer an AR application in a way that the solution can operate offline, with critical components deployed both locally, in close proximity to devices, and globally in the cloud as a fallback option.
• Localized data processing
In many cases, AR
devices combine data from different local sources (e.g., adding live sensor readings from a local piece of equipment to an AR maintenance application). In many cases, ingesting data into the cloud requires high bandwidth and is governed by data security or privacy frameworks. A true AR experience requires localized data processing and ingest. The mobile edge platform allows you to ingest data from any local source into the AR application, as well as execute commands from the AR application to the local data sources (e.g., perform equipment maintenance tasks).

The following diagram shows an example architecture for an AR edge application.

Figure 5: Edge-assisted AR application

The edge-assisted AR application has three main domains:
• Ultra-thin client (e.g., head-mounted display) – Generates sensor readings of head and eye position, location, and other relevant data, such as a live video feed from embedded cameras.
• Edge services – Part of the AR backend hosted in close proximity to the client, on the network side. These services execute latency-critical functions (computing positioning and tracking from AR sensor readings, AR graphics rendering), bandwidth-hungry functions (e.g., computer vision models for video recognition), and local data processing (e.g., IoT sensor readings from localized equipment).
• Cloud services – Part of the AR backend hosted in a traditional centralized cloud. These services execute functions that are centralized in nature (e.g., authentication and policies, command and control center, and AR model repository), resource-hungry, non-latency-critical functions (computer vision model training), and horizontal cross-enterprise functions (e.g., data lakes, integration points with other enterprise systems, etc.).

This design approach allows clients to offload heavy computations, which makes client devices cost-efficient, lightweight, and battery-efficient. It also allows local data to be ingested from external sources and control actions to be issued to local systems, enables offline operation, saves the cost of WAN connectivity, and secures compliance with potential data localization guidelines. By working as an integrated part of the mobile network, this use case natively supports global mobility and telco-grade reliability and security.

Connected Vehicle (V2X)
Connectivity between vehicles, pedestrians, roadside infrastructure, and other elements in the environment is enabling a tectonic shift in transportation. The full promise of V2X solutions can only be realized with a new generation of mobile edge applications:
• Transportation safety – V2X promises the ability to coordinate actions between vehicles sharing the road (this ability is sometimes called "Cooperative Cruise Control"). Information exchange between connected vehicles about intention to change speed or trajectory can significantly improve the safety and robustness of automated or autonomous driving through cooperative maneuvering. However, due to the very dynamic nature of car traffic, these decisions must be made in near real time (with end-to-end latencies on a millisecond time scale). The massively distributed nature of road infrastructure, near-real-time decision making, and the requirements for high-speed mobility make the mobile edge platform perfect for hosting the distributed logic of cooperative driving.
• Transportation efficiency – Cooperative driving promises not only increased safety on the road, but also a
significant boost in transportation efficiency With coordinated vehicle maneuvers the overall capacity of road infrastructure can increase without significant investment in road reconstruction The promise of higher transportation efficiency is further supported by v ehicle toinfrastructure solutions Vehicles can communicate with roadside equipment for speed guidance to coordinate traffic light changes and to reserve parking lots While some information requires only short range communication (eg from a vehicle to a r oadside unit) the coordinated actions of a distributed infrastructure (eg coordinating traffic light changes between multiple intersections) req uires the mobile edge platform to host the logic • Transportation experience – With autonomous driving technologies car infotainment system s are becoming more widespread The mobile edge platform enables the unique possibility of massively distributed content caching with high localization and context awareness as well as the ability to enable location and context based inter actions with vehicle passengers (eg guidance about local ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 14 attractions for travelers time and location limited promotions from local vendors etc) The following diagram shows an example architecture of a V2X edge application Figure 6: Edge assisted connected vehible (V2X) application The V2X solution has three main domains: • Field domain – V ehicle s that generat e data about intended driving maneuvers (eg braking lane change s turn s acceleration) and receive notifications from surroun ding vehicles Road infrastructure that includes all sensors and actuators that are relevant to the driving experience ( eg wind and temperature sensors street lighting connected traffic lights that are controlled via gateway devices such as Road Side Unit) • Edge sites – L ocated in close proximity to the road (eg respective RAN eNodeB sites) and host latency sensitive or highly localized V2X application services Examples of those services include processing and relaying driving maneuver notification s for vehicle coordination processing local sensor readings from road infrastructure dynamic generation of control commands to road infrastructure (eg coordinated traffic lights across several intersections) and caching highly localized infotainment content • Cloud domain – Host s centralized and non latency critical functions such as AAA and policy control historical data collection and ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 15 processing command and control center functions and centralized infotainment content origin With this design approach you can realize low latency and a coordinated exchange of data and control commands between vehicles and surrounding infrastructure This provides a highly specific context for every interaction Conclusion Many technological and market developments are converging to create an opportunity for new applications that take advantage of modern mobile networks and the edge access infrastructure This paper emphasizes the need for an application enablement ecosystem approach and presents a platform to serve multiple edge use cases Contributors The following individuals and organizations contributed to this document: • Shoma Chakravarty WW Technical Leader Telecom Amazon Web Services • Tim Mattison Partner Solution s Architect Amazon Web Services • Alex Rez nik Enterprise Solution Architect and ETSI MEC Chair HPE • Rodion Naurzalin Lead 
• Tally Netzer, Marketing Leader, Saguna
• Danny Frydman, CTO, Saguna

Appendix

This Appendix gives a more detailed overview of the functional components of the proposed mobile edge platform solution, as well as the technical characteristics of each component. Figure 7 illustrates a functional diagram of the mobile edge platform:

Figure 7: Mobile edge platform functional diagram

Infrastructure Layer

The physical infrastructure for a MEC node is based on an edge-optimized, converged HPE Edgeline EL4000 platform (Figure 8).

Figure 8: HPE Edgeline EL4000 chassis and four m710x cartridges

The end-to-end MEC solution gives you the ability to place workloads within any segment of your mobile access network, for example at a RAN site, backhaul aggregation hub, or CRAN hub. The HPE Edgeline EL4000 has been optimized for the MEC solution as follows:

Compute Density

The Edgeline EL4000 hosts up to four hot-swap SoC cartridges in a 1U chassis, providing up to 64 Xeon D cores with optimized price/core and watt/core characteristics. That design provides 2x–3x higher compute density compared to a typical traditional data center platform while keeping power consumption low. These characteristics allow an operator to place a MEC node based on the Edgeline EL4000 at the deepest edge of the access network, down to a RAN site, where space and power constraints make other general-purpose compute platforms inefficient.

Workload-Specific Compute

The diversity of MEC use cases requires that the underlying infrastructure be able to provide different types of compute resources. The Edgeline EL4000 platform provides diverse compute and hardware acceleration capabilities, which allows you to co-locate workloads with different compute needs:

• x86 processors that serve general workloads. Typical workload examples include a Virtual Network Function, a virtualized edge application enablement platform, and applications that provide fast control actions at the edge for low-latency use cases.

• Built-in GPU that accelerates graphics processing. Typical workload examples are video transcoding at the edge for MEC-assisted content distribution and 3D graphics rendering at the edge for AR/VR streaming applications.

• Plug-in dedicated GPU cards that accelerate deep learning algorithms. Enabled by a strategic partnership with NVIDIA, the Edgeline platform can be used for deep learning hardware acceleration at the edge. Typical workload examples include video analytics and computer vision at the edge, and ML inference at the edge for anomaly detection and predictive maintenance.

• Built-in acceleration of cryptographic operations with QuickAssist Technology (e.g., accelerating cryptographic or data compression workloads).

• Support for up to four PCIe extension slots in a single chassis, which provides options for specialized plug-in units such as dedicated FPGA boards, neuromorphic chips, etc. Such specialized hardware acceleration is being evaluated for many network function workloads (such as RAN baseband processing) and applications (efficient deep learning inference).

Physical and Operational Characteristics

A MEC node should be ready to operate at physical sites traditionally used for hosting telco purpose-built appliances that are optimized for the physical site environment (e.g., radio base station equipment at RAN sites, access routers at traffic hubs, etc.). The operational environment of MEC node sites may be very different from the traditional data center, with limited physical space for equipment hosting, consumer-grade climate control, and limited physical accessibility. The Edgeline EL4000 is optimized to operate in such environments, with operational characteristics comparable to telco purpose-built appliances:

Parameter | RAN Baseband Appliance | Typical Data Center Platform | Edgeline EL4000
Operating Temperature (°C) | 0 to +50 | +10 to +35 | 0 to +55
Non-Destructive Shock Tolerance (G) | 30 | 2 | 30
Expected Mean Time Between Failures (MTBF, years) | 30–35 | 10–15 | >35

On top of these enhanced operational characteristics, the Edgeline EL4000 exposes an open iLO interface for the management of a highly distributed infrastructure of MEC nodes. The iLO interface is compliant with the Redfish industry standard and exposes infrastructure management functions via a simple RESTful service.

Saguna OpenRAN Components Overview

The MEC platform layer is based on the Saguna OpenRAN solution and consists of the following functions:

• Saguna vEdge function, located within a MEC node
• Saguna vGate function (optional), located at the core network site
• Saguna OMA function (optional), located within a MEC node or at the aggregation point of several MEC nodes

Saguna vEdge resides in the MEC node and enables services and applications to operate inside the mobile RAN by providing MEC services such as registration and certification, Traffic Offload Function (TOF), real-time Radio Network Information Services (RNIS), and optional DNS services. The virtualized software node is deployed in the RAN, on a server at a RAN site or at an aggregation point of mobile backhaul traffic. It may serve single or multiple eNodeB base stations and small cells. It can easily be extended to support WiFi and other communications standards in heterogeneous network (HetNet) deployments. Saguna vEdge taps the S1 interface (GTP-U and S1-AP protocols) and steers the traffic to the appropriate local or remote endpoint based on configured policies. Saguna vEdge implements local LTE traffic steering in a number of modes (inline steering, breakout, tap). It has a communication link that connects it to the optional Saguna vGate node using Saguna's OPTP (Open RAN Transport Protocol). It exposes open REST APIs for managing the platform and providing platform services to MEC-assisted applications.

Saguna vGate is an optional component that resides in the core network. It is responsible for preserving core functionality for RAN-generated traffic: lawful interception (LI), charging, and policy control. The Saguna vGate also enables mobility support for sessions generated by a MEC-assisted application. Operating in a virtual machine, Saguna vGate is adjacent to the Evolved Packet Core (EPC). It has a communication link that connects it to the Saguna vEdge nodes using Saguna's OPTP (Open RAN Transport Protocol), and mobile network integrations for LI and charging functions.

Saguna OMA (Open Management and Automation) is an optional subsystem that resides in the MEC node or at the aggregation point of several MEC nodes. It provides a management layer for the MEC nodes and integrates into the cloud Network Function Virtualization (NFV) environment, which includes the NFV Orchestrator, the Virtual Infrastructure Manager (VIM), and Operations Support Systems (OSS). Saguna OMA provides two management
modules: • Virtualized Network Function Manager (VNFM) Provides Life Cycle Management and monitoring for MEC Platform (Saguna vEdge) and MEC assisted applications This is a standard layer of management required within NFV environments It resides at the edge to manage the local MEC environment ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 20 • Mobile Edge Platform Manager (MEPM) – Provides an additional layer of management required for operating and prioritizing MEC applications It is re sponsible for managing the rules and requirements presented by each MEC application rules and resolving conflicts between different MEC assisted applications The Saguna OMA node operates on a virtual machine and manages on boarded MEC assisted application s via its workflow engine using Saguna and third party plugins The Saguna OMA is managed via REST API Saguna OpenRAN Services As a MEC p latform layer Saguna OpenRAN provides the following services: Mobile Network Integration Services • Mobility with Internal Handover support for mobility events between cells connected to the same MEC n ode and External Handover support between two or more MEC n odes and between cells connected to a MEC node and unconnected cells • Lawful Interception (LI) for RAN based generated data It supports X1 (Admin) X2 (IRI) and X3 (CC) interfaces and is pre integrated with Utimaco and Verint LI systems • Charging support using CDR generation for application based charging (based on 3GPP TDF CDR) and charging triggering based on time session and data Supported charging methods are File based (ASN1) and GTP’ • Management vEdge REST API for MEC services discovery and registration MEPM and VNFM let you efficiently operate a MEC solution and integrate it into your existing NFV en vironment Edge Services • Registration for MEC assisted applications The MEC Registration service provides dynamic registration and certification of MEC applications and registration to other MEC services provided by the MEC Platform setting the MEC appli cation type • Traffic Offload Function routes specific traffic flows to the relevant applications as configured by the user The TOF also handles tunneling ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 21 protocols such as GPRS Tunneling Protocol (GTP) for Long Term Evolution (LTE) network Standard A10/A 11 interfaces for 3GPP2 CDMA Network and handles plain IP traffic for WiFi/DSL Network • DNS provides DNS caching service by storing recent DNS addresses locally to accelerate the mobile i nternet and DNS server functionality preconfiguring specific DNS responses for specific domains This lets the User Equipment ( UE) connect to a local application for specific TCP sessions • Radio Network Information Service provided per Cell and per Radio Access Bearer (RAB) The service is vendor independent and can support eNodeBs from multiple RAN vendors simultaneously It supports standard ETSI queries (eg cell info) and notification mechanism (eg RAB establishment events) Additional information based on Saguna proprietary model provides real time feedback on cell congestion level and RAB available throughput using statistical analysis • Instant Messaging with Short Message Service (SMS) provided as a REST API request It offers smart messaging capabilities for example sending SMS to UEs on a specific area ( eg sports stadium) or sending SMS to UE when entering or exiting a specific area (eg shop) Mobile Edge Applications • Throughput guidance application 
uses the internal RNIS algorithm to deliver throughput guidance for specific IP addresses on the server side or according to domain names of the servers The application can be configured with the period of such Throughput Guidance update per target • DDoS Mitigation application monitors traffic originating from the connected device for specific DDoS attacks on different layers (IP layer for ICMP flooding IP scanning Ping of death; TCP/UDP layer for TCP sync attacks UDP message flooding; Application layer) Devices that are detected as generating DDoS traffic are reported to the network management and traffic from these devices can be locally stopped or the device can be remotely disabled by the network core ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 22 Application Enablement L ayer The Application Enablement layer consists of AWS Greengrass hosted on the MEC node side AWS Greengrass is designed to support IoT solutions that connect different types of devices with the cloud and each other It also runs local functions and parts of applications at the network edge Devices that run Linux and support ARM or x86 architectures can host the AWS Greengr ass Core The AWS Greengrass Core enables the local execution of AWS Lambda code messaging data caching and security Devices running the AWS Greengrass Core act as a hub that can communicate with other devices that have the AWS IoT Device SDK installed such as micro controller based devices or large appliances These AWS Greengrass Core devices and the AWS IoT Device SDK enabled devices can be configured to communicate with one another in a Greengrass Group If the AWS Greengrass Core device loses connection to the cloud devices in the Greengrass Group can continue to communicate with each other over the local network A Greengrass Group represents localized assembly of devices For example it may represent one floor of a building one truck or one home AWS Greengrass builds on AWS IoT and AWS Lambda and it can also access other AWS services It is built for offline operation and greatly simplifies the implementation of local processing Code running in the field can collect filter and aggregate fr eshly collected data and then push it up to the cloud for long term storage and further aggregation Further code running in the field can also take action very quickly even in cases where connectivity to the cloud is temporarily unavailable AWS Greengr ass has two constituent parts : the AWS Greengrass Core and the IoT Device SDK Both of these components run on onpremises hardware out in the field The AWS Greengrass Core is designed to run on devices that have at least 128 MB of memory and an x86 or ARM CPU running at 1 GHz or better and can take advantage of additional resources if available It runs Lambda functions locally interacts with the AWS Cloud manages security and authentication and communicates with the other devices under its purview ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 23 The IoT Device SDK is used to build the applications on devices connected to the AWS Greengrass Core device (generally via a LAN or other local connection) These applications capture data from sensors subscribe to MQTT topics and use AWS IoT device shadows to store and retrieve state information AWS Greengrass features include : • Local support for AWS Lambda – AWS Greengrass includes support for AWS Lambda and AWS IoT d evice shadows With AWS Greengrass you can run AWS Lambda functions right on the device 
to execute code quickly • Local support for AWS IoT d evice shadows – AWS Greengrass also includes the functionality of AWS IoT d evice shadows The d evice shadow caches the state of your device like a vi rtual version or “shadow” and tracks the device’s current versus desired state • Local messaging and protocol adapters – AWS Greengrass enables messaging between devices on a local network so they can communicate with each other even when there is no connection to AWS With AWS Greengrass devices can process messages and deliver them to other device s or to AWS IoT based on business rules that the user defines Devices that communicate via the popular industrial protocol OPC UA are supported by the AWS Gr eengrass protocol adapter framework and the out ofthebox OPC UA protocol module Additionally AWS Greengrass provides protocol adapter framework to implement support for custom legacy and proprietary protocols • Local resource access – AWS Lambda functions deployed on an AWS Greengrass Core can access local resources that are attached to the device This allows you to use serial ports USB peripherals such as add on security devices sensors and actuators on board GPUs or the local file system to quickly access and process local data • Local machine learning i nference – A llows you to locally run a n MLmodel that’s built and trained in the cloud With hardware acceleration available in the MEC infrastructure layer this feature provides a powerful mec hanism for solving any machine learning task at the local edge eg discovering patterns in data building computer vision systems and running anomaly detection and predictive maintenance algorithms ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 24 AWS Greengrass has a growing list of features Curren t features are shown in Figure 9 Figure 9: AWS Greengrass features AWS Greengrass on the MEC node acts as a pivot point It integrates the MEC platform with the AWS I oT solution and other AWS services providing a powerful application enablement environment for developing deploying and managing MEC assisted applications at scale The figure below illustrates the current portfolio of AWS services that enable a seamless IoT pipeline —from endpoints connecting via Amazon FreeRTOS or the IoT SDK through MQTT or OPC UA to edge gateways that host AWS Greengrass and Lambda functions providing data processing capabilities at the edge up to cloud hosted AWS IoT Core AWS Device Management AWS Device Defender and AWS IoT Analytics services as well as enterprise applications ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 25 Figure 10: AWS services that enable a seamless IoT pipeline 1 In a telecommunications network the backhaul portion of the network comprises the intermediate links between the core network or backbone network an d the small subnetworks at the "edge"
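To make the application enablement layer described above more concrete, the following is a minimal sketch of an AWS Lambda function deployed to an AWS Greengrass Core, illustrating local execution and local messaging at a MEC node. It assumes the Greengrass Core SDK for Python (greengrasssdk) is packaged with the function and that incoming messages are JSON; the topic names, payload fields, and threshold are hypothetical examples rather than part of the platform described in this paper.

```python
# Minimal sketch of a Lambda function running on an AWS Greengrass Core.
# Topic names and payload fields are hypothetical examples.
import json
import greengrasssdk

# Client for publishing through the local Greengrass message broker
iot_client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    """Invoked for each local sensor message routed to this function by a subscription."""
    reading = event.get("temperature")

    # React locally with low latency, even if the WAN link to the cloud is down
    if reading is not None and reading > 80:
        iot_client.publish(
            topic="local/actuators/cooling",       # hypothetical local topic
            payload=json.dumps({"command": "on"}),
        )

    # Forward a filtered/aggregated record toward AWS IoT Core for long-term storage
    iot_client.publish(
        topic="edge/telemetry/aggregated",         # hypothetical cloud-bound topic
        payload=json.dumps(event),
    )
```

In a deployment such as those described in this paper, one subscription in the Greengrass Group would route local device messages to this function, and a second subscription would forward the aggregated topic to AWS IoT Core when connectivity is available.

Similarly, because the Edgeline EL4000 iLO interface is Redfish-compliant, a management system can inventory a highly distributed fleet of MEC nodes over plain HTTPS. The sketch below assumes the standard Redfish service root (/redfish/v1) and uses a hypothetical address and credentials; the exact resources exposed depend on the firmware version, so consult the iLO documentation for specifics.

```python
# Minimal sketch: query a Redfish-compliant management interface (such as HPE iLO)
# for basic system inventory. The address and credentials are hypothetical.
import requests

ILO_ADDRESS = "https://10.0.0.10"    # hypothetical iLO address of a MEC node
AUTH = ("admin", "password")         # hypothetical credentials

# Certificate verification is disabled only to keep this sketch self-contained.
systems = requests.get(f"{ILO_ADDRESS}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems.get("Members", []):
    system = requests.get(f"{ILO_ADDRESS}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Model"), system.get("PowerState"), system.get("Status", {}).get("Health"))
```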
General
Amazon_Elastic_File_System_Choosing_Between_Different_Throughput_and_Performance_Mode
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System Choosing Between the D ifferent Throughput & Performance Modes July 2018 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representat ions contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify a ny agreement between AWS and its customers This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 Performance Modes 1 General Purp ose 1 Max I/O 1 Selecting the right performance mode 2 Throughput Modes 3 Bursting Throughput 3 Provisione d Throughput 4 Selecting the right throughput mode 5 Conclusion 6 Contributors 6 Further Reading 7 Document Revisions 7 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract Storage types can generally be divided in to three different categories : block file and object Each storage type has made its way into the enterprise and a large majority of data reside s on file storage Network shared file systems have become a critical storage platform for businesses of any size These systems are accessed by a single client or multiple (tens hundreds or thousands) concurrently so they can access and use a common data set Amazon Elastic File System (Amazon EFS) satisfies these demands and gives custom ers the flexibility to choose different performance and throughput modes that best suits their needs This paper outlines the best practices for running network shared file systems on the AWS cloud platform and offers guidance to select the right Amazon EF S performance and throughput modes for your workload This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Page 1 Introduction Amazon Elastic File System (Amazon EFS)1 provides simple scalable elastic file storage for use with AWS Cloud services and on premises resources The file systems you create using Amazon EFS are elastic growing and shrinking automatically as you add and remove data They can grow to petabytes in size distributing data across an unconstrained number of storage se rvers in multiple Availability Zones Amazon EFS supports Network File System version 4 (NFSv40 & 41) provides POSIX file system semantics and guarantees open after close semantics Amazon EFS is a regional service built on a foundation of high availability and high durability and is designed to satisfy the performance and throughput demands of a wide spectrum 
of use cases and workloads including web serving and content management enterprise applications media and entertainment processing workflows home directories database backups developer tools container storage and big data analytics EFS file systems provide customizable performance and throughput options so you can tune your file system to match the needs of your application Performance Modes Amazon EFS offers two performance modes: General Purpose and Max I /O You can select one when creating your file system There is no price difference between the modes so your file system is billed and metered the same The performance m ode can’t be changed after the file system has been created General Purpose General Purpose is the default performance mode and is recommended for the majority of uses cases and workloads It is the most commonly used performance mode and is ideal for latency sensitive applications like web serving content management systems and general file serving These file systems experience the lowest latency per file system operation and can achieve this for random or sequential IO patterns There is a limit of 7000 file system operation per second aggregated across all clients for General Purpose performance mode file systems Max I /O File systems created in Max I /O performance mode can scale to higher levels of aggregate throughput and operations per second when compared to General Purpose file systems These file systems are designed for highly parallelized applications like big data a nalytics video transcoding a nd processing and genomic analytics which can scale out to tens hundreds or thousands of Amazon EC2 instances Max I /O file systems do not have a 7000 file system operation per second limit but latency per file system operation is slightly higher when compared to General Purpose performance mode file systems This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System : Choosing between the different Performance & Throughput Modes Page 2 Selecting the right performance mode We recommend creating the file system in the default General Purpose performance mode and testing your workload for a period of time to test its performance We pr ovide eight Amazon CloudWatch metrics per file system to help you understand how your workload is driving the file system One of these metrics Percen tIOLimit is specific to General Purpose performance mode file systems and indicates as a percent how close you are to the 7000 file system operations per second limit If the PercentIOLimit value returned is at or near 100 percent for a significant amount of time during your test (see figure 1) we recommend you use a Max I /O performance mode file system To move to a different performance mode you migrate the data to a different file system that was created in the other performance mode You can use Amazon EFS File Sync to migrate the data For more information on Amazon EFS File Sync please refer to the Amazon EFS File Sync section of the Amazon EFS User Guide 2 There are some workloads that need to scale out to the higher I/O levels provided by Max I /O performance mode but are also latency sensitive and require the lower latency provided by General Purpose performance mode In situations like this and if the work load and Figure 1 Figure 2 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon 
Elastic File System : Choosing between the different Performance & Throughput Modes Page 3 applications support it we recommend creating multiple General Purpose performance mode file systems and spread the application workload across all these file systems This would allow you to create a logical file system and shard data across multiple EFS file systems Each file system would be mounted as a subdirectory and the application can access th ese subdirectories in parallel (s ee figure 2 ) This allows latency sensitive workload s to scale to higher levels of file system operatio ns per second aggregated across multiple file systems and at the same time take advantage of the lower latencies offered by General Purpose performance mode file systems Throughput Modes The throughput mode of the file system helps determine the overal l throughput a file system is able to achieve You can select the throughput mode at any time (subject to daily limits) Changing the throughput mode is a nondisruptive operation and can be run while clients continuously access the file system You can c hoose between two throughput modes Bursting or Provisioned There are price and throughput level differences between the two modes so understand ing each one their differences and when to select one throughput mode over the other is valuable Bursting Throughput Bursting Throughput is the default mode and is recommended for a majority of uses cases and workloads Throughput sc ales as your file system grows and you are billed only for the amount of data stored on the file system in GB Month Because file based workloads are typically spiky – driving high levels of throughput for short periods of time and low levels of throughput the rest of the time – file systems using Bursting Throughput mode allow for high throug hput levels for a limited period of time All Bursting T hroughput mode file systems regardless of size can burst up to 100 MiB/s Throughput also scales as the file s ystem grows and will scale at the bursting throughput rate of 100 MiB/s per TiB of data stored subject to regional default file system throughput limits These bursting throughput numbers can be achieved when the file system has a positive burst credit balance You can monitor and alert on your file system’s burst credit balance using the BurstCreditBalance file system metric in Am azon CloudWatch File systems earn burst credits at the baseline throughput rate of 50 MiB/s per TiB of data stored and can accumulate burst credits up to the maximum size of 21 TiB per TiB of data stored This allows larger file systems to accumulate and store more burst credits which allows them to burst for longer periods of time If the file system’s burst credit balance is ever depleted the permitted throughput becomes the baseline throughput Permitted throughput is the maximum amount of throughput a file system is allowed and this value is available as an Amazon CloudWatch metric This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System : Choosing between the different Performance & Throughput Modes Page 4 Provisioned Throughp ut Provisioned T hroughput is available for applications that require a higher throughput to storage ratio than those allowed by Bursting Throughput mod e In this mode you can provision the file system’s throughput independent of the amount of data stored in the file system This allows you to optimize your file system’s throughput 
performance to match your application’s needs and your application can dr ive up to the provisioned throughput continuously This concept of provisioned performance is similar to features offered by other AWS services like provisioned IOPS for Amazon Elastic Block Store PIOPS (io1) volumes and provisioned throughput with read a nd write capacity units for Amazon DynamoDB As with these services you are billed separately for the performance or throughput you provision and the storage you use eg two billing dimensions When file systems are running in Provisioned Throughput mod e you are billed for the storage you use in GB Month and for the throughput provisioned in M iB/sMonth The storage charge for both Bursting and Provisioned Throughput modes includes the baseline throughput of the file system in the price of storage Thi s means the price of storage includes 1 MiB/s of throughput per 20 GiB of data stored so you will be billed for the throughput you provision above this limit For more information on pricing see the Amaz on EFS pricing page 3 You can increase Provisioned T hroughput as often as you need You can decrease Provisioned Throughput or switch throughput modes as long as it’s been more than 24 hours since the last decrease or throughput mode change File systems continuously earn burst credits up to the maximum burst credit balance allowed for the file system The maximum burst credit balance is 21 TiB for file systems smaller than 1 TiB or 21 TiB per TiB stored for file systems larger than 1 TiB File systems running in Provisioned Throughput mode still earn burst credits They earn at the higher of the two rates either the P rovisioned Throughput rate or the baseline Bursting Throughput rate of 50 MiB/s per TiB of storage You could find yourself in t he situation where your file system is running in Provisioned Throughput mode and over time the size of it grows so that its provisioned throughput is less than the baseline throughput it is entitled to had the file system been in Bursting Throughput mode In a case like this you will be entitled to the higher throughput of the two modes including the burst throughput of Bursting Throughput mode and you will not be billed for throughput above the storage price For example you set the provisioned throug hput of your 1 TiB file system to 200 MiB/s Over time the file system grows to 5 TiB A file system in Bursting Throughput mode would be entitled to a baseline throughput of 50 MiB/s per TiB of data stored and a burst throughput of 100 MiB/s per TiB of da ta stored Though your file system is still running in Provisioned Throughput mode its entitled to a baseline throughput of 250 MiB/s and a bu rst throughput of 500 MiB/s and will only incur a storage charge for a 5 TiB file system For information on maxi mum provisioned throughput limits please refer to the Amazon EFS Limits section of the Amazon EFS User Guide 4 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System : Choosing between the different Performance & Throughput Modes Page 5 Selecting the right throughput mode We recommend running file systems in Bursting Throughput mode because it offers a simple and scalable experience that provides the right ratio of throughput to storage capacity for most workloads There are times when a file system needs a higher throughput to storage capacity ratio than what is offered by Bursting Throughput mode Knowing the throughput demands 
of your application, or monitoring key indicators, are two important ways to determine when you'll need these higher levels of throughput. We recommend using Amazon CloudWatch to monitor how your file system is performing. One of these metrics, BurstCreditBalance, is a key performance indicator that will help determine if your file system is better suited for Provisioned Throughput mode. If this value is zero or steadily decreasing over a period of normal operations (see Figure 3), your file system is consuming more burst credits than it is earning. This means your workload requires a throughput-to-storage-capacity ratio greater than what is allowed by Bursting Throughput mode. If this occurs, we recommend provisioning throughput for your file system. This can be done by modifying the file system to change the throughput mode using the AWS Management Console, AWS CLI, AWS SDKs, or EFS API.

Figure 3

When choosing to run in Provisioned Throughput mode, you must also indicate the amount of throughput you want to provision for your file system. To help determine how much throughput to provision, we recommend monitoring another key performance indicator available from Amazon CloudWatch: TotalIOBytes. This metric gives you throughput in terms of the total number of bytes (data read, data write, and metadata) for each file system operation during a selected period. To calculate the average throughput in MiB/s for a period, convert the Sum statistic to MiB (Sum of TotalIOBytes ÷ 1,048,576) and divide by the number of seconds in the period. Use Metric Math expressions in Amazon CloudWatch to make it even easier to see throughput in MiB/s. For more information on using Metric Math, see Using Metric Math with Amazon EFS in the Amazon EFS User Guide.5 An example of this calculation is sketched at the end of this paper. Calculate this during the same period when your BurstCreditBalance metric was continuously decreasing. This will give you the average throughput you were achieving during this period and is a good starting point when choosing the amount of throughput to provision.

If your file system is running in Provisioned Throughput mode and you experience no performance issues while your BurstCreditBalance continuously increases for long periods of normal operations, then consider decreasing the amount of provisioned throughput to reduce costs. To help determine how much throughput to provision, we also recommend monitoring the Amazon CloudWatch metric TotalIOBytes. Calculate this during the same period when your BurstCreditBalance metric was continuously increasing. This will give you the average throughput you were achieving during this period and is a good starting point when choosing the amount of throughput to provision. Remember, you can increase the amount of provisioned throughput as often as you need, but you can only decrease the amount of provisioned throughput or switch throughput modes if it's been more than 24 hours since the last decrease or throughput mode change.

If you're planning on migrating large amounts of data into your file system, you may also want to consider switching to Provisioned Throughput mode and provisioning a higher throughput beyond your allotted burst capability to accelerate loading the data. Following the migration, you may decide to lower the amount of provisioned throughput or switch to Bursting Throughput mode for
normal operations Monitor the average total throughput of the file system using the TotalIOBytes metric in Amazon CloudWatch Use Metric Math expressions in Amazon CloudWatch to make it even easier to see throughput in MiB/s Compare the average throughput you’re driving the file system to the PermittedThroughput metric If the calculated average throughput you’re driving the file system is less than the permitted throughput then consider making a throughput change to lower costs If the calculated average throughput during normal operations is at or below the baseline throughput to storage capacity ratio of Bursting Throughput mode (50 MiB/s per TiB of data stored) then cons ider switching to Bursting Throughput mode If the calculated average throughput during normal operations is above this ratio then consider lowering the amount of provisioned throughput to some level in between your current provisioned throughput and the calculated average throughput during normal operations Remember you can switch throughput modes or decrease the amou nt of provisioned throughput as long as it’s been more than 24 hours since the last decrease or throughput mode change Conclusion Amazon EFS gives you the flexibility to choose different performance and throughput modes to customize your file system to meet the needs for a wide spectrum of workloads Knowing the performance and throughput demands of your appl ication and monitoring key performance indicators will help you select the right performance and throughput mode to satisfy your file system’s needs Contributors The following individuals and organizations contributed to this document: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System : Choosing between the different Performance & Throughput Modes Page 7  Darryl S Osborne solutions architect Amazon File Services Further Reading For additional information see the following :  Amazon EFS User Guide6 Document Revisions Date Description July 2018 First publication 1 https://awsamazoncom/efs/ 2 https://docsawsamazoncom/efs/latest/ug/get started filesynchtml 3 https://awsamazon com/efs/pricing/ 4 https://docsawsamazoncom/efs/latest/ug/limitshtml 5 https://docsawsamazon com/efs/latest/ug/monitoring metric mathhtml 6 https://docsawsamazoncom/efs/latest/ug/whatisefshtml
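As an illustration of the monitoring approach described in this paper, the following is a minimal sketch that pulls the TotalIOBytes and BurstCreditBalance metrics for a file system from Amazon CloudWatch and derives the average throughput in MiB/s. It assumes the AWS SDK for Python (boto3) with CloudWatch read permissions; the file system ID and the 24-hour window are hypothetical examples.

```python
# Minimal sketch: estimate average EFS throughput (MiB/s) and check burst credits
# from Amazon CloudWatch. The file system ID and time window are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
file_system_id = "fs-12345678"   # hypothetical file system ID
period = 3600                    # one-hour datapoints
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

def efs_metric(metric_name, stat):
    """Fetch one AWS/EFS metric for the file system over the chosen window."""
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EFS",
        MetricName=metric_name,
        Dimensions=[{"Name": "FileSystemId", "Value": file_system_id}],
        StartTime=start,
        EndTime=end,
        Period=period,
        Statistics=[stat],
    )
    return sorted(response["Datapoints"], key=lambda d: d["Timestamp"])

# Average throughput per datapoint: (Sum of TotalIOBytes / 1,048,576) / seconds in period
for point in efs_metric("TotalIOBytes", "Sum"):
    mib_per_s = (point["Sum"] / 1048576) / period
    print(f"{point['Timestamp']:%Y-%m-%d %H:%M}  average throughput: {mib_per_s:.2f} MiB/s")

# A steadily decreasing BurstCreditBalance suggests Provisioned Throughput mode
for point in efs_metric("BurstCreditBalance", "Minimum"):
    print(f"{point['Timestamp']:%Y-%m-%d %H:%M}  burst credit balance: {point['Minimum']:.0f} bytes")
```

If the derived average throughput stays above the Bursting Throughput baseline of 50 MiB/s per TiB of data stored while BurstCreditBalance declines, that average is a reasonable starting value for Provisioned Throughput, as discussed in the guidance above.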
General
5_Ways_the_Cloud_Can_Drive_Economic_Development
Archived5 Ways the Cloud Can Drive Economic Development August 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents Amazon Web Services’s (“AWS”) current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this docu ment and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances fro m AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Sharing More Data and Information 1 Increasing Productivity 3 Preparing Citizens for the Workforce & Building Skills 5 Driving Local Development 6 Allocating Resources More Effectively 8 Key Takeaway 9 Contributors 9 Archived Abstract Government agencies often look to promote new technology for cost savings and efficiency but it does not stop there The second and third tier effects of technology can be long lasting for citizens businesses and economies When public institutions adop t the cloud they experience an internal transformation Inside an organization cloud usage drives greater accessibility of data and information sharing increases worker productivity and improves resource allocation The external benefit of the cloud is recognized through a government ’s ability to put reclaim ed time and resource s toward serving citizens This includes provision ing public services such as occupational skills training quicker and more effective service delivery a pathway to a more productive workforce and ultimately a boost to local development This whitepaper examines the enterprise level benefits of the cloud as well as the residual impact on economic development The US Economic Development Administration defines economic development as “[creating] the conditions for economic growth and improved quality of life by expanding the capacity of individuals firms and communities to maximize the use of their talents and skills to support innovation lower transaction costs and responsibly produce and trade valuable goods and services” We explore this concept through the lens of the cloud ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 1 Introduction Technology empowers governments to improve how and when they reach citizens It improves the quality and accessibility of public service s ultimately creat ing a more productive environment where citizens can thrive Leveraging the cloud is one way governments can accelerate this shift with benefits occurring first inside the institution Sharing More Data and Information One enterprise level benefit of the cloud is its emphasis on data and information sharing The cloud ’s data sharing tools encourage staff to store information in a central location adding visib ility inside the workplace A more collaborative environment can lead to increased communication and idea sharing among agencies and teams that might 
otherwise op erate in siloes This is true for federal regional and local governments as well as for businesses and entrepreneurs The result is n ear real time access to critical information across an array of industries Examples include data on job creation by location and level retention statistics payroll by industry classification – or North American Industry Classification System code s in the US – in addition to information on health services trade and commerce weather patterns and more Data and IoT solutions can help address development challenges Nexleaf Analytics is one organization harnessing the power of data to tackle global development issues From climate change to public health and food insecurity its mission is to preserve hu man life and protect the planet through sensor technologies and data analytics and by advocating for data driven solutions The organizatio n developed Internet ofThings (IoT) platforms ColdTrace and StoveTrace to help governments ensure the potency of life saving vaccines at the ‘last mile’ and to facilitate the adoption of cleaner cookstoves respectively ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 2 “Data is at the core of creating sustainable change By getting meaningful real time data flowing from the bottom up people have the tools and insights they need to take responsive actions” according to Mar tin Lukac Nexleaf’s CTO and cofounder Nexleaf’s solution powered by A mazon Web Services Inc (AWS) aggregates crucial data that can lead to responsive interventions By collaborating with governments and NGOs in 10 countries across Asia and Africa the organization ensures its solutions adhere to local country laws and preferences and identifies the right tools and analytics to benefit constituents Engaging people on the ground empowers a data driven approach to improving the effic iency of their systems advocating for better resources and tap ping into potential avenues for economic and social development Data drives c ommunity collaboration and innovation The cloud encourages partnerships and collaboration within communities It can lead local governments to facilitate relationships with small and medium sized enterprises (SMEs) which according to an Organisation for Economic Co operati on and Development (OECD ) report “account for over 95% of firms and 60% 70% of employment and generate a large share of new jobs in OECD economies” In Boston Massachusetts the Mayor's Office of New Urban Mechanics took an innovative approach to proble msolving through crowdsourcing Teaming with a technology firm the government sought creative ideas from across Boston to help improve Street Bump its app to collect roadside maintenance and plan long term investments for the city The use of big data and community engagement helped the agency find a creative solution to a public issue Street Bump’s website now reports that te ns of thousand s of bumps have been detected through the app The public private partnership brought automation and speed to an otherwise manual city improvement process and also gave local startups a platform to voice and implement innovative ideas that otherwise may n ot have been discovered Newport Wales is another example of a city optimizing public data in this case to assess environmental conditions It began using IoT sensors to collect ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 3 data such as pollution levels augmenting earlier process es of collecting air 
samples in glass vials across 85 different location s Together with Pinacl Solutions and Davra Networks Newport is working toward a solution for improving air quality flood control and waste management gleaning timely insights from sensor data via solutions hosted on AWS The effort aimed to boost citizens’ safety and quality of life as part of a vision to improve Newport’s economy The Humanitarian OpenStreetMap Team (HOT) is yet another global organization applying the pri nciples of open source and open data shar ing to humanitarian response and economic development Known for its ability to rapidly coordinate volunteers to map sites impacted by disaster HOT relies on a collaboration with Digi talGlobe Inc for critical satellite imagery data accessible through its Open Data Program and imagery license If not for this partnership HOT would not exist as it is today according to HOT’s Director of Technology Cristiano Giovando Additionally through the AWS Public Datasets Program anyone can analyze data and build comple mentary services using a broad range of compute and data analytics tools The cloud combines fragmented data from a variety of sources improving users’ access and enabling more time for analysis This can facilitate innovation and the possibility of new discover ies Increasing Productivity Consistent r eliability and a lack of physical infrastructure can d rive productivity gains inside and out of a cloud using organization Workforce productivity can improve up to 50% following a large scale AWS migration according to AWS migration experts In addition AWS’s more than 90 solutions offers organizations faster access to services they would otherwise have to build and maintain themselves Government organizations around the world including a road and traffic agency in Belgium and Italy’s public finance regulator have realized increased productivity from the cloud – both for the benefit of their operation s and their citizens ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 4 Productivity gains help institutions better deliver on their mission The Agentschap Wegen & Verkeer (AWV) deploy s new maintenance capabilities up to eight times faster thanks to the automation of services and databases through the AWS Cloud according to Bert Weyne planning & coordination lead at AWV The agency manages 6970 kilometers of roads and 7668 kilometers of cycle lanes in Belgium with its team of 250 road i nspectors having a direct impact on citizen safety In the event of a pothole for example the team uses an app to log information about the issue and prioritize repairs “When we wer e running on in house servers our road inspectors complained about the app’s reliability At times they were unable to access the app and would have to use paper and p en instead It was embarrassing ” says Weyne In addition to bett er performance Weyne’s team has used the cloud to reduce costs speed development and cut infrastructure management time He adds “… by using managed services we’ve slashed system admin time by 67 percent which has improved our agility We can now dev elop and test features three times faster” The cloud has also enabled Italy’s auditing and oversight authority for public accounts and bu dgets to operate more effectively as a remote team Prior to working with AWS Corte dei conti (Cdc) felt constrained by physical IT infrastructure “We wanted to change the way our 3000 plus employees worked enabling them to access applications from anywh ere on any device But we 
had to ensure that this flexibility for staff didn’ t jeopardize the safety of data ” said C dc’s IT officer Leandro Gelasi This was attainable through a hybrid architecture migration approach and through collaboration with AWS Advanced Consulting Partner XPeppers Srl “As a result [employees are] much more productive Decisions get made faster and the whole system works better It’s a brilliant result fo r our entire organization” said Gelasi As Gelasi and his team prove their ability to fulfill duties securely from any location it may lend an opportunity to employ more workers in small towns and rural locations ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 5 Preparing Citizens for the Workforce & Building Skills Skilldevelopment and education programs offer meaningful contributions to economic development In line with the United Nation s’ 2030 Sustainable Development Goals which includes training and skill building for youth cl oud technology provisions the scaling of educational content and innovative teaching formats to reach learners wherever they are Quality inclusive and relevant education is a key factor in breaking cycles of poverty and reduc ing gender inequalities worldwide By expanding learning beyond the confines of a physical classroom technology helps increase access to courses and level s the playing field for learners of diverse geographical and socio economic backgrounds For schools and educators the cloud offers not only cost savings and agility but also the opportunity to develop breakthroughs in educational models and student engagement Reaching diverse job seekers where ver they are Digital Divide Data (DDD) is a nonprofit social enterprise that uses AWS to support regional workforce development Its goal is to create sustainable tech jobs for youth through Impact Sourcing a model that provides economically marginalized youth with training and jobs in next generation technologies such as cloud computing machine learning cyber security and data analytics In col laboration with Intel AWS worked with DDD to launch the first ofits kind AWS Cloud Academy in Kenya to train certify and employ underserved youth in cloud computing as a stepping stone to more advanced IT careers The program's first cohort included 30 hi gh school graduates from Kibera Nairobi with the second cohort compris ed of 70% women The social enterprise plan s to train five cohorts annually graduating 150200 clo ud engineers p er year – all of whom have the option to work for DDD as cloud computing engineers or to pursue cloud opportunities in the growing local tech sector In terms of workforce benefits DDD and AWS graduates earn five times more than their peers While i nformal workers in Kenya earn an average of $116 USD ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 6 per month AWS graduates earn an average of $575 USD per month The combination of training and w ork experience propels DDD graduates to earn higher income gain economic security and ultimately create better futures for themselves and their families In the US the Loui siana Department of Public Safety and Corrections manages nine state correctional facilities that house 19000 adult prisoners The state run agency offers educational and vocational programs with the goal of helping inmates earn degrees gain job training secure employment and avoid re incarceration The agency sought to implement a new IT environment that would support a better and more reliable 
online learning solution It also needed effective system security to prevent inmates from accessing the inte rnet amid concerns about victims’ safety and other criminal activity After opting for Amazon WorkSpaces – a managed secure desktop computing service on AWS – the agency along with partner ATLO Software succeeded in launching educational training labs at four Louisiana correctional facilities With the addition of an Amazon Virtual Private Cloud they were operating on a secure network Thanks to onsite labs inmates now have better access to vocational training have the opportunity to earn college credits or degrees and can potentially participate in the labor market Driving Local Development Retaining Local Talent Retaining local talent can be a challenge for cities Moreover a concentration of intellectual capital and innovative businesses and startups can be a strong indicator of economic development Cloud technology can help give new businesses a boost in their forecasting demand generation and innovation when bringing their products or services to market AWS accelerate s this process through AWS Activate a program designed to provide startups with resourc es and credits to get started with the cloud; through access to tools like Amazon LightSail which provides technology like virtual private servers to enterprises of all sizes for the cost of a cup of ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 7 coffee ; and by encouraging public private partnerships and small business linkages namely through the strength of the AWS Partner Network (APN) Additionally AWS CloudStart formed to encourage the growth of SMEs and economic development organizations by providing resources to educate train and help these entities embrace the cost effectiveness of the AWS Cloud “As small businesses leverage a broader portfolio of digital solutions they can see an increase in agility while simultaneously lowering costs and reducing time to innovation” according to Zandile Keebine found er of participating organization GirlCode a nonprofit that aims to empower girls through technology In the US Kansas City Missouri is one example of a city that is successfully using smart technology to attract talent to an emerging business center Along the two mile corridor of the Kansas City Streetcar a $15 million public private partnership supports the deployment of 328 Wi Fi access points and 178 smart streetlights that can detect traffic patterns and open parking spaces It has also funded 25 video kiosks pavement sensors video cameras and other devices all connected by the city’s nearly ubiquitous fiber optic data network The successful use of smart city technology has been a key component in bringi ng people back to Kansas City’s core “Ten years ago we had fewer than 5000 people living downtown” said Bob Bennett Kansans City’s chief innovation officer “We have seen a 520 percent growth in the number of residents in downtown and a 400 percent gr owth in development investment I believe our smart city project has played a prominent role in getting people excited about living here” Entrepreneur ship and p ublicprivate partnerships Cloud technology provides governments with the means to educate and train citizens boosting workforce participation and eligibility Driving local entrepreneurship is an important outgrowth of this investment “A vibrant entrepreneurial sector is essential to small firm development” according to the OECD It adds that regions with “pockets of high 
entrepreneurial activity” and public private partnerships can lead to more job opportunities and innovation ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 8 A municipality in Sweden is feeling the effects of a strategic partnership aimed at helping small bu sinesses adapt and thrive Consultant CAG Malardalen in Västerås Sweden uses the cloud to help constituents make more data driven decisions deploy resources more efficiently and help shape the economic conditions essential for attract ing new economic activity “[We are] striving to bring the region the latest in cloud technology Our ambition is to always deliver the most relevant IT solutions to our customers Through working with AWS CloudStart our customers benefit from the foundational knowledge we have gathered and we are already seeing a lot of new possibilities for us as a service provider across Sweden” says Tomas Täuber CEO of CAG Malardalen Allocating Resources More Effectively Cloud technology allows governments to rethink critical processes It builds new efficiencies across procurement security compliance and data protection Additionally the cost effectiveness of the cloud enables agencies to redirect resources toward advancing their mission freeing up capacity to create more innovative public services Increased access to new and better citizen services ushers in a higher standard of living offering the potential to draw new inhabitants to a city or region The cloud can act as a catalyst for this type of development driving organizations tow ard increased operational efficiencies and enabling a greater focus on the mission In the Middle East the Kingdom of Bahrain underwent a shift in how it procures resources in its plan to digitize its economy Using the cloud to efficiently deliver ser vices to constituents The Kingdom of Bahrain Information & eGovernment Authority (iGA) is accountable for moving all of its government services online It is responsible for information and communications technolog y (ICT) governance and procurement for the entire Bahraini government The iGA launched a cloud first policy to support its economic development plans ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 9 Bahrain’s adoption of a cloud first policy boosted efficiency across the public sector and trimmed IT e xpenditures by up to 90% in 2017 according to the Economic Development Board annual report “Through adopting a cloud first policy we have helped reduce the government procurement process for new technology from months to less than two weeks” said Mohammed Ali Al Qaed CEO of Bahrain iGA With cloud based technology as the focus for public ICT procurement the Bahraini government can exercise minimal upfront investment by paying only for the services it needs With tools for cost alloc ation and service provisioning the AWS Cloud offers built in resource discipline enabling governments to shift their focus toward advancing development goals Key Takeaway Technology driven innovation is one way public institutions can drive economic development With the right technology governments nonprof its economic development organizations and other entities can improve their internal operations become more productive and ultimately focus more acutely on serving citizens This can create co nditions in which citizens enjoy improved quality of life and where businesses flourish As organizations increasingly embrace cloud based solutions long lasting effects can be realized in 
the form of community-wide collaboration, partnerships with local businesses, and increased innovation. This can help these institutions wield greater influence on economic development.
Contributors
The following individuals and organizations contributed to this document:
• Carina Veksler, Public Sector Solutions, AWS Public Sector
• Randi Larson, Public Sector Content, AWS Public Sector
• John Brennan, International Expansion, AWS Public Sector
• Mike Grella, Economic Development, AWS Public Policy
General
10_Considerations_for_a_Cloud_Procurement
Archived10 Considerations for a Cloud Procurement March 2017 This version has been archived For the most recent version of this paper see: https://docsawsamazoncom/whitepapers/latest/considerationsfor cloudprocurement/considerationsforcloudprocurementhtmlArchived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 1 Contents Purpose 2 Ten Procurement Considerations 2 1 Understand Why Cloud Computing is Different 2 2 Plan Early To Extract the Full Benefit of the Cloud 3 3 Avoid Overly Prescriptive Requirements 3 4 Separate Cloud Infrastructure (Unmanaged Services) from Managed Services 4 5 Incorporate a Utility Pricing Model 4 6 Leverage ThirdParty Accreditations for Security Privacy and Auditing 5 7 Understand That Security is a Shared Responsibility 6 8 Design and Implement Cloud Data Governance 6 9 Specify Commercial Item Terms 6 10 Define Cloud Evaluation Criteria 7 Conclusion 7 ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 2 Purpose Amazon Web Services (AWS) offers scalable costefficient cloud services that public sector customers can use to meet mandates reduce costs drive efficiencies and accelerate innovation The procurement of an infrastructure as a service (IaaS) cloud is unlike traditional technology purchasing Traditional public sector procurement and contracting approaches that are designed to purchase products such as hardware and related software can be inconsistent with cloud services (like IaaS) A failure to modernize contracting and procurement approaches can reduce the pool of competitors and inhibit customer ability to adopt and leverage cloud technology Ten Procurement Considerations Cloud procurement presents an opportunity to reevaluate existing procurement strategies so you can create a flexible acquisition process that enables your public sector organization to extract the full benefits of the cloud The following procurement considerations are key components that can form the basis of a broader public sector cloud procurement strategy 1 Understand Why Cloud Computing is Different Hyperscale Cloud Service Providers (CSPs) offer commercial cloud services at massive scale and in the same way to all customers Customers tap into standardized commercial services on demand They pay only for what they use The standardized commercial delivery model of cloud computing is fundamentally different from the traditional model for onpremises IT purchases (which has a high degree of customization and might not be a commercial item) Understanding this difference can help you structure a more effective procurement model IaaS cloud services eliminate the customer ’s need to own 
physical assets There is an ongoing shift away from physical asset ownership toward ondemand utilitystyle infrastructure services Public sector entities should understand how standardized utilitystyle services are budgeted for procured and used and then build a cloud procurement strategy that is ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 3 intentionally different from traditional IT —designed to harness the benefits of the cloud delivery model 2 Plan Early To Extract the Full Benefit of the Cloud A key element of a successful cloud strategy is the involvement of all key stakeholders (procurement legal budget/finance security IT and business leadership) at an early stage This involvement ensures that the stakeholders can understand how cloud adoption will influence existing practices It provides an opportunity to reset expectations for budgeting for IT risk management security controls and compliance Promoting a culture of innovation and educating staff on the benefits of the cloud and how to use cloud technology helps those with institutional knowledge understand the cloud It also helps to accelerate buyin during the cloud adoption journey 3 Avoid Overly Prescriptive Requirements Public sector stakeholders involved in cloud procurements should ask the right questions in order to solicit the best solutions I n a cloud model physical assets are not purchased so traditional data center procurement requirements are no longer relevant Continuing to recycle data center questions will inevitably lead to data center solutions which might result in CSPs being unable to bid or worse lead to poorly designed contracts that hinder public sector customers from leveraging the capabilities and benefits of the cloud Successful cloud procurement strategies focus on applicationlevel performancebased requirements that prioritize workloads and outcomes rather than dictating the underlying methods infrastructure or hardware used to achieve performance requirements Customers can leverage a CSP’s established best practices for data center operations because the CSP has the depth of expertise and experience in offering secure hyperscale Iaa S cloud services It is not necessary to dictate customized specifications for equipment operations and procedures (eg racks server types and distances between data centers) By leveraging commercial cloud industry standards and best practices (including industryrecognized accreditations and certifications) customers avoid placing unnecessary restrictions on the services they can use and ensure access to innovative and costeffective cloud solutions ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 4 4 Separate Cloud Infrastructure (Unmanaged Services) from Managed Services There is a difference between procuring cloud infrastructure (IaaS) and procuring labor to utilize cloud infrastructure or managed services such as Software as a Service (SaaS) cloud Successful cloud procurements separate cloud infrastructure from “hands on keyboard” services and labor or other managed services purchases Cloud infrastructure and services such as labor for planning developing executing and maintaining cloud migrations and workloads can be provided by CSP partners (or other third parties) as one comprehensive solution However cloud infrastructure should be regarded as a separate “service” with distinct roles and responsibilities service level agreements (SLAs) and terms and conditions 5 Incorporate a Utility Pricing Model To realize the 
benefits of cloud computing you need to think beyond the commonly accepted approach of fixedprice contracting To contract for the cloud in a manner that accounts for fluctuating demand you need a contract that lets you pay for services as they are consumed CSP pricing should be:  Offered using a pay asyougo utility model where at the end of each month customers simply pay for their usage  Allowed the flexibility to fluctuate based on market pricing so that customers can take advantage of the dynamic and competitive nature of cloud pricing Allowing CSPs to offer pay asyougo pricing or flexible payper use pricing gives customers the opportunity to evaluate what the cost of the usage will be instead of having to guess their future needs and over procure CSPs should provide publicly available up todate pricing and tools that allow customers to evaluate their pricing such as the AWS Simple Monthly Calculator: http://awsamazoncom/calculator Additionally CSPs should provide customers with the tools to generate detailed and customizable billing reports t o meet business and compliance needs ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 5 CSPs should also provide features that enable customers to analyze cloud usage and spending so that customers can build in alerts to notify them when they approach their usage thresholds and projected/budgeted spend Such alerts enable organizations to determine whether to reduce usage to avoid overages or prepare additional funding to cover costs that exceed their projected budget 6 Leverage ThirdParty Accreditations for Security Privacy and Auditing Leveraging industry best practices regarding security privacy and auditing provides assurance that effective physical and logical security controls are in place This prevents overly burdensome processes and duplicative approval workflows that are often unjustified by real risk and compliance needs There are many security frameworks best practices audit standards and standardized controls that cloud solicitations can cite such as the following:  Federal Risk and Authorization Management Program (FedRAMP)  Service Organization Controls (SOC) 1/American Institute of Certified Public Accountants (AICPA): AT 801 (formerly Statement on Standards for Attestation Engagements [SSAE] No 16)/International Standard on Assurance Engagements (ISAE) 3402 (formerly Statement on Auditing Standards [SAS] No 70) SOC 2 SOC 3  Payment Card Industry Data Security Standard (PCI DSS)  International Organization for Standardization (ISO) 27001 ISO 27017 ISO 27108 ISO 9001  Department of Defense (DoD) Security Requirements Guide (SRG)  Federal Information Security Management Act (FISMA)  International Traffic in Arms Regulations (ITAR)  Family Educational Rights and Privacy Act (FERPA)  Information Security Registered Assessors Program (IRAP) (Australia)  ITGrundschutz (Germany)  Federal Information Processing Standard (FIPS) 1402 ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 6 7 Understand That Security is a Shared Responsibility As cloud computing customers are building systems on a cloud infrastructure the security and compliance responsibilities are shared between service providers and cloud consumers In an IaaS model customers control both how they architect and secure their applications and the data they put on the infrastructure CSPs are responsible for providing services through a highly secure and controlled infrastructure and for providing a wide array of 
additional security features The respective responsibilities of the CSP and the customer depend on the cloud deployment model that is used either IaaS SaaS or Platform as a Service (PaaS)Customers should clearly understand their security responsibilities in each cloud model 8 Design and Implement Cloud Data Governance Organizations should retain full control and ownership over their data and have the ability to choose the geographic locations in which to store their data with CSP identity and access controls available to restrict access to customer infrastructure and data Customers should clearly understand their responsibilities regarding how they store manage protect and encrypt their data A major benefit of cloud computing as compared to traditional IT infrastructure is that customers have the flexibility to avoid traditional vendor lock in Cloud customers are not buying physical assets and CSPs provide the ability to move up and down the IT stack as needed with greater portability and interoperability than the old IT paradigm Public sector entities should require that CSPs: 1) provide access to cloud portability tools and services that enable customers to move data on and off their cloud infrastructure as needed and 2) have no required minimum commitments or required longterm contracts 9 Specify Commercial Item Terms Cloud computing should be purchased as a commercial item and organizations should consider which terms and conditions are appropriate (and not appropriate) in this context A commercial item is recognized as an item that is of a type that has been sold leased licensed or otherwise offered for sale to the general public and generally performs the same for all users/customers both comme rcial and government IaaS CSP terms and conditions are designed to reflect how a cloud services model functions (ie physical assets are not being ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 7 purchased and CSPs operate at massive scale to offer standardized commercial services) It is critical that a CSP’s terms and conditions are incorporated and utilized to the fullest extent 10 Define Cloud Evaluation Criteria Cloud evaluation criteria should focus on system performance requirements Select the appropriate CSP from an established resource pool to take advantage of the cloud’s elasticity cost efficiencies and rapid scalability This approach ensures that you get the best cloud services to meet your needs the best value in these services and the ability to take advantage of marketdriven innovation The National Institute of Standards and Technology (NIST) definitions of cloud benefits are an excellent starting point to use for determining cloud evaluation criteria: http://nvlpubsnistgov/nistpubs/Legacy/SP/nistspecialpublication800 146pdf Conclusion Thousands of public sector customers use AWS to quickly launch services using an efficient cloudcentric procurement process Keeping these ten steps in mind will help organizations deliver even greater citizen student and mission focused outcomes
General
Active_Directory_Domain_Services_on_AWS
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlActive Di rectory Domain Services on AWS Design and Planning Guide November 20 2020 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlContents Importance of Active Directory in the cloud 1 Terminology and definitions 1 Shared responsibility model 3 Direct ory services options in AWS 4 AD Connector 4 AWS Managed Microsoft Active Directory 5 Active Directory on EC2 7 Comparison of Active Directory Services on AWS 7 Core infrastructure design on AWS for Windows Workloads and Directory Services 9 Planning AWS accounts and Organization 9 Network design considerations for AWS Managed Microsoft AD 9 Design consideration for AWS Managed Micro soft Active Directory 12 Single account AWS Region and VPC 12 Multiple accounts and VPCs in one AWS Region 13 Multiple AWS Regions deploymen t 14 Enable Multi Factor Authentication for AWS Managed Microsoft AD 16 Active Directory permissions delegation 17 Design considerations for running Active Directory on EC2 instances 18 Single Region deployment 18 Multi region/global deployment of self managed AD 20 Designing Active Directory sites and services topology 21 Security considerations 22 Trust relationships with on premises Active Directory 22 Multi factor authentication 24 AWS account security 24 Domain controller security 24 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlOther considerations 25 Conclusion 26 Contributors 26 Further Reading 27 Document Revisions 27 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAbstract Cloud is now the center of most enterprise IT strategies Many enterprises find that a wellplanned move to the cloud results in an immediate business payoff Active Directory is a foundation of the IT infrastructure for many large enterprises This whitepaper covers best practices for designing Active Directory Domain Services (AD DS) architecture in Amazon Web Services (AWS) including AWS Managed Microsoft AD Active Directo ry on Amazon Elastic Compute Cloud (Amazon EC2) instances and hybrid scenarios This version 
has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 1 Importance of Active Directory in the cloud Microsoft Active Directory was introduced in 1999 and became de facto standard technology for centralized management of Microsoft Windows computers and user authentications Active Directory serves as a distributed hierarchical data storage for information about corporate IT infrastructure including Domain Name System (DNS) zones and records devices and users user credentials and access rights based on groups membership Currently 95% of enterprises use Active Directory for authentication Successful adoption of cloud technology requires considering existing IT infr astructure and applications deployed on premises Reliable and secure Active Directory architecture is a critical IT infrastructure foundation for companies running Windows workloads Terminology and definitions AWS Managed Microsoft Active Directory AWS Directory Service for Microsoft Active Directory also known as AWS Managed Microsoft AD is Microsoft Windows Server Active Directory Domain Services (AD DS) deployed and managed by AWS for you The service runs on actual Windows Server for the highest po ssible fidelity and provides the most complete implementation of AD DS functionality of cloud managed AD DS services available today Active Directory Connector (AD Connector) is a directory gateway (proxy) that redirects directory requests from AWS applic ations and services to existing Microsoft Active Directory without caching any information in the cloud It does not require any trusts or synchronization of user accounts Active Directory Trust A trust relationship (also called a trust) is a logical rel ationship established between domains to allow authentication and authorization to shared resources The authentication process verifies the identity of the user The authorization process determines what the user is permitted to do on a computer system or network Active Directory Sites and Services In Active Directory a site represents a physical or logical entity that is defined on the domain controller Each site is associated with an Active Directory domain Each site also has IP definitions for what IP addresses and ranges belong to that site Domain controllers use site information to inform Active Directory clients about domain controllers present within the closest site to the client This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 2 Amazon V irtual Private Cloud ( Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define You have complete control over your virtual networking environment including the selection of your own private IP address ranges creation of subnets and configuration of route tables and network gateways You can also create a hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC to leverage the AWS Cloud as an extension of your corporate data ce nter AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS Using AWS Direct Connect you 
can establish private connectivity between AWS and your data center office or colocation environment AWS Single Sign On (AWS SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications With AWS SSO you can easily manage SSO access and user permissions to all of your accounts in AWS Organi zations centrally AWS Transit Gateway is a service that enables customers to connect their VPCs and their on premises networks to a single gateway Domain controller (DC) – an Active Directory server that responds to authentication requests and store a re plica of Active Directory database Flexible Single Master Operation (FSMO) roles In Active Directory some critical updates are performed by a designated domain controller with a specific role and then replicated to all other DCs Active Directory uses r oles that are assigned to DCs for these special tasks Refer to the Microsoft documentation web site for more information on FSMO roles Global Catalog A glob al catalog server is a domain controller that stores partial copies of all Active Directory objects in the forest It stores a complete copy of all objects in the directory of your domain and a partial copy of all objects of all other forest domains Read Only Domain Controller (RODC ) Read only domain controllers (RODCs) hold a copy of the AD DS database and respond to authentication requests but applications or other servers cannot write to them RODCs are typically deployed in locations where physical s ecurity cannot be provided VPC Peering A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 or IPv6 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 3 addresses Instances in either VPC can communicate with each other as if they are within the same network Shared responsibility model When operating in the AWS Cloud Security and Compliance is a shared responsibility between AWS and the custome r AWS is responsible for security “of” the cloud whereas customers are responsible for security “in” the cloud Figure 1 Shared Responsibility Model when operating in AWS Cloud AWS is responsible for securing its software hardware and the facilities where AWS services are located including securing its computing storage networking and database services In addition A WS is responsible for the security configuration of AWS Managed Services like Amazon DynamoDB Amazon Relational Database Service (Amazon RDS) Amazon Redshift Amazon EMR Amazon WorkSpaces and so on Customers are responsible for implementing appropria te access control policies using AWS Identity and Access Management ( IAM) configuring AWS Security Groups (Firewall) to prevent unauthorized access to ports and enabling AWS CloudTrail Customers are also responsible for enforcing appropriate data loss p revention policies to ensure compliance with internal and external policies as well as detecting and This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 4 remediating threats arising from stolen account credentials or malicious or accidental misuse of AWS If you decide to run 
your own Active Directory on Am azon EC2 instances you have full administrative control of the operating system and the A ctive Directory environment You can set up custom configurations and create a complex hybrid deployment topology However you must operate and support it in the sam e manner as you do with onpremises Active Directory If you use AWS Managed Microsoft AD AWS provides instance deployment in one or multiple regions operational management of your directory monitoring backup patching and recovery services You confi gure the service and perform administrative management of users groups computers and policies AWS Managed Microsoft AD has been audited and approved for use in deployments that require Federal Risk and Authorization Management (FedRAMP) Payment Card Industry Data Security Standard (PCI DSS) US Health Insurance Portability and Accountability Act (HIPAA) or Service Organizational Control (SOC) compliance When used with compliance requirements it is your responsibility to configure the directory password policies and ensure that the entire application and infrastructure deployment meets your compliance requirements For more information see Manag e Compliance for AWS Managed Microsoft AD Directory services options in AWS AWS provides a comprehensive set of services and tools for deploying Microsoft Windows workloads on its rel iable and secure cloud infrastructure AWS Active Directory Connector (AD Connector) and AWS Managed Microsoft AD are fully managed services that allow you to connect AWS applications to an existing Active Directory or host a new Active Directory in the cl oud Together with the ability to deploy selfmanaged Active Directory in Amazon EC2 instances these services cover all cloud and hybrid scenarios for enterprise identity services AD Connector AD Connector can be used in the following scenarios: • Sign in to AWS applications such as Amazon Chime Amazon WorkDocs Amazon WorkMail or Amazon WorkSpaces using corporate credentials (See the list of compatible applications on the AWS Documentation site) This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 5 • Enable Access to the AWS Management Console with AD Crede ntials For large enterprises AWS recommends us ing AWS Single Sign On • Enable multi factor authentication by integrating with your existing RADIUS based MFA infrastructure • Join Windows EC2 instances to your on premises Active Directory Note: Amazon RDS for SQL Server and Amazon FSx for Windows File Server are not compatible with AD Connector Amazon RDS for SQL Server compatible with AWS Managed Microsoft AD only Amazon FSx for Windows File Server can be deployed with AWS Managed Microsoft AD or self managed Active Directory AWS Managed Microsoft Active Directory AWS Directory Service lets you run Microsoft Active Directory as a managed service By default each AWS Managed Microsoft AD has a minimum of two domain controllers each deployed in a separate Availability Zone (AZ) for resiliency and fault tolerance All domain controllers are exclusively yours with nothing shared with any oth er AWS customer AWS provides operational management to monitor update backup and recover domain controller instances You administer users groups computer and group policies using standard Active Directory tools from a Windows computer joined to the AWS Managed Microsoft 
AD domain AWS Managed Microsoft AD preserves the Windows single sign on (SSO) experience for users who access AD DS integrated applications in a hybrid IT environment With AD DS trust support your users can sign in once on premises and access Windows workloads runnin g onpremises and in the cloud You can optionally expand the scale of the directory by adding domain controllers thereby enabling you to distribute requests to meet your performance requirements You can also share the directory with any account and VPC Multi Region replication can be used to automatically replicate your AWS Managed Microsoft AD directory data across multiple Regions so you can improve performance for users and applications in disperse geographic locations AWS Managed Microsoft AD uses native AD replication to replicate your directory’s data securely to the new Region Multi Region replication is only supported for the Enterprise Edition of AWS Managed Microsoft AD AWS Managed Microsoft AD enables you to forward all domain controller’s Windows Security event log to Amazon CloudWatch giving you the ability to monitor your use of the directory and any administrative intervention performed in the course of AWS This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 6 operating the service It is also approved for applications in the AWS Cloud tha t are subject to compliance by the US Health Insurance Portability and Accountability Act (HIPAA) Payment C ard Industry Data Security Standard (PCI DSS) Federal Risk and Authorization Management (FedRAMP) or Service Organizational Control (SOC) when you enable compliance for your directory You can also tailor security with features that enable you to manage password policies and enable secure LDAP communications through Secure Socket Layer (SSL)/Transport Layer Security (TLS) You can also enable multi factor authentication (MFA) for AWS Managed Micros oft AD This authentication provides an additional layer of security when users access AWS applications from the internet such as Amazon WorkSpaces or Amazon QuickSight AWS Managed Microsoft AD enables you to extend your schema and perform LDAP write operations These features combined with advanced security features such as Kerberos Constrained Deleg ation and Group Managed Service Account provide the greatest degree of compatibility for Active Directory aware applications like Microsoft SharePoint Microsoft SQL Server Always On Availability Groups and many NET applications Because Active Directo ry is an LDAP directory you can also use AWS Managed Microsoft AD for Linux Secure Shell (SSH) authentication and other LDAP enabled applications The full list of supported AWS applications is available on the AWS Documentation site AWS Managed Microsoft AD runs actual Window Server 2012 R2 Active Directory Domain Services and operates at the 2012 R2 functional level AWS Managed Microsoft AD is available in two editions: Standard and Enterprise These editions have different storage capacity ; Enterprise Edition also has multi region features Edition Storage capacity Approximate number of objects that can be stored* Approximate number of users in domain* Standard 1 GB ~30000 Up to ~5000 users Enterprise 17 GB ~500000 Over 5000 users * The number of objects varies based on type of objects schema extensions number of attributes and data stored in 
attributes This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 7 Note: AWS Domain Administrators have full administrative access to all domains hosted on AWS See your agreement with AWS and the AWS Data Privacy FAQ for more information about how AWS handles content that you store on AWS systems including directory informat ion You do not have Domain or Enterprise Admin permissions and rely on delegated groups for administration AWS Managed Microsoft AD can be used for following scenarios: managing access to AWS Management Console and cloud services joining EC2 Windows ins tances to Active Directory deploying Amazon RDS databases with Windows authentication using FSx for Windows File Services and signing in to productivity tools like Amazon Chime and Amazon WorkSpaces For more information on this solution see Design consideration for AWS Managed Microsoft Active Directory in this document Active Directory on EC2 If you prefer to extend your Active Directory to AWS and manage it yourself for flexibility or other reasons you h ave the option of running Active Directory on EC2 For more information s ee Design considerations for running Active Directory on EC2 instances in this document Comparison of Active Directory Services on AWS The following table compares the features and functions between various Directory Services options available on AWS Many features are not applicable directly to AWS AD Connector because it is actins only as a proxy to the existing Active Directory domain Function AWS AD Connector AWS Managed Microsoft AD Active Directory on EC2 Managed service yes yes no Multi Region deployment n/a yes Enterprise Edition yes Share directory with multiple accounts no yes no Supported by AWS applications (Amazon Chime Amazon WorkSpaces AWS Single Sign On & etc) yes yes yes (through federation or AD Connector) This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 8 Function AWS AD Connector AWS Managed Microsoft AD Active Directory on EC2 Supported by RDS (SQL Server Oracle MySQL PostgreSQL and MariaDB) n/a yes no Supported by FSx for Windows File Server n/a yes yes Creating users and groups yes yes yes Joining computers to the domain yes yes yes Create trusts with existing Active Directory domains and forests n/a yes yes Seamless domain join for Windows and Linus EC2 instances yes yes yes with AWS AD Connector Schema extensions n/a yes yes Add domain controllers n/a yes yes Group Managed Service Accounts n/a yes Depends on the Windows Server version Kerberos constrained delegation n/a yes yes Support Microsoft Enterprise CA n/a yes yes Multi Factor Authentication yes through RADIUS yes through RADIUS yes with AD Connector Group policy n/a yes yes Active Directory Recycle bin n/a yes yes PowerShell support n/a yes yes This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 9 Core infrastructure design on AWS for Windows Workloads and Directory Services Planning AWS accounts and Organization AWS Organizations helps you 
centrally manage your AWS accounts, identity services, and access policies for your workloads on AWS. Whether you are a growing startup or a large enterprise, Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. For more information, refer to the AWS Organizations User Guide. With AWS Organizations you can centrally define critical resources and make them available to accounts across your organization. For example, you can authenticate against your central identity store and enable applications deployed in other accounts to access it. If your users need to manage AWS services and access AWS applications with their Active Directory credentials, we recommend integrating your identity service with the management account in AWS Organizations:
• Deploy AWS Managed AD in the management account with a trust to your on-premises Active Directory to allow users from any trusted domain to access AWS applications. Share AWS Managed AD with other accounts across your organization.
• Deploy AWS Single Sign-On in the management account to centrally manage access to multiple AWS accounts and business applications, and to provide users with single sign-on access to all their assigned accounts and applications from one place. AWS SSO also includes built-in integrations to many business applications such as Salesforce, Box, and Microsoft Office 365.
Network design considerations for AWS Managed Microsoft AD
Network design for Microsoft workloads and directory services consists of network connectivity and DNS name resolution. To plan the network topology for your organization, refer to the whitepaper Building a Scalable and Secure Multi-VPC AWS Network Infrastructure and consider the following recommendations:
• Plan your IP networks for Microsoft workloads without overlapping address spaces. Microsoft does not recommend using Active Directory over NAT.
• Place directory services into a centralized VPC that is reachable from any other VPC with workloads depending on Active Directory.
• By default, instances that you launch into a VPC cannot communicate with your on-premises network. To extend your existing AD DS into the AWS Cloud, you must connect your on-premises network to the VPC in one of two ways: by using Virtual Private Network (VPN) tunnels or by using AWS Direct Connect. To connect multiple VPCs in AWS, you can use VPC peering or AWS Transit Gateway.
Network port requirements and security groups
Active Directory requires certain network ports to be open to allow traffic for LDAP, AD DS replication, user authentication, Windows Time services, Distributed File System (DFS), and more. When you deploy Active Directory on EC2 instances using the AWS Quick Start, or when you use AWS Managed Microsoft AD, a new security group with all required port rules is created automatically. If you manually deploy your Active Directory, you need to create a security group and configure rules for all required network protocols, as in the sketch that follows. For a complete list of ports, see Active Directory and Active Directory Domain Services Port Requirements in the Microsoft TechNet Library.
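If you build the security group yourself, the following sketch shows one way to do it with the AWS SDK for Python (Boto3). The Region, VPC ID, and client CIDR range are placeholders, and only a representative subset of the ports Active Directory needs is shown; consult the Microsoft port requirements referenced above for the full list before relying on a rule set like this.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values -- replace with your own VPC ID and trusted client CIDR.
VPC_ID = "vpc-0123456789abcdef0"
CLIENT_CIDR = "10.0.0.0/16"

sg = ec2.create_security_group(
    GroupName="self-managed-ad-dc",
    Description="Inbound rules for self-managed Active Directory domain controllers",
    VpcId=VPC_ID,
)

# A representative subset of the ports Active Directory requires.
rules = [
    ("tcp", 53, 53),        # DNS
    ("udp", 53, 53),        # DNS
    ("tcp", 88, 88),        # Kerberos
    ("udp", 88, 88),        # Kerberos
    ("udp", 123, 123),      # Windows Time (NTP)
    ("tcp", 135, 135),      # RPC endpoint mapper
    ("tcp", 389, 389),      # LDAP
    ("udp", 389, 389),      # LDAP
    ("tcp", 445, 445),      # SMB / DFS
    ("tcp", 464, 464),      # Kerberos password change
    ("tcp", 636, 636),      # LDAPS
    ("tcp", 3268, 3269),    # Global Catalog
    ("tcp", 49152, 65535),  # RPC dynamic port range
]

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": proto,
            "FromPort": low,
            "ToPort": high,
            "IpRanges": [{"CidrIp": CLIENT_CIDR}],
        }
        for proto, low, high in rules
    ],
)
```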
DNS name resolution
Active Directory relies heavily on DNS and hosts its own DNS service on domain controllers. To establish seamless name resolution in all your VPCs and your on-premises network, create a Route 53 Resolver: deploy inbound and outbound endpoints in your VPC, and configure conditional forwarders in the Route 53 Resolver for all of your Active Directory domains (including AWS Managed AD and on-premises Active Directory). Share the centralized Route 53 Resolver endpoints across all VPCs in your organization. On your on-premises DNS servers, create conditional forwarders for all Route 53 DNS zones and the DNS zones on AWS Managed AD, and point them to the Route 53 Resolver endpoints (see the example after the note below).
Figure 2: Route 53 Resolver configuration for a hybrid network
Here are design considerations for DNS resolution:
• Make all Active Directory DNS domains resolvable for all clients, because clients use DNS to locate Active Directory services and to register their own DNS names using dynamic updates.
• Try to keep DNS name resolution local to the AWS Region to reduce latency.
• Use the Amazon DNS server (the .2 resolver) as a forwarder for all DNS domains that are not authoritative on the DNS servers running on your Active Directory domain controllers. This setup allows your DCs to recursively resolve records in Amazon Route 53 private zones and to use Route 53 Resolver conditional forwarders.
• Use Route 53 Resolver endpoints to create a DNS resolution hub and manage DNS traffic by creating conditional forwarders.
For more information on designing a DNS name resolution strategy in a hybrid scenario, see the Amazon Route 53 Resolver for Hybrid Clouds blog post.
Note: The Amazon EC2 instance limits the number of packets that can be sent to the Amazon-provided DNS server to a maximum of 1024 packets per second per network interface. This limit cannot be increased. If you run into this performance limit, you must set up conditional forwarding for Amazon Route 53 private zones to use the Amazon DNS server (the .2 resolver) and use root hints for internet name resolution. This setup reduces the chances of you exceeding the 1024-packet limit on the AWS DNS resolver.
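As an illustration of the Resolver setup described above, the following sketch uses Boto3 to create an outbound Route 53 Resolver endpoint, a conditional forwarding rule for an assumed on-premises domain (corp.example.com), and an association of that rule with a workload VPC. All resource IDs, IP addresses, and names are placeholders; an inbound endpoint for on-premises-to-AWS resolution would be created the same way with Direction="INBOUND".

```python
import boto3

r53r = boto3.client("route53resolver", region_name="us-east-1")

# Placeholder subnet, security group, VPC, and on-premises DNS values.
OUTBOUND_SUBNETS = ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"]
RESOLVER_SG = "sg-0123456789abcdef0"
WORKLOAD_VPC = "vpc-0123456789abcdef0"
ONPREM_DOMAIN = "corp.example.com"
ONPREM_DNS_IPS = ["10.10.0.10", "10.10.0.11"]

# Outbound endpoint that forwards queries from the VPC toward on-premises DNS.
endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="ad-outbound-endpoint-1",
    Name="ad-outbound",
    SecurityGroupIds=[RESOLVER_SG],
    Direction="OUTBOUND",
    IpAddresses=[{"SubnetId": s} for s in OUTBOUND_SUBNETS],
)["ResolverEndpoint"]

# Conditional forwarding rule for the on-premises Active Directory domain.
rule = r53r.create_resolver_rule(
    CreatorRequestId="ad-forward-rule-1",
    Name="forward-corp-example-com",
    RuleType="FORWARD",
    DomainName=ONPREM_DOMAIN,
    TargetIps=[{"Ip": ip, "Port": 53} for ip in ONPREM_DNS_IPS],
    ResolverEndpointId=endpoint["Id"],
)["ResolverRule"]

# Associate the rule with each workload VPC that must resolve the AD domain.
r53r.associate_resolver_rule(
    ResolverRuleId=rule["Id"],
    Name="corp-example-com-workload-vpc",
    VPCId=WORKLOAD_VPC,
)
```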
Design considerations for AWS Managed Microsoft Active Directory
Active Directory design depends on the network and account design. Before you select the right Active Directory topology, you must choose your network and organizational design. Although there is no one-size-fits-all answer for how many AWS accounts a particular customer should have, most companies create more than one AWS account, because multiple accounts provide the highest level of resource and billing isolation, in the following cases:
• The business requires strong fiscal and budgetary billing isolation between specific workloads, business units, or cost centers.
• The business requires administrative isolation between workloads.
• The business requires a particular workload to operate within specific AWS service limits and not impact the limits of another workload.
• The business's workloads depend on specific instance reservations to support high availability (HA) or disaster recovery (DR) capacity requirements.
Single account, AWS Region, and VPC
The simplest case is when you need to deploy a new solution in the cloud from scratch. You can deploy AWS Managed Microsoft AD in minutes and use it for most of the services and applications that require Active Directory. This solution is ideal for scenarios with no additional requirements for logical isolation between application tiers or administrators. A scripted example of creating such a directory follows the figure below.
Figure 3: Managed Active Directory architecture deployed by the Quick Start
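A minimal sketch of this single-account deployment using Boto3 is shown below. The domain name, password, VPC, and subnet IDs are placeholders; the two subnets must be in different Availability Zones, and the Enterprise Edition would be chosen instead if multi-Region replication is required later.

```python
import boto3

ds = boto3.client("ds", region_name="us-east-1")

# Placeholder domain, VPC, and subnet values -- do not hardcode real
# passwords; retrieve them from a secrets store in practice.
response = ds.create_microsoft_ad(
    Name="corp.example.com",          # fully qualified domain name
    ShortName="CORP",                 # NetBIOS name
    Password="S3cureAdminPassw0rd!",  # password for the delegated Admin account
    Description="AWS Managed Microsoft AD for the example workload",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",
        # Two subnets in different Availability Zones are required.
        "SubnetIds": ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"],
    },
    Edition="Standard",               # or "Enterprise" for multi-Region replication
)

print("Directory ID:", response["DirectoryId"])
```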
Multiple accounts and VPCs in one AWS Region
Large organizations use multiple AWS accounts for administrative delegation and billing purposes. You can share a single AWS Managed Microsoft AD with multiple AWS accounts within one AWS Region. This capability makes it easier and more cost-effective for you to manage directory-aware workloads from a single directory across accounts and VPCs. This option also allows you to seamlessly join your Amazon EC2 Windows instances to AWS Managed Microsoft AD.
Figure 4: Sharing a single AWS Managed Microsoft AD with another account
AWS recommends that you create a separate account for identity services like Active Directory and only allow a very limited group of administrators to have access to this account. Generally, you should treat Active Directory in the cloud in the same manner as on-premises Active Directory: just as you would limit access to a physical data center, make sure to limit administrative access to the AWS account. Create additional AWS accounts as necessary in your organization and share the AWS Managed Microsoft AD with them (see the sharing example that follows). After you have shared the service and configured routing, these users can use Active Directory to join EC2 Windows instances, but you maintain control of all administrative tasks. Deploy AWS Managed AD in the management account of your AWS Organization. This allows you to use Managed AD for authentication with AWS Identity and Access Management (IAM) to access the AWS Management Console and other AWS applications using your Active Directory credentials.
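The following sketch illustrates sharing an existing directory with a second account using Boto3. The directory ID and account ID are placeholders; the ORGANIZATIONS share method assumes both accounts belong to the same AWS Organization, otherwise the HANDSHAKE method with an explicit acceptance in the consumer account would be used.

```python
import boto3

ds = boto3.client("ds", region_name="us-east-1")

# Placeholder directory and consumer-account IDs.
DIRECTORY_ID = "d-1234567890"
CONSUMER_ACCOUNT_ID = "111122223333"

# Share the directory with another account in the same organization.
ds.share_directory(
    DirectoryId=DIRECTORY_ID,
    ShareNotes="Shared for EC2 domain join in the workload account",
    ShareTarget={"Id": CONSUMER_ACCOUNT_ID, "Type": "ACCOUNT"},
    ShareMethod="ORGANIZATIONS",  # "HANDSHAKE" for accounts outside the organization
)

# With the HANDSHAKE method, the consumer account would accept the share:
# boto3.client("ds").accept_shared_directory(SharedDirectoryId="d-...")
```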
Multiple AWS Regions deployment
AWS Managed Microsoft AD Enterprise Edition supports multi-Region deployment. You can use automated multi-Region replication in all Regions where AWS Managed Microsoft AD is available. AWS services such as Amazon RDS for SQL Server and Amazon FSx connect to the local instances of the global directory. This allows your users to sign in once to AD-aware applications running in AWS, as well as to AWS services like Amazon RDS for SQL Server, in any AWS Region, using credentials from AWS Managed Microsoft AD or a trusted AD domain or forest. Refer to the AWS Directory Service documentation for the current list of AWS services supporting the multi-Region replication feature. With multi-Region replication in AWS Managed Microsoft AD, AD-aware applications such as SharePoint and SQL Server Always On, and AWS services such as Amazon RDS for SQL Server and Amazon FSx for Windows File Server, use the directory locally for high performance and are multi-Region for high resiliency. The following list comprises additional benefits of multi-Region replication:
• It enables you to deploy a single AWS Managed Microsoft AD instance globally and quickly, and it eliminates the heavy lifting of self-managing a global AD infrastructure.
• Optimal performance for workloads deployed in multiple Regions.
• Multi-Region resiliency. AWS Managed Microsoft AD handles automated software updates, monitoring, recovery, and the security of the underlying AD infrastructure across all Regions.
• Disaster recovery. In the event that all domain controllers in one Region are down, AWS Managed Microsoft AD recovers the domain controllers and replicates the directory data automatically. Meanwhile, domain controllers in other Regions are up and running.
To deploy AWS Managed Microsoft AD across multiple Regions, you must create it in a primary Region and then add one or more replicated Regions (a scripted example follows at the end of this section). Consider the following factors for your Active Directory design:
• When you deploy a new Region, AWS Managed Microsoft AD creates two domain controllers in the selected VPC in the new Region. You can add more domain controllers later for scalability.
• AWS Managed Microsoft AD uses a backend network for replication and communications between domain controllers.
• AWS Managed Microsoft AD creates a new Active Directory site and names it the same as the Region, for example, us-east-1. You can rename it later using the Active Directory Sites & Services tool.
• AWS Managed AD is configured to use change notifications for inter-site replication to eliminate replication delays.
After you add your new Region, you can do any of the following tasks:
• Add more domain controllers to the new Region for horizontal scalability.
• Share your directory with more AWS accounts per Region. Directory sharing configurations are not replicated from the primary Region, and you may have a different sharing configuration in each Region based on your security requirements.
• Enable log forwarding to retrieve your directory's security logs using Amazon CloudWatch Logs from the new Region. When you enable log forwarding, you must provide a log group name in each Region where you replicated your directory.
• Enable Amazon Simple Notification Service (Amazon SNS) monitoring for the new Region to track your directory health status per Region.
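A sketch of adding a replicated Region and then enabling per-Region log forwarding and SNS monitoring with Boto3 follows. The directory ID, VPC, subnets, log group, and topic names are placeholders, and the assumption that per-Region settings are configured with a client in the replicated Region should be validated against the AWS Directory Service documentation.

```python
import boto3

DIRECTORY_ID = "d-1234567890"  # Enterprise Edition directory (placeholder)

# Add a replicated Region from the primary Region.
ds = boto3.client("ds", region_name="us-east-1")
ds.add_region(
    DirectoryId=DIRECTORY_ID,
    RegionName="eu-west-1",
    VPCSettings={
        "VpcId": "vpc-0feedfacecafebeef",
        "SubnetIds": ["subnet-0111aaaa2222bbbb3", "subnet-0444cccc5555dddd6"],
    },
)

# Per-Region operational settings: log forwarding and SNS health notifications,
# configured here with clients in the replicated Region (assumption).
ds_eu = boto3.client("ds", region_name="eu-west-1")
logs_eu = boto3.client("logs", region_name="eu-west-1")

logs_eu.create_log_group(logGroupName="/aws/directoryservice/d-1234567890")
ds_eu.create_log_subscription(
    DirectoryId=DIRECTORY_ID,
    LogGroupName="/aws/directoryservice/d-1234567890",
)
ds_eu.register_event_topic(
    DirectoryId=DIRECTORY_ID,
    TopicName="managed-ad-health-eu-west-1",  # existing SNS topic name (placeholder)
)
```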
activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Activ e Directory Domain Services on AWS 19 Figure 7 Deplo ying Active Directory on EC2 instances in a single Region for single VPC Figure 8 Deploying Active Directory on EC2 instances in a single Region for multiple VPCs Consider the following points when deploying Active Directory in this architecture: • We recommend deploying at least two domain controllers (DCs) in a Region These domain controllers should be placed in different AZs for availability reasons • DCs and other non internet facing servers should be placed in private subnets • If you require additional DCs due to performance you can add more DCs to existing AZs or deploy to another available AZ This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 20 • Configure VPCs in a Region as a single A ctive Directory site and define A ctive Directory subnets accordingly This configuration ensures that all of your clients correctly select the closest available DC • If you have multiple VPCs you can centralize the Active Directory services in one of the existing VPCs or create a shared services VPC to centralize the domain controllers • You must ensure you have highly available network connectivity between VPCs such as VPC peering If you are connecting the VPCs using VPNs or other methods ensure connectivity is highly available • If you want to use your self managed Active Directory credentials to acc ess AWS Services or thirdparty services you can integrate your self managed AD with AWS IAM and AWS Single Sign On using AWS AD Connector or AWS Managed AD through a trust relationship In these cases AD Connector or AWS Managed AD must be deployed in t he management account of your organization Multi region/global deployment of self managed AD If you are operating in more than one Region and require Active Directory to be available in these Regions use the multi region/global deployment scenario Withi n each of the Regions use the guidelines for single Region deployment as all of the single Region best practices still apply The following diagrams depict how Active Directory can be deployed in multiple Regions In this example we are showing Active Di rectory deployed in three Regions that are interconnected to each other using cross Region VPC peering In addition these Regions are also connected to the corporate network using AWS Direct Connect and VPN This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 21 Figure 9 Deploying Active Directory on EC2 instances in multiple Regions with multiple VPCs Consider the following recommendations when deploying Active Directory in this architecture : • Deploy a t least two domain controllers in each Region These domain controllers should be placed in different AZs for availability reasons • Configure VPCs in a reg ion as a single A ctive Directory site and define A ctive Directory subnets accordingly This configuration ensures all of your clients will correctly select the closest available domain controller • Ensure robust inter Region connectivity exists between all of the Regions Within AWS you can leverage cross Region VPC peering to 
achieve highly available private connectivity between the Regions You can also leverage the Transit VPC solution to interconnect multiple regions Designing A ctive Directory sites an d services topology It’s important to define A ctive Directory sites and subnets correctly to avoid clients from using domain controllers that are located far away as this would cause increased latency See How Domain Controllers are Located in Windows This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Ser vices on AWS 22 Follow these best practices for configuring sites and services: • Configure one A ctive Directory site per AWS Region If you are operating in multiple AWS Regions we recommend configuring one A ctive Directory site for each of these Regions • Define the entire VPC as a subnet and assign it to the A ctive Directory site defined for this Region • If you have multiple VPCs in the same Region define each of these VPCs as separate subnets and assign it to the single A ctive Directory site set up for this Region This setup allows you to use domain controllers in that Region to service all clients in that region • If you have enabled IPv6 in your Amazon VPC create the necessary IPv6 subnet definition and assign it to this A ctive Directory site • Define all IP address ranges If clients exist in undefined IP address ranges the clients might not be associated with the correct A ctive Directory site • If you have reliable high speed connectivity between all of the sites you can use a single site link for all of your AD sites and maintain a single replication configuration • Use consistent sites names in all AD forests connected by trusts Security considerations Trust relationships with on premises A ctive Directory Whether you are deploying Active Directory on EC2 instances or using AWS Managed Microsoft AD these are the three common deployment patterns seen on AWS 1 Deploy a standal one forest/domain on AWS with no trust In this model you set up a new forest and domain on AWS which is different and separate from the current Active Directory that is running on premises In this deployment both accounts (user credentials service acc ounts) and resources (computer objects) reside in Active Directory running on AWS and most or all of the member servers run on AWS in single or multiple Regions For this deployment there is no network connectivity requirement between on premises and AWS for the purposes of Active Directory as nothing is shared between the two A ctive Directory forests This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 23 2 Deploy a new forest/domain on AWS with one way trust If you are planning on leveraging credentials from an on premises A ctive Directory on AWS member serve rs you must establish at least a one way trust to the Active Directory running on AWS In this model the AWS domain becomes the resource domain where computer objects are located and on premises domain becomes the account domain Note: You must have robust connectivity between your data center and AWS A connectivity issue can break the authentication and make the whole solution not accessible for users Consider to extend your Active Directory domains to 
AWS to eliminate dependency on connectivity with onpremises infrastructure or deploy a multi path AWS Direct Connect or VPN connection 3 Extend your existing domain to AWS In this model you extend your existing Active Directory deployment from on premises to AWS which means adding additional domain controllers (running on Amazon EC2) to your existing domain and placing them in multiple AZs within your Amazon VPC If you are operating in multiple Regions add domain controllers in each of these Regions This deployment is easy flexible and provides the following advantages: o You are not required to set up additional trusts o DCs in AWS are handling both accounts and resources o More resilient to network connectivity issues o You can seamlessly set up and use AWS Cloud in a hybrid scenario with least impact to the applications (Note that network connectivity is required between your data center and AWS for initial and on going replication of data between the domain controllers) When you use cross forest trust relationships in Active Direct ory you need to use consistent Active Directory site names in both forests to have optimal performance Refer to the article Domain Locator Across a Forest Trust for more information See How Domain and Forest Trusts Work on the Microsoft Doc umentation website for more information This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 24 Multifactor authentication Multi factor authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password With MFA enabled when users sign in to the AWS Management Console they are prompted for thei r user name and password (the first factor —what they “know ”) then prompted for an authentication response from their AWS MFA device (the second factor —what they “have ”) Taken together these multiple factors provide increased security for your AWS accoun t settings and resources We recommend enabling MFA on all of your privileged accounts regardless of whether you are using IAM or federating through SSO AWS account security Since you are running your domain controllers on Amazon EC2 securing your AWS account is an important process in securing your Active Directory domain Follow these recommendations to make sure your AWS account is secure • Enable MFA and then lock away your AWS root user credential • Use IAM groups to manage permission if you are using IAM users • Grant least privilege to all your users within AWS • Enable MFA for all privileged users • Use EC2 roles for applications that run on EC2 instances • Do not share access keys • Rotate credentials regularly • Turn on and analyze log files in AWS CloudTrail VPC Flow Logs and Amazon S3 bucket logs • Turn on encryption for data at rest and in transit where necessary Domain controller security Domain controllers provide the physical storage for the AD DS database i n addition to providing the services and data that allow enterprises to effectively manage their servers workstations users and applications If privileged access to a domain controller is obtained by a malicious user that user can modify corrupt or destroy the AD DS database and by extension all of the systems and accounts that are managed This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ 
activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 25 by Active Directory Make sure your domain controller is secure to avoid compromising your Active Directory data The following points are some of the best pract ices to secure domain controllers running on AWS: • Secure the AWS account where the domain controllers are running by following least privilege and role based access control • Ensure unauthorized users don’t have access in your AWS account to create/access A mazon Elastic Block Store (Amazon EBS) snapshots launch or terminate EC2 Instances or create/copy EBS volumes • Ensure you are deploying your domain controllers in a private subnet without internet access Ensure that subnets where domain controllers are running don’t have a route to a NAT gateway or other device that would provide outbound internet access • Keep your security patches up todate on your domain controllers We recommend you first test the security patches in a non production environment • Restrict ports and protocols that are allowed into the domain controllers by using security groups Allow remote management like remote desktop protocol (RDP) only from trusted networks • Leverage the Amazon EBS encryption feature to encrypt the root and addit ional volumes of your domain controllers and use AWS Key Management Service (AWS KMS) for key management • Follow Microsoft recommended security configuration baselines and Best Practices for Securing Active Directory Other considerations FSMO Roles You can follow the same recommendation you would follow for your on premises deployment to determine FSMO roles on DCs See also best practices from Microsoft In the case of AWS Managed Microsoft AD all domain controllers and FSMO roles assignments are managed by AWS and do not require you to manage or change them Global Catalog Unless you have slow connections or an extremely large A ctive Directory database w e recommend adding global catalog role to all of your domain This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 26 controllers in multi domain forests (except the domain controller with the Infrastructure Master role) If you are hosting Microsoft Exchange in AWS Cloud at least one global catalog server is required in a site with Exchange servers For more information about global catalog see Microsoft documentation Since there is only one domain in the forest for AWS Managed Microsoft AD all domain controllers are configured as global catalog and will have full informatio n about all objects Read Only Domain Controllers (RODC) It’s possible to deploy RODC on AWS if you are running A ctive Directory on EC2 instances and require it and there are no special considerations for doing so AWS Managed Microsoft AD does not suppo rt RODCs All of the domain controllers that are deployed as a part of AWS Managed Microsoft AD are writable domain controllers Conclusion AWS provides several options for deploying and managing Active Directory Domain Services in the cloud and hybrid env ironments You can leverage AWS Managed Microsoft AD if you no longer want to focus on heavy lifting like managing the availability of the domain controllers patching backups and so on Or you can run Active Directory on EC2 instances if you need to have full administrative control on your 
Active Directory. In this whitepaper, we have discussed these two main approaches to deploying Active Directory on AWS and have provided guidance and considerations for each design. Depending on your deployment pattern, scale requirements, and SLA, you may select one of these options to support your Windows workloads on AWS.
Contributors
Contributors to this document include:
• Vladimir Provorov, Senior Solutions Architect, Amazon Web Services
• Vinod Madabushi, Enterprise Solutions Architect, Amazon Web Services
Further Reading
For additional information, see:
• AWS Whitepapers
• AWS Directory Service
• Microsoft Workloads on AWS
• Active Directory Domain Services on the AWS Cloud: Quick Start Reference Deployment
• AWS Documentation
Document Revisions
November 2020: AWS Managed Microsoft AD multi-Region feature update
August 2020: Numerous updates throughout
December 2018: First publication
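To make the multi-Region deployment steps described earlier in this paper more concrete, the following minimal sketch shows how the replicated-Region workflow could be driven with the AWS SDK for Python (Boto3). The directory ID, Region names, VPC ID, subnet IDs, and log group name are placeholders, and the sketch assumes the directory already exists in the primary Region; treat it as an illustration of the relevant Directory Service calls rather than a complete deployment script.

```python
"""Minimal sketch: add a replicated Region to an existing AWS Managed Microsoft AD
directory, then enable log forwarding in that Region. IDs and names are placeholders."""
import boto3

DIRECTORY_ID = "d-1234567890"                 # placeholder: your directory ID
NEW_REGION = "eu-west-1"                      # placeholder: Region to replicate into
VPC_ID = "vpc-0123456789abcdef0"              # placeholder: VPC in the new Region
SUBNET_IDS = ["subnet-aaa111", "subnet-bbb222"]  # placeholder: subnets in two AZs

# AddRegion is called against the primary Region where the directory was created.
ds_primary = boto3.client("ds", region_name="us-east-1")
ds_primary.add_region(
    DirectoryId=DIRECTORY_ID,
    RegionName=NEW_REGION,
    VPCSettings={"VpcId": VPC_ID, "SubnetIds": SUBNET_IDS},
)

# Log forwarding is configured per Region, so use a client in the new Region.
# The CloudWatch Logs group must already exist and allow Directory Service to write to it.
ds_new_region = boto3.client("ds", region_name=NEW_REGION)
ds_new_region.create_log_subscription(
    DirectoryId=DIRECTORY_ID,
    LogGroupName="/aws/directoryservice/" + DIRECTORY_ID,  # placeholder log group
)
```

Directory sharing and Amazon SNS monitoring for the new Region would be configured with similar per-Region calls, in line with the guidance above.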
General
Amazon_EC2_Reserved_Instances_and_Other_Reservation_Models
Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Amazon EC2 Reserved Instances and Other AWS Reservation Models: AWS Whitepaper Copyright © Amazon Web Services Inc and/or its affiliates All rights reserved Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's in any manner that is likely to cause confusion among customers or in any manner that disparages or discredits Amazon All other trademarks not owned by Amazon are the property of their respective owners who may or may not be affiliated with connected to or sponsored by AmazonAmazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Table of Contents Abstract1 Abstract1 Introduction2 Amazon EC2 Reserved Instances3 Reserved Instances payment options3 Standard vs Convertible offering classes3 Regional and zonal Reserved Instances4 Differences between regional and zonal Reserved Instances4 Limitations for instance size flexibility5 Maximizing Utilization with Size Flexibility in Regional Reserved Instances5 Normalization factor for dedicated EC2 instances7 Normalization factor for bare metal instances7 Savings Plans9 Reservation models for other AWS services10 Amazon RDS reserved DB instances10 Amazon ElastiCache reserved nodes10 Amazon Elasticsearch Service Reserved Instances10 Amazon Redshift reserved nodes11 Amazon DynamoDB reservations11 Reserved Instances billing12 Usage billing 12 Consolidated billing 13 Reserved Instances: Capacity reservations13 Blended rates 14 How discounts are applied14 Maximizing the value of reservations15 Measure success15 Maximize discounts by standardizing instance type15 Reservation management techniques16 Reserved Instance Marketplace16 AWS Cost Explorer16 AWS Cost and Usage Report17 Reserved Instances on your cost and usage report17 AWS Trusted Advisor18 Conclusion 19 Contributors 20 Document revisions21 Notices22 iiiAmazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Abstract Amazon EC2 Reserved Instances and Other AWS Reservation Models Publication date: March 29 2021 (Document revisions (p 21)) Abstract This document is part of a series of AWS whitepapers designed to support your cloud journey and discusses Amazon EC2 Reserved Instances and reservation models for other AWS services Its aim is to empower you to maximize the value of your investments improve forecasting accuracy and cost predictability create a culture of ownership and cost transparency and continuously measure your optimization status 1Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Introduction The cloud is well suited for variable workloads and rapid deployment yet many cloudbased workloads follow a more predictable pattern For such applications your organization can achieve significant cost savings by using Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances Amazon EC2 Reserved Instances enable your organization to commit to usage parameters at the time of purchase to achieve a lower hourly rate Reservation models are also available for Amazon Relational Database Service (Amazon RDS) Amazon ElastiCache Amazon Elasticsearch Service (Amazon ES) Amazon Redshift and Amazon DynamoDB This whitepaper discusses Amazon EC2 Reserved Instances and the reservation models for these other AWS services 2Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Reserved Instances 
payment options Amazon EC2 Reserved Instances When you purchase Reserved Instances you make a oneyear or threeyear commitment and receive a billing discount of up to 72 percent in return When used for the appropriate workloads Reserved Instances can save you a lot of money Note that a Reserved Instance is not an instance dedicated to your organization It is a billing discount applied to the use of OnDemand Instances in your account These OnDemand Instances must match certain attributes of the Reserved Instances you purchased to benefit from the billing discount You pay for the entire term of a Reserved Instance regardless of actual usage so your cost savings are closely tied to use Therefore it is important to plan and monitor your usage to make the most of your investment When you purchase a Reserved Instance in a specific Availability Zone it provides a capacity reservation This improves the likelihood that the compute capacity you need is available in a specific Availability Zone when you need it A Reserved Instance purchased for an AWS Region does not provide capacity reservation Reserved Instances payment options You can purchase Reserved Instances through the AWS Management Console The following payment options are available for most Reserved Instances: •No Upfront – No upfront payment is required You are billed a discounted hourly rate for every hour within the term regardless of whether the Reserved Instance is being used No Upfront Reserved Instances are based on a contractual obligation to pay monthly for the entire term of the reservation A successful billing history is required before you can purchase No Upfront Reserved Instances •Partial Upfront – A portion of the cost must be paid up front and the remaining hours in the term are billed at a discounted hourly rate regardless of whether you’re using the Reserved Instance •All Upfront – Full payment is made at the start of the term with no other costs or additional hourly charges incurred for the remainder of the term regardless of hours used Reserved Instances with a higher upfront payment provide greater discounts You can also find Reserved Instances offered by thirdparty sellers at lower prices and shorter terms on the Reserved Instance Marketplace As you purchase more Reserved Instances volume discounts begin to apply that let you save even more For more information see Amazon EC2 Reserved Instance Pricing Standard vs Convertible offering classes When you purchase a Reserved Instance you can choose between a Standard or Convertible offering class Table 1 – Comparison of standard and Convertible Reserved Instances Standard Reserved Instance Convertible Reserved Instance Oneyear to threeyear term Oneyear to threeyear term 3Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Regional and zonal Reserved Instances Standard Reserved Instance Convertible Reserved Instance Enables you to modify Availability Zone scope networking type and instance size (within the same instance type) of your Reserved Instance For more information see Modifying Reserved InstancesEnables you to exchange one or more Convertible Reserved Instances for another Convertible Reserved Instance with a different configuration including instance family operating system and tenancy There are no limits to how many times you perform an exchange as long as the target Convertible Reserved Instance is of an equal or higher value than the Convertible Reserved Instances that you are exchanging For more information see Exchanging Convertible 
Reserved Instances Can be sold in the Reserved Instance MarketplaceCannot be sold in the Reserved Instance Marketplace Standard Reserved Instances typically provide the highest discount levels Oneyear Standard Reserved Instances provide a similar discount to threeyear Convertible Reserved Instances If you want to purchase capacity reservations see OnDemand Capacity Reservations Convertible Reserved Instances are useful when: •Purchasing Reserved Instances in the payer account instead of a subaccount You can more easily modify Convertible Reserved Instances to meet changing needs across your organization •Workloads are likely to change In this case a Convertible Reserved Instance enables you to adapt as needs evolve while still obtaining discounts and capacity reservations •You want to hedge against possible future price drops •You can’t or don’t want to ask teams to do capacity planning or forecasting •You expect compute usage to remain at the committed amount over the commitment period Regional and zonal Reserved Instances When you purchase a Reserved Instance you determine the scope of the Reserved Instance The scope is either regional or zonal •Regional: When you purchase a Reserved Instance for a Region it's referred to as a regional Reserved Instance •Zonal : When you purchase a Reserved Instance for a specific Availability Zone it's referred to as a zonal Reserved Instance Differences between regional and zonal Reserved Instances The following table highlights some key differences between regional Reserved Instances and zonal Reserved Instances: Table 2 – Comparison of regional and zonal Reserved Instances 4Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Limitations for instance size flexibility Regional Reserved InstancesZonal Reserved Instances Availability Zone flexibilityThe Reserved Instance discount applies to instance usage in any Availability Zone in the specified RegionNo Availability Zone flexibility— the Reserved Instance discount applies to instance usage in the specified Availability Zone only Capacity reservationNo capacity reservation—a regional Reserved Instance does not provide a capacity reservationA zonal Reserved Instance provides a capacity reservation in the specified Availability Zone Instance size flexibilityThe Reserved Instance discount applies to instance usage within the instance family regardless of size Only supported on Amazon Linux/Unix Reserved Instances with default tenancy For more information see Instance size flexibility determined by normalization factorNo instance size flexibility— the Reserved Instance discount applies to instance usage for the specified instance type and size only Limitations for instance size flexibility Instance size flexibility does not apply to the following Reserved Instances: •Reserved Instances that are purchased for a specific Availability Zone (zonal Reserved Instances) •Reserved Instances with dedicated tenancy •Reserved Instances for Windows Server Windows Server with SQL Standard Windows Server with SQL Server Enterprise Windows Server with SQL Server Web RHEL and SUSE Linux Enterprise Server •Reserved Instances for G4 instances Maximizing Utilization with Size Flexibility in Regional Reserved Instances For additional flexibility all Regional Linux Reserved Instances with shared tenancy apply to all sizes of instances within an instance family and an AWS Region even if you are using them across multiple accounts via Consolidated Billing The only attributes that must be matched are the 
instance type (for example m4) tenancy (must be default) and platform (must be Linux) All new and existing Reserved Instances are sized according to a normalization factor based on instance size as follows Table 3 – Regional Reserved Instance sizes and normalization factors Instance size Normalization factor nano 025 micro 05 small 1 5Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Maximizing Utilization with Size Flexibility in Regional Reserved Instances Instance size Normalization factor medium 2 large 4 xlarge 8 2xlarge 16 4xlarge 32 8xlarge 64 9xlarge 72 10xlarge 80 12xlarge 96 16xlarge 128 24xlarge 192 32xlarge 256 For example if you have a Reserved Instance for a c48xlarge it applies to any usage of a Linux c4 instance with shared tenancy in the AWS Region such as: •One c48xlarge instance •Two c44xlarge instances •Four c42xlarge instances •Sixteen c4large instances It also includes combinations of instances for example a t2medium instance has a normalization factor of 2 If you purchase a t2medium default tenancy Amazon Linux/Unix Reserved Instance in the US East (N Virginia) Region and you have two running t2small instances in your account in that Region the billing benefit is applied in full to both instances Figure 1 – Two t2medium instances running in a Region 6Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Normalization factor for dedicated EC2 instances Or if you have one t2large instance running in your account in the US East (N Virginia) Region the billing benefit is applied to 50% of the usage of the instance Figure 2 – One t2large instance running in a Region The normalization factor is also applied when modifying Reserved Instances Normalization factor for dedicated EC2 instances For size inflexible RIs the normalization factor is always 1 The normalization factor doesn't apply to EC2 instances that do not have size flexibility The sole purpose of the normalization factor is to provide an ability to match various EC2 instances to each other within a family so that you can exchange one type for another type We do not support this use case for EC2 instances without size flexibility hence normalization factor is not used and to keep our data model uniform across different EC2 use cases we assign it an equivalent value of 1 Normalization factor for bare metal instances Instance size flexibility also applies to bare metal instances within the instance family If you have regional Amazon Linux/Unix Reserved Instances with shared tenancy on bare metal instances you can benefit from the Reserved Instance savings within the same instance family The opposite is also true: if you have regional Amazon Linux/Unix Reserved Instances with shared tenancy on instances in the same family as a bare metal instance you can benefit from the Reserved Instance savings on the bare metal instance A bare metal instance is the same size as the largest instance within the same instance family For example an i3metal is the same size as an i316xlarge so they have the same normalization factor The metal instance sizes do not have a single normalization factor They vary based on the specific instance family For the most uptodate list see Amazon EC2 Instance Types Table 4 – Bare metal instance sizes and normalization factors Instance size Normalization factor a1metal 32 c5metal 192 c5dmetal 192 7Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Normalization factor for bare metal instances Instance size Normalization 
factor c5nmetal 144 c6gmetal 128 c6gdmetal 128 g4dnmetal 192 i3metal 128 i3enmetal 192 m5metal 192 m5dmetal 192 m5dnmetal 192 m5nmetal 192 m5znmetal 96 m6gmetal 128 m6gdmetal 128 r5metal 192 r5bmetal 192 r5dmetal 192 r5dnmetal 192 r5nmetal 192 r6gmetal 128 r6gdmetal 128 x2gdmetal 128 z1dmetal 96 For example an i3metal instance has a normalization factor of 128 If you purchase an i3metal default tenancy Amazon Linux/Unix Reserved Instance in the US East (N Virginia) Region the billing benefit can apply as follows: •If you have one running i316xlarge in your account in that Region the billing benefit is applied in full to the i316xlarge instance (i316xlarge normalization factor = 128) •Or if you have two running i38xlarge instances in your account in that Region the billing benefit is applied in full to both i38xlarge instances (i38xlarge normalization factor = 64) •Or if you have four running i34xlarge instances in your account in that Region the billing benefit is applied in full to all four i34xlarge instances (i34xlarge normalization factor = 32) The opposite is also true For example if you purchase two i38xlarge default tenancy Amazon Linux/ Unix Reserved Instances in the US East (N Virginia) Region and you have one running i3metal instance in that Region the billing benefit is applied in full to the i3metal instance 8Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Savings Plans Savings Plans Savings Plans is another flexible pricing model that provides savings of up to 72% on your AWS compute usage This pricing model offers lower prices on Amazon EC2 instances usage regardless of instance family size OS tenancy or AWS Region and also applies to AWS Fargate and AWS Lambda usage Savings Plans offer significant savings over OnDemand Instances just like EC2 Reserved Instances in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one or threeyear period You can sign up for Savings Plans for a one or threeyear term and easily manage your plans by taking advantage of recommendations performance reporting and budget alerts in the AWS Cost Explorer AWS offers two types of Savings Plans: •Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66% (just like Convertible RIs) These plans automatically apply to EC2 instance usage regardless of instance family size AZ Region operating system or tenancy and also apply to Fargate and Lambda usage For example with Compute Savings Plans you can change from C4 to M5 instances shift a workload from EU (Ireland) to EU (London) or move a workload from Amazon EC2 to Fargate or Lambda at any time and automatically continue to pay the Savings Plans price •EC2 Instance Savings Plans provide the lowest prices offering savings up to 72% (just like Standard RIs) in exchange for commitment to usage of individual instance families in a Region (for example M5 usage in N Virginia) This automatically reduces your cost on the selected instance family in that region regardless of AZ size operating system or tenancy EC2 Instance Savings Plans give you the flexibility to change your usage between instances within a family in that Region For example you can move from c5xlarge running Windows to c52xlarge running Linux and automatically benefit from the Savings Plans prices Note that Savings Plans does not provide a capacity reservation You can however reserve capacity with On Demand Capacity Reservations and pay lower prices on them with Savings Plans You can 
continue purchasing RIs to maintain compatibility with your existing cost management processes and your RIs will work alongside Savings Plans to reduce your overall bill However as your RIs expire we encourage you to sign up for Savings Plans as they offer the same savings as RIs but with additional flexibility 9Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Amazon RDS reserved DB instances Reservation models for other AWS services In addition to Amazon EC2 reservation models are available for Amazon RDS Amazon ElastiCache Amazon ES Amazon Redshift and Amazon DynamoDB Topics •Amazon RDS reserved DB instances (p 10) •Amazon ElastiCache reserved nodes (p 10) •Amazon Elasticsearch Service Reserved Instances (p 10) •Amazon Redshift reserved nodes (p 11) •Amazon DynamoDB reservations (p 11) Amazon RDS reserved DB instances Similar to Amazon EC2 Reserved Instances there are three payment options for Amazon RDS reserved DB instances: No Upfront Partial Upfront and All Upfront All reserved DB instance types are available for Aurora MySQL MariaDB PostgreSQL Oracle and SQL Server database engines Sizeflexible reserved DB instances are available for Amazon Aurora MariaDB MySQL PostgreSQL and the “Bring Your Own License” (BYOL) edition of the Oracle database engine For more information about Amazon RDS reserved DB instances see the following: •Amazon RDS Reserved Instances •Working with Reserved DB Instances •Amazon DynamoDB Pricing Amazon ElastiCache reserved nodes Amazon ElastiCache reserved nodes give you the option to make a low onetime payment for each cache node you want to reserve In turn you receive a significant discount on the hourly charge for that node Amazon ElastiCache provides three reserved cache node types (Light Utilization Medium Utilization and Heavy Utilization) that enable you to balance the amount you pay up front with your effective hourly price Based on your application workload and the amount of time you plan to run them Amazon ElastiCache Reserved Nodes might provide substantial savings over running ondemand Nodes Reserved Cache Nodes are available for both Redis and Memcached For more information see Amazon ElastiCache Reserved Nodes Amazon Elasticsearch Service Reserved Instances Amazon Elasticsearch Service (Amazon ES) Reserved Instances (RIs) offer significant discounts compared to standard OnDemand Instances The instances themselves are identical—RIs are just a billing discount 10Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Amazon Redshift reserved nodes applied to OnDemand Instances in your account For longlived applications with predictable usage RIs can provide considerable savings over time Amazon ES RIs require one or threeyear terms and have three payment options that affect the discount rate For more information see Amazon Elasticsearch Service Reserved Instances Amazon Redshift reserved nodes In AWS the charges that you accrue for using Amazon Redshift are based on compute nodes Each compute node is billed at an hourly rate The hourly rate varies depending on factors such as AWS Region node type and whether the node receives ondemand node pricing or reserved node pricing If you intend to keep an Amazon Redshift cluster running continuously for a prolonged period you should consider purchasing reservednode offerings These offerings provide significant savings over on demand pricing However they require you to reserve compute nodes and commit to paying for those nodes for either a oneyear or a 
threeyear duration For more information about Amazon Redshift reserved node pricing see Reserved Instance Pricing and Purchasing Amazon Redshift Reserved Nodes Amazon DynamoDB reservations If you can predict your need for Amazon DynamoDB readandwrite throughput reserved capacity offers significant savings over the normal price of DynamoDB provisioned throughput capacity You pay a onetime upfront fee and commit to paying for a minimum usage level at specific hourly rates for the duration of the reserved capacity term Any throughput you provision in excess of your reserved capacity is billed at standard rates for provisioned throughput Provisioned capacity mode might be best if you •Have predictable application traffic •Run applications whose traffic is consistent or ramps gradually •Can forecast capacity requirements to control costs For more information see Pricing for Provisioned Capacity 11Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Usage billing Reserved Instances billing All Reserved Instances provide you with a discount compared to OnDemand Instance pricing With Reserved Instances you pay for the entire term regardless of actual use You can choose to pay for your Reserved Instance upfront partially upfront or monthly depending on the payment option specified for the Reserved Instance When Reserved Instances expire you are charged OnDemand Instance rates You can queue a Reserved Instance for purchase up to three years in advance This can help you ensure that you have uninterrupted coverage For more information see Queuing your purchase You can set up a billing alert to warn you when your bill exceeds a threshold that you define For more information see Monitoring Charges with Alerts and Notifications Usage billing Except for DynamoDB reservations which are billed based on throughput reservations are billed for every clockhour during the term you select regardless of whether an instance is running or not A clock hour is defined as the standard 24hour clock that runs from midnight to midnight and is divided into 24 hours (for example 1:00:00 to 1:59:59 is one clockhour) A Reserved Instance billing benefit can be applied to a running instance on a persecond basis Per second billing is available for instances using an opensource Linux distribution such as Amazon Linux and Ubuntu Perhour billing is used for commercial Linux distributions such as Red Hat Enterprise Linux and SUSE Linux Enterprise Server A Reserved Instance billing benefit can apply to a maximum of 3600 seconds (one hour) of instance usage per clockhour You can run multiple instances concurrently but can only receive the benefit of the Reserved Instance discount for a total of 3600 seconds per clockhour Instance usage that exceeds 3600 seconds in a clockhour is billed at the OnDemand Instance rate For example if you purchase one m4xlarge Reserved Instance and run four m4xlarge instances concurrently for one hour one instance is charged at one hour of Reserved Instance usage and the other three instances are charged at three hours of OnDemand Instance usage However if you purchase one m4xlarge Reserved Instance and run four m4xlarge instances for 15 minutes (900 seconds) each within the same hour the total running time for the instances is one hour which results in one hour of Reserved Instance usage and 0 hours of OnDemand Instance usage 12Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Consolidated billing Figure 3 – Running four instances for 15 minutes each in 
the same hour If multiple eligible instances are running concurrently the Reserved Instance billing benefit is applied to all the instances at the same time up to a maximum of 3600 seconds in a clockhour Thereafter the On Demand Instance rates apply Figure 4 – Running four instances concurrently over the hour You can find out about the charges and fees to your account by viewing the AWS Billing and Cost Management console You can also examine your utilization and coverage and receive reservation purchase recommendations via AWS Cost Explorer You can dive deeper into your reservations and Reserved Instance discount allocation via the AWS Cost and Usage Report For more information on Reserved Instance usage billing see Usage Billing Consolidated billing AWS Organizations is an account management service that lets you consolidate multiple AWS accounts into an organization that you create and centrally manage AWS Organizations includes consolidated billing and account management capabilities that enable you to better meet the budgetary security and compliance needs of your business For more information see What Is AWS Organizations? For more information on consolidated bills and how they are calculated see Understanding Consolidated Bills The pricing benefits of Reserved Instances are shared when the purchasing account is billed under a consolidated billing payer account The instance usage across all member accounts is aggregated in the payer account every month This is useful for companies that have different functional teams or groups then the normal Reserved Instance logic is applied to calculate the bill Reserved Instances: Capacity reservations AWS also offers discounted hourly rates in exchange for an upfront fee and term contract Services such as Amazon EC2 and Amazon RDS use this approach to sell reserved capacity for hourly use of Reserved Instances For more information see Reserved Instances in the Amazon EC2 User Guide for Linux Instances and Working with Reserved DB Instances in the Amazon Relational Database Service User Guide 13Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Blended rates When you reserve capacity with Reserved Instances your hourly usage is calculated at a discounted rate for instances of the same usage type in the same Availability Zone (AZ) When you launch additional instances of the same instance type in the same Availability Zone and exceed the number of instances in your reservation AWS averages the rates of the Reserved Instances and the OnDemand Instances to give you a blended rate Blended rates A line item for the blended rate of that instance is displayed on the bill of any member account that is running an instance that matches the specifications of a reservation in the organization The payer account of an organization can turn off Reserved Instance sharing for member accounts in that organization via the AWS Billing Preferences This means that Reserved Instances are not shared between that member account and other member accounts Each estimated bill is computed using the most recent set of preferences For information on how to configure sharing see Turning Off Reserved Instance Sharing How discounts are applied The application of Amazon EC2 Reserved Instances is based on instance attributes including the following: •Instance type – Instance types comprise varying combinations of CPU memory storage and networking capacity (for example m4xlarge) This gives you the flexibility to choose the appropriate mix of resources for 
your applications, such as compute-optimized, storage-optimized, and so on. Each instance type includes one or more instance sizes, enabling you to scale your resources to the requirements of your target workload.
•Platform – You can purchase Reserved Instances for Amazon EC2 instances running Linux, Unix, SUSE Linux, Red Hat Enterprise Linux, Windows Server, and Microsoft SQL Server platforms.
•Tenancy – Reserved Instances can be default tenancy or dedicated tenancy.
•Regional or zonal – See Regional and zonal Reserved Instances (p 4).
If you purchase a Reserved Instance and you already have a running instance that matches the attributes of the Reserved Instance, the billing benefit is immediately applied. You don't have to restart your instances. If you do not have an eligible running instance, launch an instance and ensure that you match the same criteria that you specified for your Reserved Instance. For more information, see Using Your Reserved Instances.
Maximizing the value of reservations
This section discusses how you can maximize the value of your reservations.
Topics
•Measure success (p 15)
•Maximize discounts by standardizing instance type (p 15)
•Reservation management techniques (p 16)
•Reserved Instance Marketplace (p 16)
•AWS Cost Explorer (p 16)
•AWS Cost and Usage Report (p 17)
•AWS Trusted Advisor (p 18)
Measure success
Making the most of reservations means measuring your reservation coverage (the portion of instances enjoying reservation discount benefits) and reservation utilization (the degree to which purchased Reserved Instances are used). Establish a standardized review cadence in which you focus on the following questions (a short sketch after this list shows one way to retrieve these metrics programmatically):
•Do you need to modify any of your existing reservations to increase utilization?
•Are any currently utilized reservations expiring?
•Do you need to purchase any reservations to increase your coverage?
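The following minimal sketch, using the AWS SDK for Python (Boto3) Cost Explorer client, shows one way to pull a month of Reserved Instance utilization and coverage so these questions can be answered on a regular cadence. The date range and monthly granularity are illustrative choices, not recommendations from this paper, and the sketch assumes credentials with Cost Explorer permissions.

```python
"""Sketch: fetch Reserved Instance utilization and coverage for a monthly review.
Dates and granularity below are illustrative placeholders."""
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer endpoint is in us-east-1
period = {"Start": "2021-02-01", "End": "2021-03-01"}  # placeholder review window

utilization = ce.get_reservation_utilization(TimePeriod=period, Granularity="MONTHLY")
coverage = ce.get_reservation_coverage(TimePeriod=period, Granularity="MONTHLY")

# UtilizationPercentage answers "are purchased RIs being used?";
# CoverageHoursPercentage answers "how much instance usage is covered?".
total_util = utilization["Total"]["UtilizationPercentage"]
total_cov = coverage["Total"]["CoverageHours"]["CoverageHoursPercentage"]
print(f"RI utilization: {total_util}%  RI coverage: {total_cov}%")
```

From there, the same calls can be filtered (for example, by account or instance type) to drill into specific utilization or coverage gaps.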
A standardized review cadence ensures that issues are surfaced and addressed in a timely manner As your RIs expire we encourage you to sign up for Savings Plans as they offer the same savings as RIs but with additional flexibility Maximize discounts by standardizing instance type By standardizing the instance types that your organization uses you can ensure that deployments match the characteristics of your reservations to maximize your discounts Standardization maximizes utilization and minimizes the level of effort associated with management of reservations Three services that can help you standardize your instances are: •AWS Config – Enables you to assess audit and evaluate the configurations of your AWS resources AWS Config continuously monitors and records your AWS resource configurations and lets you automate the evaluation of recorded configurations against desired configurations •AWS Service Catalog – Lets you create and manage catalogs of IT services that are approved for use on AWS These IT services can include everything from virtual machine (VM) images servers software and databases to complete multitier application architecture •AWS Compute Optimizer Recommends optimal AWS compute resources for your workloads to reduce costs and improve performance by using Machine Learning algorithms to analyze historical utilization metrics The Compute Optimizer focuses on the configuration and resource utilization of your workload to identify dozens of defining characteristics such as whether a workload is CPU intensive exhibits a daily pattern or accesses local storage frequently The service processes these characteristics and identifies the hardware resource headroom required by the workload It also infers 15Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Reservation management techniques how the workload would have performed on various hardware platforms (for example Amazon EC2 instances types) and offers recommendations Reservation management techniques You can manage reservations either by using a central IT operations or management team or by using a specific team or business unit The following table summarizes the different reservation management techniques Table 5 – Comparison of different reservation management techniques Central reservation management Team/Business Unit reservation management Maximizes reservation coverage by covering aggregate usage across a businessIncreases likelihood of high reservation utilization (for example using alreadypurchased reservations) because a single team should understand its capacity commitment of RIs Simplifies overall reservation management especially when combining central management and Convertible Reserved InstancesReduces interfacing or planning between the business unit and the central team Reduces the requirement for an individual team to understand reservationsStreamlines decisions about purchases purchase process and reservation account location Reserved Instance Marketplace Reserved Instance Marketplace supports the sale of thirdparty and AWS customers' unused Standard Reserved Instances which vary in term lengths and pricing options For example you might want to sell Reserved Instances after moving instances to a new AWS Region changing to a new instance type ending projects before the term expiration when your business needs change or if you have unneeded capacity If you want to sell your unused Reserved Instances on the Reserved Instance Marketplace you must meet certain eligibility criteria For more 
information see Reserved Instance Marketplace AWS Cost Explorer AWS Cost Explorer lets you visualize understand and manage your AWS costs and usage over time You can analyze your cost and usage data at a high level (for example total costs and usage across all accounts in your organization) or for highly specific requests (for example m22xlarge costs within account Y that are tagged project: secretProject ) You can dive deeper into your reservations using the Reserved Instance utilization and coverage reports Using these reports you can set custom Reserved Instance utilization and coverage targets and visualize progress toward your goals From there you can refine the underlying data using the available filtering dimensions (for example account instance type scope and more) AWS Cost Explorer provides the following prebuilt reports: •EC2 RI Utilization % offers relevant data to identify and act on opportunities to increase your Reserved Instance usage efficiency It’s calculated by dividing Reserved Instance hours used by the total Reserved Instance purchased hours 16Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper AWS Cost and Usage Report •EC2 RI Coverage % shows how much of your overall instance usage is covered by Reserved Instances This lets you make informed decisions about when to purchase or modify a Reserved Instance to ensure maximum coverage It’s calculated by dividing Reserved Instance hours used by the total EC2 OnDemand and Reserved Instance hours Also AWS Cost Explorer provides Reserved Instance purchase recommendations for zonal and sizeflexible Reserved Instances to help payer accounts achieve greater cost efficiencies For more information see AWS Cost Explorer AWS Cost and Usage Report The AWS Cost and Usage Report contains the most comprehensive set of data about your AWS costs and usage including additional information regarding AWS services pricing and reservations By using the AWS Cost and Usage report you can gain a wealth of reservationrelated insights about the Amazon Resource Name (ARN) for a reservation the number of reservations the number of units per reservation and more It can help you do the following: •Calculate savings – Each hourly line item of usage contains the discounted rate that was charged in addition to the public OnDemand Instance rate for that usage type at that time You can quantify your savings by calculating the difference between the public OnDemand Instance rates and the rates you were charged •Track the allocation of Reserved Instance discounts – Each line item of usage that receives a discount contains information about where the discount came from This makes it easier to trace which instances are benefitting from specific reservations These reports update up to three times per day Reserved Instances on your cost and usage report The Fee line item is added to your bill when you purchase an All Upfront or Partial Upfront Reserved Instance as shown Figure 5 – Fee line item from AWS Cost and Usage Report The RI Fee line item describes the recurring monthly charges that are associated with Partial Upfront and No Upfront Reserved Instances The RI Fee is calculated by multiplying your discounted hourly rate by the number of hours in the month as shown 17Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper AWS Trusted Advisor Figure 6 – RI Fee line item from AWS Cost and Usage Report The Discounted Usage line item describes the instance usage that received a matching Reserved Instance discount 
benefit It’s added to your bill when you have usage that matches one of your Reserved Instances as shown Figure 7 – Discounted Usage line item from AWS Cost and Usage Report AWS Trusted Advisor AWS Trusted Advisor is an online resource to help you reduce cost increase performance and improve security by optimizing your AWS environment AWS Trusted Advisor provides realtime guidance to help you provision your resources following AWS best practices To help you maximize utilization of Reserved Instances AWS Trusted Advisor checks your Amazon EC2 computingconsumption history and calculates an optimal number of Partial Upfront Reserved Instances Recommendations are based on the previous calendar month's hourbyhour usage aggregated across all consolidated billing accounts Note that Trusted Advisor does not provide sizeflexible Reserved Instance recommendations For more information about how the recommendation is calculated see "Reserved Instance Optimization Check Questions" in the Trusted Advisor FAQs 18Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Conclusion Effectively planned and managed reservations can help you achieve significant discounts for AWS workloads that run on a predictable schedule It’s important to analyze your current AWS usage to select the right reservation attributes from the start and to devise a longerterm strategy for monitoring and managing your Reserved Instances Using tools such as the AWS Compute Optimizer AWS Cost and Usage report and the Reserved Instance Utilization and Coverage reports in AWS Cost Explorer you can examine your overall usage and discover opportunities for greater cost efficiencies 19Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Contributors Contributors to this document include: •Pritam Pal Senior Specialist Solution Architect EC2 Spot Amazon Web Services 20Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Document revisions To be notified about updates to this whitepaper subscribe to the RSS feed updatehistorychange updatehistorydescription updatehistorydate Updated bare metal instance types and normalization factors Removed link to Scheduled Instances (p 21)Minor update March 29 2021 Updated Reserved Instances billing information and normalization factors Savings Plan section added (p 21)Whitepaper updated August 31 2020 Initial publication (p 21) Whitepaper published March 1 2018 21Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved 22
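As a closing illustration of the instance size flexibility rules covered earlier, the sketch below encodes the published normalization factors and estimates how many instances of a smaller size a single regional Linux Reserved Instance can cover. It is a simplified model: it ignores tenancy, platform, and clock-hour accounting, and the helper names are ours, not part of any AWS tooling.

```python
"""Sketch: estimate size-flexible coverage using normalization factors.
Simplified model for regional Linux/Unix, default-tenancy RIs only."""

# Normalization factors by instance size (from the tables earlier in this paper).
NORMALIZATION = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2, "large": 4,
    "xlarge": 8, "2xlarge": 16, "4xlarge": 32, "8xlarge": 64,
    "16xlarge": 128, "32xlarge": 256,
}

def units(instance_type: str) -> float:
    """Return normalized units for an instance type such as 'c4.8xlarge'."""
    _family, size = instance_type.split(".")
    return NORMALIZATION[size]

def covered_count(reserved: str, running: str) -> float:
    """How many 'running' instances one 'reserved' RI covers (fractions allowed)."""
    return units(reserved) / units(running)

if __name__ == "__main__":
    # A c4.8xlarge RI covers two c4.4xlarge or sixteen c4.large instances.
    print(covered_count("c4.8xlarge", "c4.4xlarge"))  # 2.0
    print(covered_count("c4.8xlarge", "c4.large"))    # 16.0
    # A t2.medium RI applied to a t2.large covers 50% of its usage.
    print(covered_count("t2.medium", "t2.large"))     # 0.5
```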
General
AWS_Serverless_MultiTier_Architectures_Using_Amazon_API_Gateway_and_AWS_Lambda
AWS Serverless Multi Tier Architectures With Amazon API Gateway and AWS Lambda First Published November 2015 Updated Octo ber 20 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Three tier architecture overview 2 Serverless logic tier 3 AWS Lambda 3 API Gateway 6 Data tier 11 Presentation tier 14 Sample architecture patterns 15 Mobile backend 16 Single page application 17 Web application 19 Microservices with Lambda 20 Conclusion 21 Contributors 21 Further reading 22 Document revisions 22 Abstract This whitepaper illustrates how innovations from Amazon Web Services (AWS) can be used to chang e the way you design multi tier architectures and implement popular patterns such as microservices mobile backends and single page applications Architects and developers can use Amazon API Gateway AWS Lambda and other services to reduce the developmen t and operations cycles required to create and manage multi tiered applications Amazon Web Services AWS Serverless Multi Tier Architectures Page 1 Introduction The multi tier application (three tier ntier and so forth) has been a cornerstone architecture pattern for decades and remains a popular pattern for user facing applications Although the language used to describe a multi tier architecture varies a multi tier application generally consists of the following components: • Presentation tier – Component that the user directly interacts w ith (for example webpage s and mobile app UI s) • Logic tier – Code required to translate user actions to application functionality (for example CRUD database operations and data processing) • Data tier – Storage media ( for example databases object stores caches and file systems) that hold the data relevant to the application The multi tier architecture pattern provides a general framework to ensure decoupled and independently scalable application components can be separately developed managed a nd maintained (often by distinct teams) As a consequence of this pattern in which the network (a tier must make a network call to interact with another tier) acts as the boundary between tiers developing a multi tier application often requires creating m any undifferentiated application components Some of these components include: • Code that defines a message queue for communication between tiers • Code that defines an application programming interface (API) and a data model • Security related code that ensures appropriate access to the application All of these examples can be considered “boilerplate” components that while necessary in multi tier applications do not vary greatly in their implementation from one application to the next AWS offers a numb er of services that enable the creation of serverless multi tier applications —greatly simplifying the process of deploying such applications 
to production and removing the overhead associated with traditional server management Amazon API Gateway a service for creating and managing APIs and AWS Lambda a service for running arbitrary code functions can be used together to simplify the creation of robust multi tier applications Amazon Web Services AWS Serverless Multi Tier Architectures Page 2 API Gateway’ s integration with AWS Lambda enable s userdefined code function s to be initiated directl y through HTT PS requests Regardle ss of the request volume both API Gatewa y and Lambda scale automaticall y to support exactl y the need s of your application (refe r to Amazon API Gatewa y quota s and important notes for scalability information) By combining these two services you can create a tie r that enables you to write onl y the code that matte rs to you r application and not focu s on variou s other undifferentiating aspect s of implementing a multitiered architecture such a s architecting for high availability writing client SDKs server and operating syste m (OS) management scaling and implementing a client authorization mechanism API Gatewa y and Lambda enable the creation of a serverle ss logic tier Depending on your application requirements AW S also provide s option s to create a serverless presentation tier (for example with Amazon CloudFront and Amazon Simple Storage Service (Amazon S3 ) and data tier (for example Amazon Aurora and Amazon DynamoDB ) This whitepaper focuses on the most popular example of a multitiered architecture the threetier web application However you can apply this multitier pattern well beyond a typical threetier web application Threeti er architectur e overview The threetie r architecture i s the most popula r implementation of a multitier architecture and consist s of a single presentation tier a logic tier and a data tier The following illustration show s an example of a simple generi c threetie r application Architectural pattern for a three tier application There are many great online resources where you can learn more about the general three tier architecture pattern This whitepaper focuses on a specific implementation pattern for this architecture using API Gateway and Lambda Amazon Web Services AWS Serverless Multi Tier Architectures Page 3 Serverless logic tier The logic tier of the three tier architecture represents the brains of the application This is where using API Gateway a nd AWS Lambda can have the most impact compared to a traditional server based implementation The features of these two services enable you to build a serverless application that is highly available scalable and secure In a traditional model your appl ication could require thousands of servers; however by using Amazon API Gateway and AWS Lambda you are not responsible for server management in any capacity In addition by using these managed services together you gain the following benefits: • Lambda o No OS to choose secure patch or manage o No servers to right size monitor or scale o Reduced risk to your cost from overprovisioning o Reduced risk to your performance from under provisioning • API Gateway o Simplified mechanisms to deploy monitor and secure APIs o Improved API performance through caching and content delivery AWS Lambda AWS Lambda is a compute service that enable s you to run arbitrary code functions in any of the supported languages (Nodejs Python Ruby Java Go NET For more informa tion refer to Lambda FAQs ) without provisioning managing or scaling servers Lambda functions are run in a managed isolated 
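To make this concrete, the following is a minimal sketch of what such a code function might look like when written in Python; the handler model itself is described in more detail below, and the greeting field used here is purely illustrative rather than part of any AWS sample.

import json

def handler(event, context):
    # "event" carries the payload from the event source (for example an
    # API Gateway request); "context" exposes runtime details such as the
    # request ID and the remaining execution time.
    name = (event or {}).get("name", "world")
    return {
        "message": f"Hello, {name}",
        "request_id": context.aws_request_id,
    }

When the function sits behind an API Gateway proxy integration, the return value instead follows the statusCode/headers/body shape shown later in this paper.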
container and are launched in response to an event which can be one of several programmatic triggers that AWS makes available called an event source Refer to Lambda FAQs for all event sources Many popular use cases for Lambda r evolve around event driven data processing workflows such as processing files stor ed in Amazon S3 or streaming data records from Amazon Kinesis When used in conjunc tion with API Gateway a Lambda function performs the functionality of a typical web service: it initiates code in response to a client HTTPS request ; API Gateway acts as the front door for your logic tier and Lambda invokes the application code Amazon Web Services AWS Serverless Multi Tier Architectures Page 4 Your business logic goes here no servers necessary Lambda requires that you to write code functions called handlers which will run when initiat ed by an event To use Lambda with API Gateway you can configure API Gateway to launch handler functions when an HTTPS request to your API occurs In a serverless multi tier architecture each of the APIs you create in API Gateway will integrate with a Lambda function (and the handler within) that invok es the business logic required Using AWS Lambda functions to compose the logic tier enable s you to define a desired level of granularity for exposing the application functionality (one Lambda function per API or one Lambda function per API method) Inside the Lambda function the handler can reach out to any other dependencies ( for example other methods you’ve uploaded with your code libraries native binaries and external web services) or even other Lambda functions Creating or updating a Lambda function requires either uploadin g code as a Lambda deployment package in a zip file to an Amazon S3 bucket or packaging code as a container image along with all the dependencies The functions can use different deployment methods such as AWS Management Console running AWS Command Line Interface (CLI) or running infrastructure as code template s or framework s such as AWS CloudFormation AWS Serverless Application Model (AWS SAM) or AWS Cloud Development Kit (AWS CDK) When you create your function using any of these methods you specify which method inside your deployment package will act as the request handler You can reuse the same deployment package for multiple Lambda function definitions where each Lambda functio n might have a unique handler within the same deployment package Lambda security To run a Lambda function it must be invoked by an event or service that is permitted by an AWS Identity and Access Management (IAM) policy Using IAM policies you can create a Lambda function that cannot be initiated at all unless it is invoked by an API Gateway resource that you define Such policy can be defined using resource based policy across various AWS services Each Lambda function assumes an IAM role that is assigned when the Lambda function is deployed This IAM role defines the other AWS services and resources your Lambda function can interact with ( for example Amazon DynamoDB table and Amazon S3) In context of Lambda function this is called an execution role Amazon Web Services AWS Serverless Multi Tier Architectures Page 5 Do not store sensitive information inside a Lambda function IAM handles access to AWS services through the Lambda execution role; if you need to access other credentials ( for example database credentials and API keys) from inside your Lambda function you can use AWS Key Management Service (AWS KMS) with environment variables or use a service such 
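As a minimal sketch of the Secrets Manager approach referenced here, the handler below retrieves a database credential with boto3 and caches it outside the handler so that repeated invocations of the same execution environment do not call the service again; the secret name prod/app/db and its username/password fields are assumptions made for illustration.

import json
import boto3

# Create the client outside the handler so it is reused across invocations
# of the same execution environment.
secrets_client = boto3.client("secretsmanager")
_cached_secret = None

def get_db_credentials():
    # Fetch and cache a credential stored in AWS Secrets Manager; the
    # secret name "prod/app/db" is an assumption for this sketch.
    global _cached_secret
    if _cached_secret is None:
        response = secrets_client.get_secret_value(SecretId="prod/app/db")
        _cached_secret = json.loads(response["SecretString"])
    return _cached_secret

def handler(event, context):
    creds = get_db_credentials()
    # Use creds["username"] / creds["password"] to open a database
    # connection here; never hard-code these values in the function package.
    return {"statusCode": 200, "body": json.dumps({"ok": True})}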
as AWS Secrets Manager to keep this information safe when not in use Performance at scale Code pulled in as a container image from Amazon Elastic Container Registry (Amazon ECR) or from a zip file uploaded to Amazon S3 runs in an isolated environment managed by AWS You do not have to scale your Lambda functions —each time an event notification is received by your function AWS Lambda locates available capacity within its compute fleet and runs your code with runtime memory disk and timeout configurations that you define With this pattern AWS can start as many copies of your function as needed A Lambda based logic tier is always right sized for your customer needs The ability to quickly absorb surges in traffic through managed scaling and concurrent code initiation combined with Lambda payperuse pricing enables you to always meet customer requests while simultaneously not paying for idle compute capacity Serverless deployment and management To help you deploy and manage your Lambda functions use AWS Serverless Application Model (AWS SAM ) an open source framework that includes : • AWS SAM template specification – Syntax used to define your functions and describe their environments permissions configurations and events for simplified upload and deploym ent • AWS SAM CLI – Commands that enable you to verify AWS SAM template syntax invoke functions locally debug Lambda functions and deployment package functions You c an also use AWS CDK which is a software development framework for defining cloud infrastructure using programming languages and provisioning it through CloudFormation AWS CDK provides an imperative way to define AWS resources whereas AWS SAM provides a declarative way Amazon Web Services AWS Serverless Multi Tier Architectures Page 6 Typically when you deploy a Lambda function it is invok ed with permissions defined by its assigned IAM role and is able to reach internet facing endpoints As the core of your logic tier AWS Lambda is the component directly integrating w ith the data tier If your data tier contains sensitive business or user information it is important to ensure that this data tier is appropriately isolated (in a private subnet) You can configure a Lambda function to connect to private subnets in a virt ual private cloud (VPC) in your AWS account if you want the Lambda function to access resources that you cannot expose publicly like a private database instance When you connect a function to a VPC Lambda creates an elastic network interface for each subnet in your function's VPC configuration and elastic network interface is used to access your internal resources privately Lambda architecture pattern inside a VPC The use of Lambda with VPC means that databases and other storage media that your business logic depends on can be made inaccessible from the internet The VPC also ensures that the only way to interact with your data from the internet is through the APIs that you’ve defined and the Lambda code functions that you have written API Gateway API Gateway is a fully managed service that enables developers to create publish maintain monitor and secure APIs at any scale Amazon Web Services AWS Serverless Multi Tier Architectures Page 7 Clients ( that is presentation tier s) integrate with the APIs exposed through API Gateway using standard HTTPS requests The applicability of APIs exposed through API Gateway to a service oriented multi tier architecture is the ability to separate individual pieces of appli cation functionality and expose this functionality through REST 
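The AWS CDK option mentioned above can be sketched in Python as follows, assuming aws-cdk-lib v2; the construct identifiers, the ./src asset path and the app.handler entry point are placeholders rather than part of this whitepaper's reference architecture.

from aws_cdk import App, Stack, aws_lambda as _lambda, aws_apigateway as apigw
from constructs import Construct

class ServerlessTierStack(Stack):
    # Defines one Lambda function and a REST API in front of it.
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The handler code is expected in ./src with an app.handler entry
        # point; both are placeholders for this sketch.
        tickets_fn = _lambda.Function(
            self, "TicketsFunction",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="app.handler",
            code=_lambda.Code.from_asset("src"),
        )

        # LambdaRestApi creates a REST API whose routes proxy to the function.
        apigw.LambdaRestApi(self, "TicketsApi", handler=tickets_fn)

app = App()
ServerlessTierStack(app, "ServerlessTierStack")
app.synth()

Running cdk deploy against this app synthesizes a CloudFormation template and provisions both resources; an equivalent declarative definition could be written with AWS SAM.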
endpoints API Gateway has specific features and qualities that can add powerful capabilities to your logic tier Integration with Lambda Amazon API Gateway supports both REST and HTTP type s of APIs An API Gateway API is made up of resources and methods A resource is a logical entity that an app can access through a resource path ( for example /tickets ) A method corresponds to an API request that is submitted to an API resource ( for example GET /tickets ) API Gateway enable s you to back each method with a Lambda function that is when you call the API through the HTTPS endpoint exposed in API Gateway API Gateway invokes the Lam bda function You can connect API Gateway and Lambda functions using proxy integrations and non proxy integrations Proxy integrations In a proxy integration the entire client HTTPS request is sent asis to the Lambda function API Gateway passes the enti re client request as the event parameter of the Lambda handler function and the output of the Lambda function is returned directly to the client (including status code headers and so forth) Nonproxy integrations In a nonproxy integration you configure how the parameters headers and body of the client request are passed to the event parameter of the Lambda handler function Additionally you configure how the Lambda output is translated back to the user Note : API Gateway can also proxy to ad ditional serverless resources outside of AWS Lambda such as mock integrations (useful for initial application development) and direct proxy to S3 objects Amazon Web Services AWS Serverless Multi Tier Architectures Page 8 Stable API performance across regions Each deployment of API Gateway includes a Amazon CloudFront distribution under the hood CloudFront is a content delivery service that uses Amazon’s global network of edge locations as connection points for clients using your API This helps decrease the response lat ency of your API By using multiple edge locations across the world CloudFront also provides capabilities to combat distributed denial of service (DDoS) attack scenarios For more information review the AWS Best Practices for DDoS Resiliency whitepaper You can improve the performance of specific API requests by using API Gateway to store responses in an optional in memory cache This approach not only provides performance benefits for repeated API requests but it also reduces the number of times your Lambda functions are invoked which can reduce your overall cost Encourage innovation and reduce overhead with builtin features The development cost to build any new application is an investment Using API Gateway can reduce the amount of time required for certain development tasks and lower the total development cost enab ling organizations to more freely experiment and innovate During initial application development phases implementation of logging and metrics gathering are often neglected to deliver a new application more quickly This can lead to technical debt and operational risk when deploying these features to an applicati on running in production API Gateway integrates seamlessly with Amazon CloudWatch which collects and processes raw data from API Gateway into readable near real time metrics for monitoring API implement ation API Gateway also supports access logging with configurable reports and AWS X Ray tracing for debugging Each of these features requires no code to be written and can be adjusted in applications running in production without risk to the core business logic The overall lifetime of an application m 
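To illustrate the proxy integration described above, the following sketch reads path and query string parameters from the proxy event and returns a complete HTTP response; the tickets-style resource and the verbose parameter are assumptions for illustration only.

import json

def handler(event, context):
    # With a proxy integration, API Gateway passes the whole HTTPS request
    # in "event"; the keys used below are part of the proxy event format.
    ticket_id = (event.get("pathParameters") or {}).get("id")
    verbose = (event.get("queryStringParameters") or {}).get("verbose") == "true"

    payload = {"ticket_id": ticket_id, "verbose": verbose}

    # The function is responsible for the full HTTP response shape:
    # status code, headers, and a string body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }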
ight be unknown or it m ight be known to be short lived Creating a business case for building such applications can be made easier if your starting point alread y includes the managed features that API Gateway provides and if you only incur infrastructure costs after your APIs begin receiving requests For more information refer to Amazon API Gateway pr icing Amazon Web Services AWS Serverless Multi Tier Architectures Page 9 Iterate rapidly stay agile Using API Gateway and AWS Lambda to build the logic tier of your API enables you to quickly adapt to the changing demands of your user base by simplifying API deployment and version management Stage deployment When you deploy an API in API Gateway you must associate the deployment with an API Gateway stage—each stage is a snapshot of the API and is made available for client apps to call Using this convention you can easily deploy apps to dev test stage or prod stages and move deployments between stages Each time you deploy your API to a stage you create a different version of the API which can be r everted if necessary These features enable existing functionality and client dependencies to continue undis turbed while new functionality is released as a separate API version Decouple d integration with Lambda The integration between API in API Gateway and Lambda function can be decoupled using API Gateway stage variables and a Lambda function alias This simp lifies and speeds up the API deployment Instead of configuring the Lambda function name or alias in the API directly you can configure stage variable in API which can point to a particular alias in the Lambda function During deployment change the stage variable value to point to a Lambda function alias and API will run the Lambda function version behind the Lambda alias for a particular stage Canary release deployment Canary release is a software development strategy in which a new version of an API is deployed for testing purposes and the base version remains deployed as a production release for normal operations on the same stage In a canary release deployment tota l API traffic is separated at random into a production release and a canary release with a preconfigured ratio APIs in API Gateway can be configured for the canary release deployment to test new features with a limited set of users Custom domain names You can provide an intuitive business friendly URL name to API in stead of the URL provided by API Gateway API Gateway provides features to configure custom domain for the APIs With custom domain names you can set up your API's hostname and choose a multi level base path (for example myservice myservice/cat/v1 or myservice/dog/v2 ) to map the alternative URL to your API Amazon Web Services AWS Serverless Multi Tier Architectures Page 10 Prioritize API security All applications must ensure that only authorized clients have access to their API resources When designing a multi tier application you can take advantage of several different ways in which API Gateway contributes to securing your logic tier : Transit security All requests to your APIs can be made through HTTPS to enable encryption in transit API Gateway provide s built in SSL/TLS Certificates —if using the custom domain name option for public APIs you can provide your own SSL/TLS certificate using AWS Certificate Manager API Gateway also supports mutual TLS (mTLS) authentication Mutual TLS enhances the security of your API and helps protect your data from attacks such as client spoofing or man inthe middle attacks API 
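A canary release of the kind described above might be initiated from a deployment script roughly as follows, assuming the REST API interface of API Gateway; the API identifier, stage name, and traffic percentage are placeholders.

import boto3

apigw = boto3.client("apigateway")

# Both identifiers below are placeholders for this sketch.
REST_API_ID = "a1b2c3d4e5"
STAGE_NAME = "prod"

# Create a new deployment on the prod stage, but shift only 10% of the
# traffic to it as a canary; the remaining 90% keeps using the stage's
# current production deployment.
response = apigw.create_deployment(
    restApiId=REST_API_ID,
    stageName=STAGE_NAME,
    description="Canary deployment for the new ticket search feature",
    canarySettings={
        "percentTraffic": 10.0,
        "useStageCache": False,
    },
)
print("Created deployment", response["id"])

Promoting or rolling back the canary would then be a separate update to the stage once the new version has been validated.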
authorization Each resource and method combination that you create as part of your A PI is granted a unique Amazon Resource Name (ARN) that can be referenced in AWS Identity and Access Management ( IAM) policies There are three general methods to add authorization to an API in API Gateway: • IAM roles and policies Clients use AWS Signature Version 4 (SigV4) authorization and IAM policies for API access The same credentials can restrict or permit access to other AWS services and resources as ne eded ( for example S3 buckets or Amazon DynamoDB tables) • Amazon Cognito user pools Clients sign in through an Amazon Cognito user pool and obtain tokens which are included in the authorization header of a request • Lambda authorizer Define a Lambda function that implements a custom authorization scheme that uses a bearer token strategy ( for example OAuth and SAML) or uses request par ameters to identify users Access restrictions API Gateway supports the generation of API keys and association of these keys with a configurable usage plan You can monitor API key usage with CloudWatch API Gateway supports throttling rate limits and bu rst rate limits for each method in your API Amazon Web Services AWS Serverless Multi Tier Architectures Page 11 Private APIs Using API Gateway you can create private REST APIs that can only be accessed from your virtual private cloud in Amazon VPC by using an interface VPC endpoint This is an endpoint network interface that you create in your VPC Using resource policies you can enable or deny access to your API from selected VPCs and VPC endpoints including across AWS accounts Each endpoint can be used to access multiple private APIs You can also use AWS Direct Connect to establish a connection from an on premises network to Amazon VPC and access your private API over that connection In all cases traffic to your private API uses secure connections and does not leave the Amazon network —it is isolated from the public internet Firewall protection using AWS WAF Internet facing APIs ar e vulnerable to malicious attacks AWS WAF is a we b application firewall which helps protect APIs from such attacks It protects APIs from common web exploits such as SQL injection and cross site scripting attacks You can use AWS WAF with API Gateway to help protect APIs Data tier Using AWS Lambda as your logic tier does not limit the data storage options available in your data tier Lambda functions connect to any data storage option by including the appropriate database driver in the Lambda deployment package and use IAM role based access or encrypted credentials ( through AWS KMS or Secrets Manager) Choosing a data store for your a pplication is highly dependent on your application requirements AWS offers a number of serverless and non serverless data stores that you can use to compose the data tier of your application Serverless data storage options • Amazon S3 is an object storage service that offers industry leading scalability data availability security and performance Amazon Web Services AWS Serverless Multi Tier Architectures Page 12 • Amazon Aurora is a MySQL compatible and PostgreSQL compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost effectiveness of open source databases Aurora offers both serverless and traditional usage models • Amazon DynamoDB is a key value and document database that delivers single digit millisecond performance at any scale It is a fully manag ed 
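Returning to the Lambda authorizer option described earlier in this section, a token-based authorizer might be sketched as follows; the token check is a stub standing in for real OAuth, SAML, or JWT validation, and the principal identifier is a placeholder.

def handler(event, context):
    # For a token authorizer, "event" carries the bearer token and the ARN
    # of the method being called; the validation itself is stubbed here.
    token = event.get("authorizationToken", "")
    method_arn = event["methodArn"]

    # Replace this placeholder check with real token validation.
    is_valid = token == "allow-me"

    effect = "Allow" if is_valid else "Deny"
    return {
        "principalId": "example-user",  # identifier for the caller
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": method_arn,
                }
            ],
        },
    }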
serverless multi region durable database with built in security backup and restore and in memory caching for internet scale applications • Amazon Timestream is a fast scalable fully managed time se ries database service for IoT and operational applications that makes it simple to store and analyze trillions of events per day at 1/10th the cost of relational databases Driven by the rise of IoT devices IT systems and smart industrial machines time series data —data that measures how things change over time —is one of the fastest growing data types • Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database that provides a transparent im mutable and cryptographically verifiable transaction log owned by a central trusted authority Amazon QLDB tracks each and every application data change and maintains a complete and verifiable history of changes over time • Amazon Keyspaces (for Apache Cassandra) is a scalable highly available and managed Apache Cassandra –compatible database service With Amazon Keyspaces you can run your Cassandra workloads on AWS using the same Cassandra application co de and developer tools that you use today You don’t have to provision patch or manage servers and you don’t have to install maintain or operate software Amazon Keyspaces is serverless so you pay for only the resources you use and the service can au tomatically scale tables up and down in response to application traffic Amazon Web Services AWS Serverless Multi Tier Architectures Page 13 • Amazon Elastic File System (Amazon EFS) provides a simple serverless set andforget elastic file system that lets you share file data without provisioning or managing storage It can be used with AWS Cloud services and on premises resources and is built to scale on demand to petabytes without disrupting applications With Amazon EFS you can grow and shrink your file systems automa tically as you add and remove files eliminating the need to provision and manage capacity to accommodate growth Amazon EFS can be mounted with Lambda function which makes it a viable file storage option for APIs Nonserverless data storage options • Amazon Relational Database Service (Amazon RDS) is a managed web service that enables you to set up operate and scale a relational database using several engines (Aurora PostgreSQL MySQL MariaDB Oracle and Micro soft SQL Server) and running on several different database instance types that are optimized for memory performance or I/O • Amazon Redshift is a fully managed petabyte scale data warehouse service in the c loud • Amazon ElastiCache is a fully managed deployment of Redis or Memcached Seamlessly deploy run and scale popular open source compatible in memory data stores • Amazon Neptun e is a fast reliable fully managed graph database service that makes it simple to build and run applications that work with highly connected datasets Neptune supports popular graph models —property graphs and W3C Resource Description Framework (RDF)—and their respective query languages enabl ing you to easily build queries that efficiently navigate highly connected datasets • Amazon DocumentDB (with MongoDB compatibi lity) is a fast scalable highly available and fully managed document database service that supports MongoDB workloads • Finally you can also use data stores running independently on Amazon EC2 as the data tier of a multi tier application Amazon Web Services AWS Serverless Multi Tier Architectures Page 14 Presentation tier The presentation tier is responsible for interacting with the logic 
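As an example of a logic tier function reaching into a serverless data tier, the sketch below reads an item from DynamoDB with boto3; the tickets table name and its ticket_id key are assumptions, and the function's execution role would need dynamodb:GetItem permission on the table.

import json
import boto3

# The table name and its key attribute ("ticket_id") are assumptions made
# for this sketch.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("tickets")

def handler(event, context):
    ticket_id = (event.get("pathParameters") or {}).get("id", "unknown")
    result = table.get_item(Key={"ticket_id": ticket_id})
    item = result.get("Item")

    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}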
tier through the API Gateway REST endpoints exposed over the internet Any HTTPS capable client or device can communicate with these endpoints giving your presentation tier the flexibility to take many forms (desktop applications mobile apps webpages IoT devices and so forth) Depending on your requirements your presentation tier can use the following AWS serverless offerings: • Amazon Cognito – A serverless user identity and data synchronization service that enable s you to add user sign up sign in and access control to your web and mobile apps quickly and efficien tly Amazon Cognito scales to millions of users and supports sign in with social identity providers such as Facebook Google and Amazon and enterprise identity providers through SAML 20 • Amazon S3 with CloudFront – Enables you to serve static websites such as single page applications directly from an S3 bucket without requiring provision of a web server You can use CloudFront as a managed content delivery network (CDN ) to improve performance and enable SSL/TL using managed or custom certificates AWS Amplify is a set of tools and services that can be used together or on their own to help front end web and mobile developers build scalable full stack applications powered by AWS Amplify offers a fully ma naged service for deploying and hosting static web applications globally served by Amazon's reliable CDN with hundreds of points of presence globally and with built in CI/CD workflows that accelerate your application release cycle Amplify supports popula r web frameworks including JavaScript React Angular Vue Nextjs and mobile platforms including Android iOS React Native Ionic and Flutter Depending on your networking configurations and application requirements you m ight need to enable your API Gateway APIs to be cross origin resource sharing (CORS) – compliant CORS compliance allows web browsers to directly invoke your APIs from within static webpages When you deploy a website with CloudFront you are provided a CloudFront domain name to reach your application ( for example d2d47p2vcczkh2cloudfrontnet ) You can use Amazon Route 53 to register domain names and direct them to your CloudFront distribution or direct already owned domain names t o your CloudFront distribution This enable s users to access your site using a familiar domain name Note Amazon Web Services AWS Serverless Multi Tier Architectures Page 15 that you can also assign a custom domain name using Route 53 to your API Gateway distribution which enable s users to invoke APIs using familiar domai n names Sample architecture patterns You can implement popular architecture patterns using API Gateway and AWS Lambda as your logic tier This whitepaper includes the most popular architecture patterns that use AWS Lambda based logic tier s: • Mobile backend – A mobile application communicates with API Gateway and Lambda to access application data This pattern can be extended to generic HTTPS clients that don’t use serverless AWS resources to host presentation tier resources ( such as desktop clients web ser ver running on EC2 and so forth) • Single page application – A single page application hosted in Amazon S3 and CloudFront communicates with API Gateway and AWS Lambda to access application data • Web application – The web application is a general purpose event driven web application back end that uses AWS Lambda with API Gateway for its business logic It also uses DynamoDB as its database and Amazon Cognito for user management All static content is hosted using Amplify In 
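Where the presentation tier is a static site calling the API from the browser, CORS compliance can be handled directly in a proxy-integrated function as sketched below; the allowed origin reuses the example CloudFront domain from this section and would be replaced with your own distribution or custom domain.

import json

# The allowed origin below is a placeholder; in practice it is the
# CloudFront (or custom) domain that serves the single-page application.
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "https://d2d47p2vcczkh2.cloudfront.net",
    "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type,Authorization",
}

def handler(event, context):
    # Browsers send an OPTIONS preflight request before cross-origin calls;
    # answering it here keeps the API usable from static webpages.
    if event.get("httpMethod") == "OPTIONS":
        return {"statusCode": 204, "headers": CORS_HEADERS, "body": ""}

    return {
        "statusCode": 200,
        "headers": {**CORS_HEADERS, "Content-Type": "application/json"},
        "body": json.dumps({"message": "ok"}),
    }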
addition to t hese two patterns this whitepaper discuss es the applicability of AWS Lambda and API Gateway to a general microservice architecture A microservice architecture is a popular pattern that although not a standard three tier architecture involves decoupling application components and deploying them as stateless individual units of functionality that communicate with each other Amazon Web Services AWS Serverless Multi Tier Architectures Page 16 Mobile backend Architectural pattern for serverless mobile backend Amazon Web Services AWS Serverless Multi Tier Architectures Page 17 Table 1 Mobile backend tier components Tier Components Presentation Mobile application running on a user device Logic API Gateway with AWS Lambda This architecture shows three exposed services (/tickets /shows and /info ) API Gateway endpoints are secured by Amazon Cognito user pools In this method users sign in to Amazon Cognito user pools (using a federated third party if necessary) and receive access and ID tokens that are used to authorize API Gateway calls Each Lambda function is assigned its own Identity and Access Management (IAM) role to provide access to the appropriate data source Data DynamoDB is use d for the /tickets and /shows services Amazon RDS is used for the /info service This Lambda function retrieves Amazon RDS credentials from Secrets Manager and uses an elastic network interface to access the private subnet Single page application Architectural pattern for serverless single page application Amazon Web Services AWS Serverless Multi Tier Architectures Page 18 Table 2 Single page application components Tier Components Presentation Static website content is hosted in Amazon S3 and distributed by CloudFront AWS Certificate Manager allows a custom SSL/TLS certificate to be used Logic API Gateway with AWS Lambda This architecture shows three exposed services ( /tickets /shows and /info ) API Gateway endpoints are secured by a Lambda authorizer In this method users sign in through a third party identity provider and obtain access and ID tokens These tokens are included in API Gateway calls and the Lambda authorizer validates these tokens and generates an IAM policy containing API initiation permissions Each Lambda function is assigned its own IAM role to provide access to the appropria te data source Data DynamoDB is used for the /tickets and /shows services ElastiCache is used by the /shows service to improve database performance Cache misses are sent to DynamoDB Amazon S3 is used to host static content used by the /info service Amazon Web Services AWS Serverless Multi Tier Architectures Page 19 Web application Architectural pattern for web application Table 3 Web application components Tier Components Presentation The front end application is all static content (HTML CSS JavaScript and images ) which are generated by React utilities like create react app Amazon CloudFront hosts all these objects The web application when used downloads all the resources to the b rowser and starts to run from there The web application connects to the backend calling the APIs Logic Logic layer is built using Lambda functions fronted by API Gateway REST APIs This architecture shows multiple exposed services There are multiple d ifferent Lambda functions each handling a different aspect of the application The Lambda functions are behind API Gateway and accessible using API URL paths Amazon Web Services AWS Serverless Multi Tier Architectures Page 20 Tier Components The user authentication is handled using Amazon 
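The single-page application pattern above routes cache misses from ElastiCache to DynamoDB; a read-through handler for the /shows service might look roughly like the following, assuming the community redis-py client is packaged with the function and that the function runs in the VPC hosting the cluster. The endpoint, table name, key names, and five-minute TTL are all assumptions for this sketch.

import json
import os
import boto3
import redis  # community redis-py client, packaged with the function

# Endpoint, table, and key names are assumptions for this sketch. The
# function must run inside the VPC that hosts the ElastiCache cluster.
cache = redis.Redis(host=os.environ.get("REDIS_HOST", "shows-cache.example.internal"), port=6379)
table = boto3.resource("dynamodb").Table("shows")

def handler(event, context):
    show_id = (event.get("pathParameters") or {}).get("id", "unknown")
    cache_key = f"show:{show_id}"

    # Read-through cache: try Redis first, fall back to DynamoDB on a miss.
    cached = cache.get(cache_key)
    if cached is not None:
        body = cached.decode("utf-8")
    else:
        item = table.get_item(Key={"show_id": show_id}).get("Item") or {}
        body = json.dumps(item, default=str)
        cache.setex(cache_key, 300, body)  # keep the entry for five minutes

    return {"statusCode": 200, "headers": {"Content-Type": "application/json"}, "body": body}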
Cognito User Pools or federated user providers A PI Gateway uses out of box integration with Amazon Cognito Only after a user is authenticated the client will receive a JSON Web Token ( JWT) which it should then use when making the API calls Each Lambda function is assigned its own IAM role to provide access to the appropriate data source Data In this particular example DynamoDB is used for the data storage but other purpose built Amazon database or storage services can be used depending o n the use case and usage scenario Microservices with Lambda Architectural pattern for microservices with Lambda The microservice architecture pattern is not bound to the typical three tier architecture; however this popular pattern can realize significant benefits from the use of serverless resources In this architecture each of the application components are decoupled and indepe ndently deployed and operated An API created with API Gateway and functions Amazon Web Services AWS Serverless Multi Tier Architectures Page 21 subsequently launch ed by AWS Lambda is all that you need to build a microservice Your team can use these services to decouple and fragment your environment to the level of gran ularity desired In general a microservices environment can introduce the following difficulties: repeated overhead for creating each new microservice issues with optimizing server density and utilization complexity of running multiple versions of multi ple microservices simultaneously and proliferation of client side code requirements to integrate with many separate services When you create microservices using serverless resources these problems become less difficult to solve and in some cases simpl y disappear The serverless microservices pattern lowers the barrier for the creation of each subsequent microservice (API Gateway even allows for the cloning of existing APIs and use of Lambda functions in other accounts) Optimizing server utilization i s no longer relevant with this pattern Finally API Gateway provides programmatically generated client SDKs in a number of popular languages to reduce integration overhead Conclusion The multi tier architecture pattern encourages the best practice of cre ating application components that are simple to maintain decouple and scale When you create a logic tier where integration occurs by API Gateway and computation occurs within AWS Lambda you realize these goals while reducing the amount of effort to achieve them Together these services provide a n HTTPS API front end for your clients and a secure environment to apply your business log ic while removing the overhead involved with managing typical server based infrastructure Contributors Contributors to this document include : • Andrew Baird AWS Solutions Architect • Bryant Bost AWS ProServe Consultant • Stefano Buliani Senior Product Manage r Tech AWS Mobile • Vyom Nagrani Senior Product Manager AWS Mobile Amazon Web Services AWS Serverless Multi Tier Architectures Page 22 • Ajay Nair Senior Product Manager AWS Mobile • Rahul Popat Global Solutions Architect • Brajendra Singh Senior Solutions Architect Further reading For additional information refer to : • AWS Whitepapers and Guides Document revisions Date Description Octo ber 20 2021 Updated for new service features and patterns June 1 2021 Updated for new service features and patterns September 25 2019 Updated for new service features November 1 2015 First publication
General
Managing_Your_AWS_Infrastructure_at_Scale
ArchivedManaging Your AWS Infrastructure at Scale Shaun Pearce Steven Bryen February 2015 This paper has been archived For the latest technical guidance on AWS Infrastructure see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 2 of 32 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations con tractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreem ent between AWS and its customers ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 3 of 32 Contents Abstract 4 Introduction 4 Provisioning New EC2 Instances 6 Creating Your Own AMI 7 Managing AMI Builds 9 Dynamic Configuration 12 Scripting Your Own Solution 12 Using Configuration Management Tools 16 Using AWS Services to Help Manage Your Environments 22 AWS Elastic Beanstalk 22 AWS OpsWorks 23 AWS CloudFormation 24 User Data 24 cfninit 25 Using the Services Together 26 Managing Application and Instance State 27 Structured Application Data 28 Amazon RDS 28 Amazon DynamoDB 28 Unstructured Application Data 29 User Session Data 29 Amazon ElastiCache 29 System Metrics 30 Amazon CloudWatch 30 Log Management 31 Amazon CloudWatch Logs 31 Conclusion 32 Further Reading 32 ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 4 of 32 Abstract Amazon Web Services (AWS) enables organizations to deploy large scale application infrastructures across multiple geographic locations When deploying these large cloud based applications it’s important to ensure that the cost and complexity of operating such systems does not increase in direct proportion to their size This whitepaper is intended for existing and potential customers —especially architects developers and sysops administrators —who want to deploy and manage their infrastructure in a scalable and predictable way on AWS In this whitepaper we describe tools and techniques to provision new instances configur e the instances to meet your requirements and deploy your application code We also introduce strategies to ensure that your instances remain stateless resulting in an architecture that is more scalable and fault tolerant The techniques we describe allow you to scale your service from a single instance to thousand s of instances while maintaining a consistent set of processes and tool s to manage them For the purposes of this whitepaper w e assume that you have knowledge of basic scripting and core services such as Amazon Elastic Compute Cloud (Amazon EC2) Introductio n When designing and implementing large cloud based applications it’s important to consider how your infrastructure will be managed to ensure the cost and complexity of running such systems is minimiz ed When you first begin using Amazon EC2 it is easy 
to manage your EC2 instances just like regular virtualized servers running in your data center You can create an instance log in configure the operating system install any additional packages and install your applic ation code You can main tain the instance by installing security patches rolling out new deployments of your code and modifying the configuration as needed Despite the operational overhead you can continue to manage your instances in this way for a long time However your in stances will inevitably begin to diverge from their original specification which can lead to inconsistencies with other instances in the same environment This divergence from a known baseline can become a huge challenge when managing large fleets of instances across multiple environments Ultimately it will lead to service issues because your environments will become less predictable and more difficult to maintain The AWS platform provides you with a set of tools to address this challenge with a different approach By using Amazon EC2 and associated services you can specify and manage the desired end state of your infrastructure independently of the EC2 instances and other running components ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 5 of 32 For example with a traditional approach you would alter the configuration of an Apache server running across your web servers by logging in to each server in turn and manually mak ing the change By using the AWS platform you can take a different approach by chang ing the underlying specification of your web servers and launch ing new EC2 instances to replace the old ones This ensures that each instance remains identical; it also reduces the effort to implement the change and reduces the likelihood of errors being introduced When you start to think of yo ur infrastructure as being defined independently of the running EC2 instances and other components in your environments you can take greater advantage of the benefits of dynamic cloud environment s: • Software defined infrastructure – By defining your infrastructure using a set of software art ifacts you can leverage many of the tools and techniques that are used when developing software components This includes managing the evolution of your infrastructure in a version control system as well as using continuous integration (CI) processes to continually test and validate infrastructure changes befo re deploying them to production • Auto Scaling and selfhealing – If you automatically provision your new instances from a consistent specification you can use Auto Scaling groups to manage the number of instances in an EC2 fleet For example you can set a condition to add new EC2 instances in increments to the Auto Scaling group when the average utilization of your EC2 fleet is high You can also use Auto Scaling to detect impaired EC2 instances and unhealthy applications and replace the instances without your intervention • Fast environment provisioning – You can quickly and easily provision c onsistent environments which opens up new ways of working within your teams For example you can provision a new environment to allow testers to validate a new version of your application in their own personal test environment s that are isolated from other changes • Reduce costs – Now that you can provision environments quickly you also have the option to remove them when they are no longer needed This reduce s costs because you pay only for the resources that you use • Blue green deployments – 
You can deploy new versions of your application by provisioning new instances (containing a new version of the code) beside your existing infrastructure Y ou can then switch traffic between environments in an approach known as bluegreen deployments This has many benefits over traditional deployment strategies including the ability to quickly and easily roll back a deployment in the event of an issue To leverage these advantages your infrastructure must have the following capabilities: ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 6 of 32 1 New infrastructure components are automatically provisioned from a known version controlled baseline in a repeatable and predictable manner 2 All instances are stateless so that they can be removed and destroyed at any time without the risk of losing applicat ion state or system data The following figure shows the overall process: Figure 1: Instance Lifecycle and State M anagement The following sections outline tools and techniques that you can use to build a system with these capabilities By moving to an architecture where your instances can be easily provisioned and destroyed with no loss of data you can fundamentally change the way you m anage your infrastructure Ultimately you can scale your infrastructure over time without significantly increasing the operational overhead associated with it Provisioning New EC2 Instances A number of external events will require you to provision new inst ances into your environment s: • Creating new instances or replicating existing environments • Replacing a failed instance in an existing environment • Responding to a “sca le up” event to add additional instances to an Auto Scaling group • Deploying a new version of your software stack (by using bluegreen deployments ) Some of these events are difficult or even impossible to predict so it’s important that the process to create new instances into your environment is fully automated repeatable and consistent The process of automatically provisioning new instances and bringing them into service is known as bootstrapping There are multiple approaches to bootstrap ping your Amazon EC2 instances The two most popular approaches are to either create your own EC2 Instance Version Control System1 Durable Storage 2ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 7 of 32 Amazon Machine Ima ge (AMI) or to use dynamic configuration We explain both approaches in the following sections Creating Your Own AMI An Amazon Machine Image (AMI) is a template that provides all of the information required to launch an Amazon EC2 instance At a minimum it contains the base operating system but it may also include additional configuration and software You can launch multiple instances of an AMI and you can also launch different types of instances from a single AMI You have several options when launch ing a new EC2 instance : • Select an AMI provided by AWS • Select an AMI provided by the community • Select an AMI containing preconfigured software from the AWS Marketplace1 • Create a custom AMI If launch ing an instance from a base AMI containing only the operating system you can further customiz e the instance with additional configuration and software afte r it has been launched I f you create a custom AMI you can launch an instance that already contains your complete software stack thereby removing the need for a ny runtime configuration However b efore you decide whether to create a custom AMI you should 
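Provisioning a new instance in the fully automated, repeatable way described above can be scripted with boto3 roughly as follows; the AMI, subnet, instance profile, and bucket names are placeholders, and the user data simply downloads and runs a base configuration script of the kind discussed later in this paper.

import boto3

ec2 = boto3.client("ec2")

# All identifiers below (AMI, subnet, instance profile, bucket) are
# placeholders for this sketch.
user_data = """#!/bin/sh
yum update -y
aws s3 cp s3://example-bootstrap-bucket/base-config.sh /tmp/base-config.sh
sh /tmp/base-config.sh
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    IamInstanceProfile={"Name": "bootstrap-instance-profile"},
    UserData=user_data,  # executed by cloud-init at first boot
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "environment", "Value": "production"},
            {"Key": "role", "Value": "web"},
        ],
    }],
)
print("Launched", response["Instances"][0]["InstanceId"])

The environment and role tags applied at launch are what the bootstrapping scripts described later use to select the correct configuration.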
understand the advantages and disadvantages Advantages of custom AMIs • Increases s peed – All configuration is packaged into the AMI itself which significantly increases the speed in which new instances can be launched This is particularly useful during Auto Scaling events • Reduce s external dependencies – Packaging everything into an AMI mean s that there is n o dependenc y on the availability of external services when launching new instances ( for example package or code repositories) • Remove s the reliance on complex configuration scripts at launch time – By preconfiguring your AMI scaling events and instance replacement s no longer rely on the successful completion of configuration scripts at launch time This reduces the likelihood of operational issues caused by erroneous scripts Disadvantages of custom AMIs 1 https://awsamazoncom/marketplace ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 8 of 32 • Loss of agility – Packaging everything into an AMI means that even simple code changes and defect fixes will require you to produce a new AMI This increase s the time it takes to develop test and release enhancements and fixes to your application • Complexity – Managing the A MI build process can be complex You need a process that enables the creation of consistent repeatable AMIs where the changes between revisions are identifiable and auditable • Runtime configuration requirements – You might need to make additional customizations to your AMIs based on runtime information that cannot be known at the time the AMI is created For example the database connection string required by your application might change depending on where the AM I is used Given the se advantages and disadvantages we recommend a hybrid approach : build static components of your stack into AMIs and configure dynamic aspects that change regularly (such as application code) at run time Consider the following factors to help you decide what configuration to include within a custom AMI and what to include in dynamic run time scripts: • Frequency of deployments – How often are you likely to deploy enhancements to your system and at what level in your stack will you make the deployments? For example you might deploy changes to your application on a daily basis but you might upgrade your JVM version far less frequently • Reduction on external dependencies – If the configuration of your system depends on other external syst ems you might decide to carry out these configuration steps as part of an AMI build rather than at the time of launching an instance • Requirements to scale quickly – Will your application use Auto Scaling groups to adjust to changes in load? If so how quickly will the load on the application increase? 
This will dictate the speed in which you need to provision new instances into your EC2 fleet Once you have assessed your application stack based on the preceding criteria you can decide which element s of your stack to include in a custom AMI and which will be configured dynamically at the time of launch The following figure show s a typical Java web application stack and how it could be manage d across AMIs and dynamic scripts ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 9 of 32 Figure 2: Base Foundational and Full AMI Models In the base AMI model only the OS image is maintained as an AMI The AMI can be an AWS managed image or an AMI that you manage that contains your own OS image In the foundational AMI model elements of a stack that change infrequently ( for example components such as the JVM and application server) are built into the AMI In the full stack AMI model a ll elements of the stack are built into the AMI This model is useful if your applicatio n changes infrequently or if your application has rapid auto scaling requirements (which means that dynamically installing the application isn’t feasible ) However e ven if you build your application into the AMI it still might be advantageous to dynamic ally configure the application at run time because it increases the flexibility of the AMI For example it enables you to use your AMIs across multiple environments Managing AMI Builds Many people start by manually configur ing their AMIs using a process similar to the following : 1 Launch the latest version of the AMI 2 Log in to the instance and manually reconfigure it (for example by making package updates or installing new application s) 3 Create a new AMI based on the running instance EC2 InstanceOSJVM OS Users & GrpsTomcatApacheApp FrameworksApplication Code Base AMI Bootstrapping CodeApp Config EC2 InstanceOSJVM OS Users & GrpsTomcatApacheApp FrameworksApplication Code Foundational AMI Bootstrapping CodeApp Config EC2 InstanceOSJVM OS Users & GrpsTomcatApacheApp FrameworksApplication Code Full stack AMI Bootstrapping CodeApp ConfigArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 10 of 32 Although this manual process is sufficient for simple applications it is difficult to manage in more complex environments where AMI updates are needed regularly It’s essential to have a consistent repeatable process to create your AMIs It’s also important to be able to audit what has changed between one version of your AMI and another One way to achieve this is to manage the customization of a base AMI by using automated scripts You can develop your own scripts or you can use a configuration management tool For more information about configuration management tools see the Using Configuration Management Tools section in this whitepaper Using automated scripts has a number of advantages over the manual method Automat ion significantly speed s up the AMI creation process In addition you can use version control for your scripts/configuration files which results in a repeatable process where the change between AMI versions is transparent and auditable This automated process is similar to the manual process: 1 Launch the latest version of the AMI 2 Execute the automated configuration using your tool of choice 3 Create a new AMI image based on the running instance You can use a third party tool such as Packer 2 to help automat e the process However many find that this approach is still too time consuming for an 
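The three-step automated build described above can be orchestrated with boto3 along the following lines; the base AMI identifier is a placeholder, and the configuration step itself is omitted because in practice it would be performed by your provisioning scripts or a configuration management tool.

import time
import boto3

ec2 = boto3.client("ec2")

BASE_AMI = "ami-0123456789abcdef0"  # placeholder: latest version of the base AMI

# 1. Launch a build instance from the latest version of the AMI.
instance_id = ec2.run_instances(
    ImageId=BASE_AMI, InstanceType="t3.micro", MinCount=1, MaxCount=1
)["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# 2. Run the automated configuration here, for example through AWS Systems
#    Manager Run Command or a configuration management tool (omitted).

# 3. Create a new AMI from the configured instance and wait for it to be
#    available before terminating the build instance.
image_id = ec2.create_image(
    InstanceId=instance_id,
    Name=f"web-server-{int(time.time())}",
    Description="Automated AMI build",
)["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[image_id])
ec2.terminate_instances(InstanceIds=[instance_id])
print("New AMI:", image_id)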
environment with multiple frequent AMI builds across multiple environments If you use the Linux operating system you can reduce the time it takes to create a new AMI by customi zing an Amazon Elastic Block Store (Amazon EBS) volume rather than a running instance An Amazon EBS volume is a durable block level storage device that you can attach to a single Amazon EC2 instance It is possible to creat e an Amazon EBS volume from a base AMI snapshot and customise this volume before storing it as a new AMI This replaces the time taken to initializ e an EC2 instance with the far shorter time needed to create and attach an EBS volume In addition this approach makes use of the incremental nature of Amazon EBS snapshots An EBS snapshot is a point intime backup of an EBS volume th at is stored in Amazon S3 Snapshots are incremental backups meaning that only the blocks on the device that have changed after your most recent snapshot are saved For example i f a configuration update changes only 100 MB of the blocks on an 8 GB EBS volume only 100 MB will be stored to Amazon S3 2 https://packerio ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 11 of 32 To achieve this you need a long running EC2 instance that is responsible for attaching a new EBS volume based on the latest AMI build executing the scripts needed to customiz e the volume creating a snapshot of the volume and registering the snapshot as a new version of your AMI For example Netflix uses t his technique in their open source tool called aminator 3 The following figure shows this process Figure 3: Using EBS Snapshots to Speed Up D eployments 1 Create the volume from the latest AMI snapshot 2 Attach the volume to the instance responsible for building new AMIs 3 Run automated provisioning scripts to update the AMI configuration 4 Snapshot the volume 5 Register the snapshot as a new version of the AMI 3 https://githubcom/Netflix/aminator ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 12 of 32 Dynamic Configuration Now that you have decided what to include into your AMI and what should be dynamically configured at run time you need to decide how to complete th e dynamic configuration and bootstrapping process There are many tools and techniques that you can use to configure your instances ranging from simple scripts to complex centralized configuration management tools Scripting Your Own Solution Depending on how much pre configuration has been included into your AMI you might need only a single script or set of scripts as a simple elegant way to configure the final elements of your application stack User Data and cloudinit When you launch a ne w EC2 instance by using either the AWS Management Console or the API you have the option of passing u ser data to the instance You can retrieve the user data from the instance through the EC2 m etadata service and use it to perform automated tasks to conf igure instances as they are first launched When a Linux instance is launched the initialization instructions passed into the instance by means of the user data are executed by using a technology called cloudinit The cloudinit package is an open source application built by Canonical It’s included in many base Linux AMIs (to find out if your distribution supports cloudinit see the distribution specific documentation) Amazon Linux a Linux distribution created and maintained by AWS contains a customized version of cloudinit You can pass two types of user data either shell 
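A simplified outline of the EBS-volume-based approach is sketched below; the snapshot, builder instance, device names, and Availability Zone are placeholders, and the step in which the attached volume is mounted and customized on the builder instance is omitted.

import boto3

ec2 = boto3.client("ec2")

BASE_SNAPSHOT_ID = "snap-0123456789abcdef0"  # snapshot behind the current AMI
BUILDER_INSTANCE = "i-0123456789abcdef0"     # long-running build instance
AZ = "us-east-1a"

# 1. Create a volume from the latest AMI's snapshot and attach it to the
#    builder instance.
volume_id = ec2.create_volume(SnapshotId=BASE_SNAPSHOT_ID, AvailabilityZone=AZ)["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(VolumeId=volume_id, InstanceId=BUILDER_INSTANCE, Device="/dev/sdf")

# 2. Mount the volume on the builder instance and run the provisioning
#    scripts against it (performed on the instance itself, omitted here).

# 3. Snapshot the customized volume; only changed blocks are stored.
ec2.detach_volume(VolumeId=volume_id)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
snapshot_id = ec2.create_snapshot(VolumeId=volume_id, Description="Customized root volume")["SnapshotId"]
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])

# 4. Register the snapshot as a new version of the AMI.
image_id = ec2.register_image(
    Name="web-server-v2",
    RootDeviceName="/dev/xvda",
    VirtualizationType="hvm",
    BlockDeviceMappings=[{"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": snapshot_id}}],
)["ImageId"]
print("Registered AMI:", image_id)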
scripts or cloudinit directives to cloudinit running on your EC2 instance For example the following shell script can be passed to an instance to update all installed p ackages and to configure the instance as a PHP web server : #!/bin/sh yum update y yum y install httpd php php mysql chkconfig httpd on /etc/initd/httpd start The following user data achieve s the same result but us es a set of cloudinit directives: #cloudconfig ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 13 of 32 repo_update: true repo_upgrade: all packages: httpd php phpmysql runcmd: service httpd start chkconfig httpd on AWS Windows AMIs contain an additional service EC2Config that is installed by AWS The EC2Config service performs tasks on the instance such as activating Windows setting the Administrator password writing to the AWS console and performing one click sysprep from within the application If launching a Windows instance the EC2Config service can also execut e scripts passed to the instance by means of the user data The data can be in the form of commands that you run at the cmdexe prompt or Windows PowerShell prompt This approach work s well for simple use cases However as the number of instance roles (web d atabase and so on) grows along with the number of environments that you need to manage your scripts m ight become large and difficult to maintain Additionally user data is limited to 16 KB so if you have a large number of con figuration tasks and associated logic we recommend that you use the user data to download additional scripts from Amazon S3 that can then be executed Leveraging EC2 Metadata When you configur e a new instance you typically need to understand the context in which the instance is being launched For example you m ight need to know the hostname of the instance or which region or Availability Zone the instance has been launched into The EC2 metadata service can be queried to provide such contextual information about an instance as well as retrieving the user data To access the instance metadata from within a running instance you can make a standard HTTP GET using tools such as cURL or the GET command For example to retrieve the host name of the instance you can make an HTTP GET request to the following URL: http://169254169254/latest/meta data/hostname ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 14 of 32 Resource Tagging To help you manage your EC2 resources you can assign your own metadata to each instance in addition to the EC2 metadata that is used to define hostnames Availability Zones and other resources You do this with tags Each tag consists of a key and a value both of which you define when the instance is launched You can use EC2 tags to define further context t o the instance being launched For example you can tag your instances for different environments and roles as shown in the following figure Figure 4: Example of E C2 Tag U sage As long as your EC2 instance has access to the Internet these tags can be retrieved by using the AWS Command Line Interface (CLI) within your bootstrapping scripts to configure your instances based on their role and the environment in which they are being launched Putting it all Together The following figure shows a typical boo tstrapping process using user data and a set of configuration scripts hosted on Amazon S3 i1bbb2637environment = production role = web if2871adeenvironment = dev role = app Key ValueArchivedAmazon Web Services – Managing Your AWS 
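A bootstrap helper combining the metadata service, resource tags, and Amazon S3, as described in this section, might be sketched in Python as follows; the bucket and object names are placeholders, and the metadata calls use the IMDSv1-style path shown above for brevity (production systems should use IMDSv2 session tokens).

import subprocess
import urllib.request
import boto3

METADATA = "http://169.254.169.254/latest/meta-data"

def metadata(path):
    # IMDSv1-style call for brevity; production systems should obtain and
    # pass an IMDSv2 session token instead.
    return urllib.request.urlopen(f"{METADATA}/{path}", timeout=2).read().decode()

instance_id = metadata("instance-id")
region = metadata("placement/availability-zone")[:-1]

# Look up the instance's own "role" and "environment" tags.
ec2 = boto3.client("ec2", region_name=region)
tags = {
    t["Key"]: t["Value"]
    for t in ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [instance_id]}]
    )["Tags"]
}

# Download and run the matching overlay scripts; the bucket and object
# names are placeholders for this sketch.
s3 = boto3.client("s3", region_name=region)
for key in (f"roles/{tags.get('role', 'default')}.sh",
            f"environments/{tags.get('environment', 'dev')}.sh"):
    local = f"/tmp/{key.replace('/', '-')}"
    s3.download_file("example-bootstrap-bucket", key, local)
    subprocess.run(["sh", local], check=True)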
Infrastructure at Scale February 2015 Page 15 of 32 Figure 5: Example of an End toEnd W orkflow This example uses the user data as a lightweight mechanism to download a base configuration script from Amazon S3 The script is responsible for configuring the system to a baseline across all instances regardless of role and environment (for example the script m ight install monitoring agents and ensure that the OS is patched ) This base configuration script use s the CLI to retrieve the instances tags Based on the value of the “role” tag the script download s an additional overlay script responsible for the additional configuration required for the instance to perform its specific role ( for example installing Apache on a web server) Finally the script use s the instances “environment” tag to download an appropriate environment overlay script to carry out the EC2 API Amazon EC2 Instance Amazon S3 BucketBase ConfigurationUser Data Server Role Overlay Scripts Environment Overlay ScriptsRetrieve and process User Data Download base config and executeEC2 Metadata Service Retrieve server role from EC2 API download and execute appropriate script Retrieve server environment from EC2 API download and execute appropriate script Bootstrap CompleteReceive user data and expose via metadata service describetags describetagsInstance Launch RequestArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 16 of 32 final configuration for the environment the instance resides in ( for example setting log levels to DEBUG in the development environment) To protect sensitive information that m ight be contained in your scripts you should restrict access to these assets by using IAM Roles 4 Using Configuration Management Tools Although scripting your own solution works it can quickly become complex when managing large environments It also can become difficult to govern and audit your environment such as identifying change s or troubleshoot ing configuration issues You can address some of these issues by using a configuration management tool to manage instance configurations Configuration management tools allow you to define your environment ’s configuration in code typically by using a domain specific language These domain specific languages use a declarative approach to code where the code describes the end state and is not a script that can be executed Because the environment is defined using code you can track changes to the configuration and apply version control Many configuration management tools also offer additional features such as compliance auditing and search Push vs Pull Models Configuration management tools typically leverage one of two models push or pull The model used by a tool is defined by how a node (a target EC2 instance in AWS) interacts with the master configuration management server In a push model a master configuration management server is aware of the nodes that it needs to manage and pushes the configuration to them remotely These nodes need to be pre registered on the master server Some push tools are agentless and execute configuration remotely using existing protocols such as SSH Others push a package which is then executed locally using an agent The push model typi cally has some constraints when working with dynamic and scalable AWS resources: • The master server needs to have information about the nodes that it needs to manage When you use tools such as Auto Scaling where nodes might come and go this can be a challenge • Push systems that do remote 
execution do not scale as well as systems where configuration changes are offloaded and executed locally on a node In large 4 http://docsaws amazoncom/AWSEC2/latest/UserGuide/iam roles foramazon ec2html ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 17 of 32 environments the master server m ight get overloaded when config uring multiple systems in parallel • Connecting to nodes remotely requires you to allow specific ports to be allowed inbound to your nodes For some remote execution tools this includes remote SSH The second model is the pull model Configuration management tools that use a pull system use an agent that is installed on a node The agent asks the master server for configuration A node can pull its configuration at boot time or agents can be daemonized to poll the master periodically for configuration changes Pull systems are especially useful for managing dynamic and scalable AWS environments Following are the main benefits of the pull model : • Nodes can scale up and down easily as the master does not need to know they exist before they can be configured Nodes can simply register themselves with the server • Configuration management masters require less scaling when using a pull system because all processing is offloaded and executed locally on the remote node • No specifi c ports need to be opened inbound to the nodes Most tools allow the agent to communicate with the master server by using typical outbound ports such as HTTPS Chef Example Many configuration management tools work with AWS Some of the most popular are Chef Puppet Ansible and SaltStack For our example in this section we use Chef to demonstrate bootstrapping with a configuration management tool You c an use other tools and apply the same principles Chef is an open source configuration management tool that uses an agent (chef client) to pull configuration from a master server (Chef server) Our example shows how to bootstrap nodes by pulling configuration from a Chef server at boot time The example is based on the following assumptions: • You have configured a Chef server • You have an AMI that has the AWS command line tools installed and configured • You have the chefclient installed and included into your AMI First let’s look at what w e are going to configure within Chef We’ll create a simple Chef cookbook that installs an Apache web server and deploys a ‘Hello World’ site A C hef cookbook is a collection of recipes; a recipe is a definition of resources that should be configured on a node This can include files packages permissions and more The default recipe for this Apache cookbook might look something like this: ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 18 of 32 # # Cookbook Name:: apache # Recipe:: default # # Copyright 2014 YOUR_COMPANY_NAME # # All rights reserved Do Not Redistribute # package "httpd" #Allow Apache to start on boot service "httpd" do action [:enable :start] end #Add HTML Template into Web Root template "/var/www/html/indexhtml" do source "indexhtmlerb" mode "0644" end In this recipe we install enable and start the HTTPD (HTTP daemon) service Next w e render a template for indexhtml and place it into the /var/www/html directory The indexhtmlerb template in this case is a very simple HTML page : <h1>Hello World</h1> Next the cookbook is uploaded to the Chef server Chef offers the a bility to group cookbooks into r oles Roles are useful in large scale environment s where servers within your 
environment m ight have many different r oles and cookbooks might have overlapping roles In our example w e add this cookbook to a role called ‘webserver’ Now when we launch EC2 instances (nodes) we can provide EC2 user data to bootstrap them by using Chef To make this as dynamic as possible we can use an EC2 tag to define which Chef role to apply to our node This allows us to use the same user data script for all nodes whichever role is intended for them For example a web server and a database server can use the same user data if you assign different values to the ‘role’ tag in EC2 We also need to consider how our new instance will authenticate with th e Chef server We can store our private key in an encrypted Amazon S3 bucket by using Amazon S3 ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 19 of 32 server side encryption5 and we can restrict access to this bucket by using IAM r oles The key can then be used to authenticate with the Chef ser ver The chef client uses a validatorpem file to authenticate to the Chef server when registering new nodes We also need to know which Chef server to pull our configuration from W e can store a prepopulated clientrb file in Amazon S3 and copy this within our user data script You might want to dynamically populate this clien trb file depending on environment but for our example we assume that we have only one Chef server and that a pre populated clientrb file is sufficient You could also include these two files into your custom AMI build The user data would look like this: #!/bin/bash cd /etc/chef #Copy Chef Server Private Key from S3 Bucket aws s3 cp s3://s3 bucket/orgname validatorpem orgname validatorpem #Copy Chef Client Configuration File from S3 Bucket aws s3 cp s3://s3 bucket/clientrb clientrb #Change permiss ions on Chef Server private key chmod 400 /etc/chef/orgname validatorpem #Get EC2 Instance ID from the Meta Data Service INSTANCE_ID =`curl s http://169254169254/latest/meta data/instance id` #Get Tag with Key of ‘role’ for this EC2 instance ROLE_TAG=$(aws ec2 describe tags filters "Name=resource idValues=$ INSTANCE_ID " "Name=keyValues=role" output text) #Get value of Tag with Key of ‘role’ as string ROLE_TAG_VALUE=$(echo $ROLE_TAG | awk 'NF>1{print $NF}') #Create first_bootjson file dynamically adding the tag value as the chef role in the run list echo "{\ "run_list\ ":[\"role[$ROLE_TAG_VALUE] \"]}" > first_bootjson 5 http://docsawsamazoncom/AmazonS3/latest/dev/UsingServerSideEncryptionhtml ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 20 of 32 #execute the chef client using first_bootjson config chefclient j first_bootjson #daemonize the chef client to run every 5 minutes chefclient d i 300 s 30 As shown i n the preceding user data example we copy our client configuration files from a private S3 bucket We then use the EC2 metadata service to get some information about the instance ( in this example Instance ID) Next we query the Amazon EC2 API for any tags with the key of ‘role ’ and dynamically configure a Chef run list with a C hef role of this value Finally we execute the first chef client run by providing the first_bootjson options which include our new run list We then execute chef client once more ; however this time we execute it in a daemonized setup to pull configuration every 5 minutes We now have some re usable EC2 user data that we can apply to any new EC2 instances As long as a ‘role’ tag is provided with a value that matches a role on 
the target Chef server the instance will be configured using the corresponding Chef cookbooks Putting it all Together The following figure shows a typical workflow from instance laun ch to a fully configured instance that is ready to serve traffic ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 21 of 32 Figure 6: Example of an End toEnd W orkflow EC2 APIEC2 API Amazon EC2 Instance Amazon S3 BucketUser Data Chef config filesRetrieve and process User Data Download private key and clientrb from S3 bucketEC2 Metadata Service Retrieve server role from EC2 API Configure first_bootson to use chef role with tag value Bootstrap CompleteReceive user data and expose via metadata service describetags describetagsInstance Launch Request Pull Config from Chef Server and configure instanceArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 22 of 32 Using AWS Services to Help Manage Your Environments In the preceding sections we discussed tools and techniques that systems administrators and developers can use to provision EC2 instances in a n automated predictable and repeatable manner AWS also provides a range of application management services that help make this proces s simpler and more productive The following figure shows how to sele ct the right service for your application based on the level of control that you require Figure 7: AWS Deployment and Management Services In addition to provisioning EC2 instances these services can also help you to provision any other associated AWS components that you need in your systems such as Auto Scaling groups load balancers and networking components We provide more information about how to use these services in the following sections AWS Elastic Beanstalk AWS Elastic Beanstalk allows web developers to easily upload code without worrying about managing or implementing any underlying infrastructure components Elastic Beanstalk takes care of deployment capacity provisioning load balancing auto scaling and application health monitoring I t is worth noting that Elastic Beanstalk is not a black box service: You have full visibility and control of the underlying AWS resources that are deployed such as EC2 instances and load balancers Elastic Beanstalk supports deployment of Java NET Ruby PHP Python Nodejs and Docker on familiar servers such as Apache Nginx Passenger and IIS Elastic Beanstalk provides a default configuration but you can extend the configuration as needed For example you m ight want to install additional packages from a yum repository or copy configuration files that your application depends on such as a replacement for httpdconf to override specific settings ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 23 of 32 You can write the c onfiguration files in YAML or JSON format and create the files with a config file e xtension You then place the files in a folder in the application root named ebextensions You can use c onfiguration files to manage packages and services work with files and execute commands For more information about using and extending Elastic Beanstalk see AWS Elastic Beanstalk Documentation 6 AWS OpsWorks AWS OpsWorks is an application management service that makes it easy to deploy and manage any applic ation and its required AWS resources With AWS OpsWorks you build application stacks that consist of one or many layers You configure a layer by using an AWS OpsWorks configuration a custom configuration or a mix 
of both AWS OpsWorks uses Chef the open source configuration managem ent tool to configure AWS r esources This gives you the ability to provide your own custom or community Chef recipes AWS OpsWorks features a set of lifecycle events —Setup Configure Deploy Undeploy and Shutdown —that automatically run the appropriate recipes at the appr opriate time on each instance AWS OpsWorks provides some AWS managed layers for typical application stacks These layers are open and customi zable which mean s that you can add additional custom recipes to the layers provided by AWS OpsWorks or create custom layers from scratch using your existing recipes It is important to ensure that the correct recipes are associated with the correct lifecycle events Lifecycle events run during the following times: • Setup – Occurs on a new instance after it successfully boots • Configure – Occurs on all of the stack’s instances when an instance enters o r leaves the online state • Deploy – Occurs when you deploy an app • Undeploy – Occurs when you delete an app • Shutdown – Occurs when you stop an instance For example the c onfigure event is useful when building distributed systems or for any system that needs to be aware of when new instances are added or removed from the stack You c ould use this event to update a load balancer when new web servers are added to the stack 6 http://awsamazoncom/documentation/elastic beanstalk/ ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 24 of 32 In addition to typical server configuration AWS OpsWorks manages application deployment and integrates with your application’s code repository This allows you to track application versions and rollback deployments if needed For mo re information about AWS OpsWorks see AWS OpsWorks Documentation 7 AWS CloudFormation AWS CloudFormation gives developers and systems administrators an eas y way to create and manage a collection of related AWS resources provisioning and updating them in an orderly and predictable fashion Compared to Elastic Beanstalk and AWS OpsWorks AWS CloudFormation gives you the most control and flexibility when provisioning resources AWS CloudFormation allows you to manage a broad set of AWS resources For the purposes of this white paper we focus on the features that you can use to bootstrap your EC2 instances User Data Earlier in this whitepaper we described t he process of using user data to configure and customize your EC2 instances (see Scripting Your Own Solution ) You also can include user data in a n AWS CloudFormation template which is executed on the instance once it is created You can include u ser data when specifying a single EC2 instance as well as when specifying a launch configuration The following example shows a launch configuration that provision s instances configured to be PHP web server s: "MyLaunchConfig" : { "Type" : "AWS::AutoScaling::LaunchConfiguration" "Properties" : { "ImageId" : "i 123456" "SecurityGroups" : "MySecurityGroup" "InstanceType" : "m3medium" "KeyName" : "MyKey" "UserData": {"Fn::Base64": {"Fn::Join":[""[ "#!/bin/bash \n" "yum update y\n" "yum y install httpd php php mysql\n" "chkconfig httpd on \n" "/etc/initd/httpd start \n" ]]}} 7 http://awsamazoncom/documentation/opsworks/ ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 25 of 32 } } cfninit The cfninit s cript is an AWS CloudFormation helper scri pt that you can use to specify the end state of an EC2 instance in a more declarative manner The 
cfninit script is installed by default on Amazon Linux and AWS supplied Windows AMIs Administrators can also install cfninit on other Linux distributions and then include this into their own AMI if needed The cfninit script parses metadata from the AWS CloudFormation template and uses the metadata to customiz e the instance accordingly The cfninit script can do the followin g: • Install packages from packa ge repositories ( such as yum and aptget) • Download and unpack archives such as zip and tar files • Write files to disk • Execute arbitrary commands • Create users and groups • Enable /disable and start/stop services In an AWS CloudFormation template t he cfninit helper script is called from the user data Once it is called it will inspect the metadata associated with the resource passed into the request and then act accordingly For example you can use the following launch configuration metadata to instruct cfn init to configure an EC2 instance to become a PHP web server (similar to the preceding user data example): "MyLaunchConfig" : { "Type" : "AWS::AutoScaling::LaunchConfiguration" "Metadata" : { "AWS::CloudFormation::Init" : { "config" : { "packages" : { "yum" : { "httpd" : [] "php" : [] "phpmysql" : [] } } "services" : { "sysvinit" : { "httpd" : { ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 26 of 32 "enabled" : "true" "ensureRunning" : "true" } } } } } } "Properties" : { "ImageId" : "i 123456" "SecurityGroups" : "MySecurityGroup" "InstanceType" : "m3medium" "KeyName" : "MyKey" "UserData": {"Fn::Base64": {"Fn::Join":[""[ "#!/bin/bash \n" "yum update y awscfnbootstrap\ n" "/opt/aws/bin/cfn init stack " { "Ref" : "AWS::StackId" } " resource MyLaunchConfig " " region " { "Ref" : "AWS::Region" } " \n" ]]}} } } For a detailed walkthrough of bootstrapping EC2 instances by using AWS CloudFormation and its related helper scripts see the Bootstrapping Applications via AWS CloudFormation whitepaper8 Using the Services Together You can use the services separately to help you provision new i nfrastructure components but you also can combine them to create a single solution This approach has clear advantages For example you can model an entire architecture including networking and database configurations directly into a n AWS CloudFormation template and then deplo y and manage your application by using AWS Elastic Beanstalk or AWS OpsWorks This approach unifies resource and application management making it easier to apply version control to your entire architecture 8 https://s3amazonawscom/cloudformation examples/BoostrappingApplicationsWithAWSCloudFormationpdf ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 27 of 32 Managing Application and Instance State After you implement a suitable process to a utomatically provision new infrastructure components your system will have the capability to create new EC2 instances and even entire new environments in a quick repeatable and predictable manner However in a dynamic cloud environment you will also need to consider how to remove EC2 instances from your environments and what impact this might have on the service that you provide to your users There are a number of reasons why an instance might be removed from your system: • The instance is terminated as a result of a hardware or software failure • The instance is terminated as a response to a “scale down ” event to remove instances from an Auto Scaling group • The instance is terminated because you’ve deployed a 
new version of your software stack by using bluegreen deployments (instances running the older version of the application are terminated after the deployment) To handle the removal of instance s without impacting your service you need to ensure that your application instances are stateless This means that all system and application state is stored and managed outside of the instances themselves There are many forms of system and application state that you need to consider when designing your system as shown in the following table State Examples Structured application data Customer orders Unstructured application data Images and documents User session data Position in the app; contents of a shopping cart Application and system logs Access logs; security audit logs Application and system metrics CPU load; network utilization Running stateless application instances means that no instance in a fleet is any different from its counterparts This offers a number of advantages: • Providing a robust service – Instances can serve any request from any user at any time I f an instance fails subsequent requests can be routed to alternative instance s while the failed instance is replaced This can be achieved with no interruption to service for any of you r users • Quicker less complicated bootstrapping – Because your instances don’t contain any dynamic state your bootstrapping process needs to concern itself only with provision ing your system up to the application layer There is no need to try to ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 28 of 32 recover state and data which is often large and therefore can significantly increase bootstrapping times • EC2 instances as a unit of deployment – Because all state is maintained off of the EC2 instances themselves you can replace the instance s while orchestrating application deployments This can simplify your deployment processes and allow new deployment techniques such as bluegreen deployments The following section describes each form of application and instance state and outlines some of the tools and techniques that you can use to ensure it is store d separately and independently from the application instances themselves Structured Application Data Most applications produce structured textual data such as customer orders in an order management system or a list of web pages in a CMS In most cases this kind of content is best stored in a database Depending on the structure of th e data and the requirements for acce ss speed and concurrency you m ight decide to use a relational databas e management system or a NoSQL data s tore In either case it is important to store this content in a durable highly available system away from the instances running your application This will ensure that the service you provide your users will not be interrupted or their data lost even in the event of an instance failure AWS offers both relational and NoSQL managed databases that you can use as a persistence layer for your applications We discuss these database options in the following sections Amazon RDS Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up operate and scale a relational database in the cloud It allows you to continue to work with the relational database engines you’re familiar with including MySQL Oracle Microsoft SQL Server or PostgreSQL This means that the code applications and operational tools that you are already using can be used with Amazon RDS Amazon RDS also 
handles time consuming database man agement tasks such as data backups recover y and patch management which frees your database administrators to pursue higher value application development or database refinements In addition Amazon RDS Multi AZ deployments increase your database availability and protect your da tabase against unplanned outages This give s your service an additional level of resiliency Amazon DynamoDB Amazon Dynamo DB is a fully managed NoSQL database service offering both document (JSON) and key value data models DynamoDB has been designed to provide consistent single digit m illisecond latency at any scale making it ideal for high ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 29 of 32 traffic applications with a requirement for low latency data access DynamoDB manage s the scaling and partitioning of infrastructure on your behalf When you creat e a table you specify how much request capacity you require If your throughput requirements change you can update this capacity as needed with no impact on service Unstructured Application Data In addition to the structured data created by most appli cations some systems also have a requirement to receive and store unstructured resources such as documents images and other binary data For example t his might be the case in a CMS where an editor upload s images and PDFs to be hosted on a website In most cases a database is not a suitable storage mechanism for this type of content Instead you can use Amazon Simple Storage Service (Amazon S3) Amazon S3 provides a highly available and durable object st ore that is well suited to storing this kind of data Once your data is stored in Amazon S3 you have the option of serving these files directly from Amazon S3 to your end users over HTTP(S) bypassing the need for these requests to go to your application instances User Session Data Many applications produce information associated with a user ’s current position within an application For example as user s browse an e commerce site they m ight start to add various items into their shopping basket This information is known as session state It would be frustrating to users if the items in their baskets disappeared without notice so it’s important to store th e session state away from the application instances themselves This ensure s that baskets remain populated even if users ’ requests are directed to an alternative instance behind your load balancer or if t he current instance is removed from service for any reason The AWS platform offers a number of services that you can use to provide a highly available session store Amazon ElastiCache Amazon ElastiCache makes it easy to deploy operate and scale an in memory data store in AWS Inmemory data store s are ideal for storing transient session data due to the low latency these technologies offer ElastiCache supports two open source in memory caching engines: • Memcached – A widely adopted memory object caching system ElastiCache is protocol compliant with Memcached which is already supported by many open source applications as an in memory sessio n storage platform ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 30 of 32 • Redis – A popular open source inmemory key value store that supports data structures such as sorted sets and lists ElastiCache supports master/ slave replication and Multi AZ which you can use to achieve cross AZ redundancy In addition to the in memory data stores offered by Memcached and 
Redis on ElastiCache some applications require a more durable storage platform for their session data For these applications Amazon DynamoDB offers a low latency highly scalable and durable solution DynamoDB replicates data across three facilities in an AWS region to provide fault tolerance in the event of a server failure or Availability Zone outage To help customers easily integrate DynamoDB as a session store within their applications AWS provides pre built DynamoDB session handlers for both Tomcat based Java applications9 and PHP applications 10 System Metrics To properly support a production system operational teams need access to system metrics that indicate the overall health of the system and the relative load under which it’s currently operating In a traditional environment this information is often obtained by logging into one of the instances and looking at OS level metrics such as system load or CPU utilization However in an environment where you have multiple instances running and these instances can appear and disappear at any moment this approach soon becomes ineffective and difficult to manage Instead you should push this data to an external monitoring system for collection and analysis Amazon CloudWatch Amazon CloudWatch is a fully managed monitoring service for AWS resources and the applications that you run on top of them You can use Amazon CloudWatch to collect and store metrics on a durable platform that is separate and independent from your own infrastructure This means that the metrics will be available to your operational teams even when the instances themselves have been terminated In addition to tracking metrics you can use Amazon CloudWatch to trigger alarms on the metrics when they pass certain thresholds You can use the alarms to notify your teams and to initiat e further automated actions to deal with issues and bring your system back within its normal operating tolerances For example an automated action could initiate an Auto Scaling policy to increase or decrease the number of instances in an Auto Scaling group 9 http://docsawsamazoncom/AWSSdkDocsJava/latest/DeveloperGuide/java dgtomcat session managerhtml 10 http://docsawsamazoncom/aws sdkphp/guide/latest/feature dynamodb session handlerhtml ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 31 of 32 By default Amazon CloudWatch can monitor a broad range of metrics across your AWS resources That said it is also important to remember that AWS doesn’t have access to the OS or applicatio ns running on your EC2 instances Because of this Amazon CloudWatch cannot automatically monit or metrics that are accessible only within the OS such as memory and disk v olume utilization If you w ant to monitor OS and application metrics by using Amazon CloudWatch you can publish your own metrics to CloudWatch through a simple API request With t his approach you can manage these metrics in the same way that you manage other native metrics including configuring alarms and associated actions You can use the EC2Config service11 to push additional OS level operating metrics into CloudWatch without the need to manually code against the CloudWatch APIs If you are running L inux AMIs you can use the set of sample Perl scripts12 provided by AWS that demonstrate how to produce and consume Amazon CloudWatch custom metrics In addition to CloudWatch you can use third party monitoring solutions in AWS to extend your monitoring capabilities Log Management Log data is used by your operational team to 
better understand how the system is performing and to diagnose any issues that might arise Log data can be produced by the application itself but also by system components lower down in your stack This might include anything from access logs produced by your w eb server to security audit logs produced by the operating system itself Your operations team need s reliable and timely access to these logs at all times regardless of whether the instance that originally produced the log is still in existence For this reason it’s important to move log data from the instance to a mor e durable storage platform as close to real time as possible Amazon CloudWatch Logs Amazon CloudWatch Logs is a service that allows you to quickly and easily move your system and applicati on logs from the EC2 instances them selves to a centrali zed durable storage platform ( Amazon S3) This ensures that this data is available even when the instance itself has been terminated You also have control over the log retention policy to ensure that all logs are retained for a specified period of time The CloudWat ch Logs service provides a log management agent that you can install onto your EC2 instances to manage the ingestion of your logs into the log management service 11 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/UsingConfig_Wi nAMIhtml 12 http://docsawsamazoncom/AmazonCloudWatch/latest/DeveloperGuide/mon scripts perlhtml ArchivedAmazon Web Services – Managing Your AWS Infrastructure at Scale February 2015 Page 32 of 32 In addition to moving your logs to durable storage the CloudWatch Logs service also allows you to monitor your logs in near real time for specific phrases values or patterns (metrics) You can use t hese metrics in the same way as any other CloudWatch metric s For example you can create a CloudWatch alarm on the number of errors being thrown by your application or when certain suspect actions are detected in your security audit logs Conclusion This whitepaper showed you how to accomplish the following: • Quickly provision new infrastructure components in an automated repeatable and predictable manner • Ensure that no EC2 instance in your environment is unique and that all instances are stateless and therefore easily replaced Having these capabilities in place allows you to think differently about how you provision and manage infrastructure components when compared to traditional environments Instead of manually building each instance and maintaining consistency through a set of operational checks and balances you can treat your infrastructure as if it w ere software By specifying the desired end state of your infrastructure through the software based tools and process es described in this whitepaper you can fundamentally change the way your infrastructure is managed and you can take full advantage of the dynamic elastic and automated nature of the AWS cloud Further Reading • AWS Elastic Beanstalk Documentation • AWS OpsWorks Documentation • Bootstrapping Applications via AWS CloudFormation whitepaper • Using Chef with AWS CloudFormation • Integrating AWS CloudFormation with Puppet
General
Microservices_on_AWS
ArchivedImplementing Microservice s on AWS First Published December 1 2016 Updated Novembe r 9 2021 This version has been archived For the latest version of this document refer to https://docsawsamazoncom/whitepapers/latest/ microservicesonaws/microservicesonawspdfArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not pa rt of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 5 Microservices architecture on AWS 6 User interface 6 Microservices 7 Data store 9 Reducing operational complexity 10 API implementation 11 Serverless microservices 12 Disaster recovery 14 Deploying Lambda based applications 15 Distributed systems components 16 Service discovery 16 Distributed data management 18 Config uration management 21 Asynchronous communication and lightweight messaging 21 Distributed monitoring 26 Chattiness 33 Auditing 34 Resources 37 Conclusion 38 Document Revisions 39 Contributors 39 ArchivedAbstract Microservices are an architectural and organizational approach to software development created to speed up deployment cycles foster innovation and ownership improve maintainability and scalability of software applications and scale organizations deliver ing software and services by using an agile approach that helps teams work independently With a microservices approach software is composed of small services that communicate over well defined application programming interface s (APIs ) that can be deploye d independently These services are owned by small autonomous teams This agile approach is key to successfully scale your organization Three common patterns have been observe d when AWS customers build microservices: API driven event driven and data str eaming This whitepaper introduce s all three approaches and summarize s the common characteristics of microservices discuss es the main challenges of building microservices and describe s how product teams can use Amazon Web Services (AWS) to overcome these challenges Due to the rather involved nature of various topics discussed in this whitepaper including data store asynchronous communication and service discovery the reader is encouraged to consider specific requirements and use cases of their applications in addition to the provided guidance prior to making architectural choices ArchivedAmazon Web Services Implementing Microservices on AWS 5 Introduction Microservices architectures are not a completely new approach to software engineering but rather a combination of various successful and proven concepts such as: • Agile software development • Service oriented architectures • APIfirst design • Continuous integration/ continuous delivery (CI/CD) In many cases design patterns of the Twelve Factor App are used for microservices This whitepaper first describe s different aspects of a highly scalable fault tolerant microservices architecture (user interface 
microservices implementation and data store) and how to build it on AWS using container technologies It then recommend s the AWS services for implementing a typical serverless microservices architecture to reduce operational complexity Serverless is defined as an operational model by the following tenets: • No infrastructure to provision or manage • Automatically scaling by unit of consumption • Pay for value billing model • Builtin availability and fault tolerance Finally th is whitepaper covers the overall system and discusses the cross service aspects of a microservices architecture such as distributed monitoring and auditing data consistency and asynchronous communication This whitepaper only focus es on workloads running in the AWS Cloud It doesn’t cover hybrid scenarios or migration strategies For more information about migration refer to the Container Migrat ion Methodology whitepaper ArchivedAmazon Web Services Implementing Microservices on AWS 6 Microservices architecture on AWS Typical monolithic applications are built using different layers —a user interface (UI) layer a business layer and a persistence layer A central idea of a microservices architecture is to split functionalities into cohesive verticals —not by technological layers but by implementing a specifi c domain The following f igure depicts a referen ce architecture for a typical microservices application on AWS Typical microservices application on AWS User interface Modern web applications often use JavaScript frameworks to implement a single page application that communicates with a representational state transfer (REST) or RESTful ArchivedAmazon Web Services Implementing Microservices on AWS 7 API Static web content can be served using Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront Because clients of a microservice are served from the closest edge location and get responses either from a cache or a proxy server with optimized connections to the origin latencies can be significantly reduced However microservices running close to each other don’t benefit from a content delivery network In some cases this approach might actually add additional latency A best practice is to implement other caching mechanisms to reduce chattiness and minimize latencies For more information refer to the Chattiness topic Microservices APIs are the front door of microservices which means that APIs serve as the entry point for applications logic behind a set of programmatic interfaces typically a REST ful web services API This API accepts and proces ses calls from clients and might implement functionality such as traffic management request filtering routing caching authentication and authorization Microservices implementation AWS has integrated building blocks that support the development of microservices Two popular approaches are using AWS Lambda and Docker containers with AWS Fargate With AWS Lambda you upload your code and let Lambda take care of everything required to run and scale the implementatio n to meet your actual demand curve with high availability No administration of infrastructure is needed Lambda supports several programming languages and can be invok ed from other AWS services or be called directly from any web or mobile application One of the biggest advantages of AWS Lambda is that you can move quickly: you can focus on your business logic because security and scaling are managed by AWS Lambda’s opinionated approach drives the scalable platform A common approach to reduce operational efforts for 
deployment is container based deployment Container technologies like Docker have increased in popularity in the last few years due to benefits like portability productivity and efficiency The learning curve with containers can be steep and you have to think about security fixes for your Docker images and monitoring Amazon Elastic Container Service (Amazon ECS ) and Amazon ArchivedAmazon Web Services Implementing Microservices on AWS 8 Elastic Kubernetes Service (Amazon EKS ) eliminate the need to install operate and scale your own cluster management infrastructure With API calls you can launch and stop Docker enabled applications query the complete state of your cluster and access many familiar features like security groups Load Balancing Amazon Elastic Block Store (Amazon EBS) volumes and AWS Identity and Access Management (IAM) roles AWS Fargate is a serverless compute engine for containers that works with both Amazon ECS and Amazon EKS With Fargate you no longer have to worry about provisioning enough compute resources for your container applications Fargate can launch tens of thousands of containers and easily scale to run your most mission critical applications Amazon ECS supports container placement strategies and constraints to customize how Amazon ECS places and ends tasks A task placement constraint is a rule that is considered during task placement You can associate attributes which are essentially keyvalue pairs to your container instances and then use a constraint to pl ace tasks based on these attributes For example you can use constraints to place certain microservices based on instance type or instance capability such as GPU powered instances Amazon EKS runs up todate versions of the open source Kubernetes softwar e so you can use all the existing plugins and tooling from the Kubernetes community Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment whether running in on premises data centers or public clouds Amazon EKS integrates IAM with Kubernetes enabling you to register IAM entities with the native authentication system in Kubernetes There is no need to manually set up credentials for authenticating with the Kubernetes control plane The IAM integration enable s you to use IAM to directly authenticate with the control plane itself a nd provide fine granular access to the public endpoint of your Kubernetes control plane Docker images used in Amazon ECS and Amazon EKS can be stored in Amazon Elastic Container Registry (Amazon ECR ) Amazon ECR eliminates the need to operate and scale the infrastructure required to power your container registry Continuous integration and continuous delivery (CI/C D) are best practice s and a vital part of a DevOps initiative that enables rapid software changes while maintaining system stability and security However this is out of scope for this whitepaper For m ore ArchivedAmazon Web Services Implementing Microservices on AWS 9 information refer to the Practicin g Continuous Integration and Continuous Delivery on AWS whitepaper Private links AWS PrivateLink is a highly available scalable technology that enables you to privately connect your virtual private cloud (VPC) to supported AWS services services hosted by other AWS accounts (VPC endpoi nt services) and supported AWS Marketplace partner services You do not require an internet gateway network address translation device public IP address AWS Direct Connect connection or VPN connection to communicate with the service 
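As a rough illustration of how such a private connection is wired together, the following AWS CLI sketch publishes a microservice that sits behind a Network Load Balancer as a VPC endpoint service and then creates an interface endpoint for it in a consumer VPC. The load balancer ARN, VPC, subnet, security group, and service name shown are placeholders, and the sketch assumes the provider and consumer sides run in the same Region.

# Provider side: expose the microservice's Network Load Balancer as an endpoint service
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/orders-nlb/abc123 \
  --acceptance-required

# Consumer side: create an interface endpoint that points at the published service
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-0abc1234 \
  --security-group-ids sg-0abc1234

The service name used on the consumer side is returned by the first command once the endpoint service has been created.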
Traffic between your VPC and the service does not leave the Amazon network Private links are a great way to increase the isolation and security of microservices architecture A microservice for example could be deployed in a totally separate VPC fronted by a load balancer and exposed to other microservices through an AWS PrivateLink endpoint With this setup using AWS PrivateLink the network traffic to and from the microservice never traverses the public internet One use case for such isolation includes regulatory compliance for services handling sensitive data such as PCI HIPPA and EU/US Privacy Shield Additionally AWS PrivateLink allows connecting microservices across different accounts and Amazon VPCs with no need for firewall rules path definitions or route tables; simplifying network management Utilizing PrivateLink software as a service (SaaS ) providers and ISVs can offer their microservices based solutions with complete operational isolation and secure access as well Data store The data store is used to persist data needed by the microservices Popular stores for session data are in memory caches such as Memcached or Redis AWS offers both technologies as part of the managed Amazon ElastiCache service Putting a cache between application servers and a d atabase is a common mechanism for reducing the read load on the database which in turn may enable resources to be used to support more writes Caches can also improve latency Relational databases are still very popular to store structured data and business objects AWS offers six database engines (Microsoft SQL Server Oracle MySQL ArchivedAmazon Web Services Implementing Microservices on AWS 10 MariaDB PostgreSQL and Amazon Aurora ) as managed services through Amazon Relational Database Service (Amazon RDS ) Relational databases however are not designed for endless scale which can make it difficult and time intensive to apply techniques to support a high number of queries NoSQL databases have been designed to favor scalability performance and availability over the consistency of relational databases One important element of NoSQL databases is that they typically don’t enforce a strict schema Data is distributed over partitions that can be scaled horizontally and is retrieved using partition keys Because individual microservices are designed to do one thing well they typically have a simplified data model that might be well suited to NoSQL persistence It is important to understand that NoSQL databases have different access patterns than relational databases For example it is not possible to join tables If this is necessary the logic has to be implemented in the application You can use Amazon DynamoDB to create a database table that can store and retrieve any amount of data and serve any level of request traffic DynamoDB delivers single digit millisecond performance however there are cert ain use cases that require response times in microseconds Amazon DynamoDB Accelerator (DAX) p rovides caching capabilities for accessing data DynamoDB also offers an automatic scaling feature to dynamic ally adjust throughput capacity in response to actual traffic However there are cases where capacity planning is difficult or not possible because of large activity spikes of short duration in your application For such situations DynamoDB provides an on demand option which offers simple pay perrequest pricing DynamoDB on demand is capable of serving thousands of requests per second instantly without capacity planning Reducing operational complexity The 
architecture previously described in this whitepaper is already using managed services but Amazon Elastic Compute Cloud (Amazon EC2 ) instances still need to be managed The operational efforts needed to run maintain and monitor microservices can be further reduced by using a fully serverless architecture ArchivedAmazon Web Services Implementing Microservices on AWS 11 API implementation Architecting deploying monitoring continuously improving and maintaining an API can be a time consuming task Sometimes different versions of APIs need to be run to assure backward compatibility for all clients The different stages of the development cycle ( for example development testing and production) further multiply operational efforts Authorization is a critical feature for all APIs but it is us ually complex to build and involves repetitive work When an API is published and becomes successful the next challenge is to manage monitor and monetize the ecosystem of thirdparty developers utilizing the APIs Other important features and challenges include throttling requests to protect the backend services caching API responses handling request and response transformation and generating API definitions and documentation with tools such as Swagger Amazon API Gateway addresses those challenges and reduces the operational complexity of creating and maintaining RESTful APIs API Gateway allows you to create your APIs programmatically by importing Swagger definitions using either the AWS API or the AWS Management Console API Gateway serves as a front door to any web application running on Amazon EC2 Amazon ECS AWS Lambda or in any on premises environment Basically API Gateway allows you to run APIs without having to manage servers The following f igure illustrates how API Gateway handles API calls and interacts with other components Requests from mobile devices websites or other backend services are routed to the closest CloudFront Point of Presence to minimize latency and provide optimum user experience ArchivedAmazon Web Services Implementing Microservices on AWS 12 API Gateway call flow Serverless microservices “No server is easier to manage than no server ” — AWS re:Invent Getting rid of servers is a great way to eliminate operational complexity Lambda is tightly integrated with API Gateway The ability to make synchronous calls from API Gateway to Lambda enables the creation of fully serverless applications and is described in detail in the Amazon API Gateway Developer Guide The following figure shows the architecture of a serverless microservice with AWS Lambda where the complete service is built out of managed services which eliminates the architectural burden to design for scale and high availability and eliminates the operational efforts of running and monitoring the microservice’s underlying infrastructure ArchivedAmazon Web Services Implementing Microservices on AWS 13 Serverless microservice using AWS Lambda A similar implementation that is also based on serverless services is shown in the following figure In this architecture Docker containers are used with Fargate so it’s not necessary to care about the underlying infrastruc ture In addition to DynamoDB Amazon Aurora Serverless is used which is an ondemand autoscaling configuration for Aurora (MySQL compatible edition) where the database will automatically start up shut down and scale capacity up or down based on your application's needs ArchivedAmazon Web Services Implementing Microservices on AWS 14 Serverless microservice using Fargate Disaster 
recovery As previously mentioned in the introduction of this whitepaper typical microservices applications are implemented using the Twelve Factor Application patterns The Processes section states that “Twelve factor processes are stateless and share nothing Any data that needs to persist must be sto red in a stateful backing service typically a database” For a typical microservices architecture this means that the main focus for disaster recovery should be on the downstream services that maintain the state of the application For example t hese can be file systems databases or queues for example When creating a disaster recovery strategy organizations most commonly plan for the recovery time objective and recovery point objective Recovery time objective is the maximum acceptable delay between the interruption of service and restoration of service This objective determines what is considered an acceptable time window when service is unavailable and is defined by the organization ArchivedAmazon Web Services Implementing Microservices on AWS 15 Recovery point objective is the maximum acceptable amount of time since the last data recovery point This objective determines what is considered an acceptable loss of data between the last recovery point and the interruption of service and is defined by the organization For more information refer to the Disaster Recovery of Workloads on AWS: Recovery in the Cloud whitepaper High availability This section take s a closer l ook at high availability for different compute options Amazon EKS runs Kubernetes control and data plane instances across multiple Availability Zones to ensure high availability Amazon EKS automatically detects and replaces unhealthy control plane instan ces and it provides automated version upgrades and patching for them This control plane consists of at least two API server nodes and three etcd nodes that run across three Availability Zones within a region Amazon EKS uses the architecture of AWS Regio ns to maintain high availability Amazon ECR hosts images in a highly available and high performance architecture enabling you to reliably deploy images for container applications across Availability Zones Amazon ECR works with Amazon EKS Amazon ECS and AWS Lambda simplifying development to production workflow Amazon ECS is a regional service that simplifies running containers in a highly available manner across multiple Availability Zones within a n AWS Region Amazon ECS includes multiple scheduling strategies that place containers across your clusters based on your resource needs (for example CPU or RAM) and availability requirements AWS Lambda runs your function in multiple Availability Zones to ensure that it is available to process events in cas e of a service interruption in a single zone If you configure your function to connect to a virtual private cloud ( VPC) in your account specify subnets in multiple Availability Zones to ensure high availability Deploying Lambda based applications You can use AWS CloudFormation to define deploy and configure serverless applications ArchivedAmazon Web Services Implementing Microservices on AWS 16 The AWS Serverless Application M odel (AWS SAM ) is a convenient way to define serverless applications AWS SAM is natively supported by CloudFormation and defines a simplified syntax for expressing serverless resources To deploy your application specify the resources you need as part of your application along with their associated permissions policies in a CloudFormation template package your 
deployment artifacts and deploy the template Based on AWS SAM SAM Local is an AWS Command Line Interface tool that provides an environm ent for you to develop test and analyze your serverless applications locally before uploading them to the Lambda runtime You can use SAM Local to create a local testing environment that simulates the AWS runtime environment Distributed systems componen ts After looking at how AWS can solve challenges related to individual microservices the focus moves to on cross service challenges such as service discovery data consistency asynchronous communication and distributed monitoring and auditing Service discovery One of the primary challenges with microservice architecture s is enabl ing services to discover and interact with each other The distributed characteristics of microservice architectures not only make it harder for services to communicate but also presents other challenges such as checking the health of those systems and announcing when new applications become available You also must decide how and where to store meta information such as configuration data that can be used by applicat ions In this section several techniques for performing service discovery on AWS for microservices based architectures are explored DNS based service discovery Amazon ECS now includes integrated service discovery that enables your containerized services to discover and connect with each other Previously to ensure that services were able to discover and connect with each other you had to configure and run your own service discovery system based on Amazon Route 53 AWS Lambda and ECS event stream s or connect every service to a load balancer ArchivedAmazon Web Services Implementing Microservices on AWS 17 Amazon ECS creates and manages a registry of service names using the Route 53 Auto Naming API Names are automatically mapped to a set of DNS records so that you can refer to a service by name in your code and write DNS queries to have the name resolve to the service’s endpoint at runtime You can specify health check conditions in a service's task definition and Amazon ECS ensures that only healthy service endpoints are returned by a service lookup In addition you can also use unified service discovery for services managed by Kubernetes To enable this integration A WS contributed to the External DNS project a Kubernetes incubator project Another option is to use the capabilities of AWS Cloud Map AWS Cloud Map extends the capabilities of the Auto Naming APIs by providing a service registry for resources such as Internet Protocols ( IPs) Uniform Resource Locators ( URLs ) and Amazon Resource Names ( ARNs ) and offering an APIbased service discovery mechanism with a faster change propagation and the ability to use attributes to narrow down the set of discovered resources Existing Route 53 Auto Naming resources are upgraded automatically to AWS Cloud Map Third party software A different approach to implementing service discovery is using third party software such as HashiCorp Consul etcd or Netflix Eureka All three examples are distributed reliable keyvalue stores For HashiCorp Consul there is an AWS Quick Start that sets up a flexible scalable AWS Cloud environment an d launches HashiCorp Consul automatically into a configuration of your choice Service meshes In an advanced microservices architecture the actual application can be composed of hundreds or even thousands of services Often the most complex part of the application is not the actual services themselves but the 
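Enabling the integrated mechanism is largely a matter of creating a namespace, registering a discovery service in it, and pointing the ECS service at that registry. The following sketch shows the general shape using the AWS CLI; the namespace, cluster, task definition, and ARNs are illustrative placeholders rather than a complete deployment.

# Create a private DNS namespace for service discovery inside the VPC
aws servicediscovery create-private-dns-namespace \
  --name internal.example --vpc vpc-0abc1234

# Register a discovery service that maps a name to DNS A records
aws servicediscovery create-service \
  --name orders \
  --dns-config "NamespaceId=ns-0123456789abcdef0,DnsRecords=[{Type=A,TTL=60}]" \
  --health-check-custom-config FailureThreshold=1

# Launch the ECS service and associate it with the discovery service
aws ecs create-service \
  --cluster demo-cluster \
  --service-name orders \
  --task-definition orders:1 \
  --desired-count 2 \
  --service-registries "registryArn=arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef0"

Tasks started by the ECS service then register and deregister themselves automatically, so consumers can resolve a name such as orders.internal.example without placing a load balancer in between.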
communication between those services Service meshes are an additional layer for handling interservice communication which is responsible for monit oring and controlling traffic in microservice s architectures This enables tasks like service discovery to be completely handled by this layer Typically a service mesh is split into a data plane and a control plane The data plane consists of a set of intelligent proxies that are deployed with the application code as a ArchivedAmazon Web Services Implementing Microservices on AWS 18 special sidecar proxy that intercepts all network communication between microservices The control plane is responsible for communicating with the proxies Service meshes are transpare nt which means that application developers don’t have to be aware of this additional layer and don’t have to make changes to existing application code AWS App Mesh is a service mesh that provides applicati onlevel networking to enable your services to communicate with each other across multiple types of compute infrastructure App Mesh standardizes how your services communicate giving you complete visibility and ensuring high availability for your applicat ions You can use App Mesh with existing or new microservices running on Amazon EC2 Fargate Amazon ECS Amazon EKS and self managed Kubernetes on AWS App Mesh can monitor and control communications for microservices running across clusters orchestration systems or VPCs as a single application without any code changes Distributed data management Monolithic applications are typically backed by a large relational database which defines a single data model common to all application components In a microservices approach such a central database would prevent the goal of building decentralized and independent components Each microservice component should have its own data persistence layer Distributed data management however rais es new challenges As a consequence of the CAP theorem distributed microservice architectures inherently trade off consistency for performance and need to embrace eventual consistency In a distributed system business transactions can span multiple microservices Because they cannot use a single ACID transaction you can end up with partial executions In this case we wou ld need some control logic to redo the already processed transactions For this purpose t he distributed Saga pattern is commonly used In the case of a failed business transaction Saga orchestrates a series of compensating transactions that undo the changes that were made by the preceding transactions AWS Step Functions make it easy to implement a Saga execution coordinator as shown in the following figure ArchivedAmazon Web Services Implementing Microservices on AWS 19 Saga execution coordinator Building a centralized store of critical reference data that is curated by core data management tools and procedures provides a means for microservices to synchronize their critical data and possibly roll back state Using AWS Lambda with scheduled Amazo n CloudWatch Events you can build a simple cleanup and deduplication mechanism It’s very common for state changes to affect more than a single microservice In such cases event sourcing has proven to be a useful pattern The core idea behind event sourcing is to represent and persist every application change as an event record Instead of persisting applicatio n state data is stored as a stream of events Database transaction logging and version control systems are two well known examples for event sourcing Event 
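To make the coordinator idea more concrete, the following is a minimal Amazon States Language sketch of a saga in which each business step names a compensating step in its Catch clause. The Lambda function names and ARNs (ReserveInventory, ChargePayment, ReleaseInventory, CancelOrder) are hypothetical, and a production workflow would also add Retry policies and more specific error matching. The definition can be registered with aws stepfunctions create-state-machine together with an appropriate execution role.

{
  "Comment": "Order saga with compensating transactions (sketch only)",
  "StartAt": "ReserveInventory",
  "States": {
    "ReserveInventory": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ReserveInventory",
      "Catch": [ { "ErrorEquals": [ "States.ALL" ], "Next": "CancelOrder" } ],
      "Next": "ChargePayment"
    },
    "ChargePayment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ChargePayment",
      "Catch": [ { "ErrorEquals": [ "States.ALL" ], "Next": "ReleaseInventory" } ],
      "End": true
    },
    "ReleaseInventory": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ReleaseInventory",
      "Next": "CancelOrder"
    },
    "CancelOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:CancelOrder",
      "End": true
    }
  }
}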
sourcing has a couple of benefits: state can be determined and reconstructed for any point in time It naturally produces a persistent audit trail and also facilitates debugging In the context of microservices architectures event sourcing enables decoupling different parts of an application by using a publish and subscribe pattern and it feeds the s ame event data into different data models for separate microservices Event sourcing is frequently used in conjunction with the Command Query Responsibility Segregation (CQRS) pattern to decouple read from write workloads and optimize both for performance scalability and security In traditional data management systems commands and queries are run against the same data repository The following figure shows how the event sourcing patter n can be implemented on AWS Amazon Kinesis Data Streams serves as the main component of the central event store which captures application changes as events and persists them on ArchivedAmazon Web Services Implementing Microservices on AWS 20 Amazon S3 The figure depicts three different microservices composed of API Gateway AWS Lambda and DynamoDB The arrows indicate the flow of the events: when Microservice 1 experiences an event state change it publishes an event by writing a message into Kinesis Data Streams All microservices run their own Kinesis Data Streams application in AWS Lambda which reads a copy of the message filters it based on relevancy for the microservice and possibly forwards it for further processing If your function re turns an error Lambda retries the batch until processing succeeds or the data expires To avoid stalled shards you can configure the event source mapping to retry with a smaller batch size limit the number of retries or discard records that are too old To retain discarded events you can configure the event source mapping to send details about failed batches to an Amazon Simple Queue Service (SQS ) queue or Amazon Simple Notification Service (SNS) topic Event sourcing pattern on AWS Amazon S3 durably stores all events across all microservices and is the single source of truth when it comes to debugging recovering application state or auditing application changes There are two primary reasons why records may be delivered more than one time to your Kinesis Data Streams application: producer retries and consumer retries Your application must anticipate and appropriately handle processing individual records multiple times ArchivedAmazon Web Services Implementing Microservices on AWS 21 Configuration management In a typical microservices architecture with dozens of different services each service needs access to several downstream services and infrastructure components that expose data to the service Examples could be message queues databases and other micros ervices One of the key challenges is to configure each service in a consistent way to provide information about the connection to downstream services and infrastructure In addition the configuration should also contain information about the environment in which the service is operating and restarting the application to use new configuration data shouldn’t be necessary The third principle of the Twelve Factor App patterns covers this topic: “ The twelve factor app stores config in environment variables (often shortened to env vars or env)” For Amazon ECS environment variables can be passed to the container by using the environment container definition parameter which maps to the env option to docker run Environment variables can be 
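The retry, bisect, and failure-destination behavior described above is configured on the Lambda event source mapping itself. The AWS CLI sketch below assumes a hypothetical order-projection function, an order-events Kinesis stream, and an order-events-dlq queue; the ARNs, batch size, and limits are examples to adapt to your own workload.

# Create an event source mapping that limits retries, discards records older than
# one hour, bisects the batch when the function returns an error, and sends details
# about failed batches to an SQS queue
aws lambda create-event-source-mapping \
    --function-name order-projection \
    --event-source-arn arn:aws:kinesis:us-east-1:123456789012:stream/order-events \
    --starting-position LATEST \
    --batch-size 100 \
    --maximum-retry-attempts 3 \
    --maximum-record-age-in-seconds 3600 \
    --bisect-batch-on-function-error \
    --destination-config '{"OnFailure":{"Destination":"arn:aws:sqs:us-east-1:123456789012:order-events-dlq"}}'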
passed to your containers in bulk by using the environme ntFiles container definition parameter to list one or more files containing the environment variables The file must be hosted in Amazon S3 In AWS Lambda the runtime makes environment variables available to your code and sets additional environment varia bles that contain information about the function and invocation request For Amazon EKS you can define environment variables in the env field of the configuration manifest of the corresponding pod A different way to use env variables is to use a ConfigMa p Asynchronous communication and lightweight messaging Communication in traditional monolithic applications is straightforward —one part of the application uses method calls or an internal event distribution mechanism to communicate with the other parts If the same application is implemented using decoupled microservices the communication between different parts of the application must be implemented using network communication REST based communication The HTTP/S protocol is the most popular way to implement synchronous communication between microservices In most cases RESTful APIs use HTTP as a ArchivedAmazon Web Services Implementing Microservices on AWS 22 transport layer The REST architectural style relies on stateless communication uniform interfaces and standard methods With API Gateway you can create an API that acts as a “front door” for applications to access data business logic or functionality from your backend services API developers can create APIs that access AWS or other web services as well as data stored in the AWS Cloud An API object defined with the API Gateway service is a group of resources and methods A resource is a typed object within the domain of an API and may have associated a data model or relationships to other resources Each resource can be configured to respond to one or more methods that is standard HTTP verbs such as GET POST or PUT REST APIs can be deployed to different stages and versioned as well as cloned to new versions API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls including traffic management authorization and access control monitoring and API version management Asynchronous messaging and event passing Message passing is a n additional pattern used to implement communication between microservices Services communicate by exchanging messages by a queue One major benefit of this communication style is that it’s not necessary to have a service discovery and services are loosely couple d Synchronous systems are tightly coupled which means a problem in a synchronous downstream dependency has immediate impact on the upstream callers Retries from upstream callers can quickly fan out and amplify problems Depending on specific requirements like protocols AWS offers different services which help to implement this pattern One possible implementation uses a combination of Amazon Simple Queue Service (Amazon SQS ) and Amazon Simple Notification Service (Amazon SNS) Both services work closely together Amazon SNS enable s applications to send messages to multiple subscribers through a push mechanism By using Amazon SNS and Amazon SQS together one message can be delivered to multiple consumers The following figure demonstrates the integration of Amazon SNS and Amazon SQS ArchivedAmazon Web Services Implementing Microservices on AWS 23 Message bus pattern on AWS When you sub scribe an SQS queue to an SNS topic you can publish a message to 
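One possible sketch of this fan-out pattern with the AWS CLI is shown below. The topic, queue, and account identifiers are placeholders, and the SQS access policy that authorizes Amazon SNS to deliver into the queue is omitted for brevity.

# Create a topic and a queue, then subscribe the queue to the topic
aws sns create-topic --name order-events
aws sqs create-queue --queue-name order-events-billing

# Look up the queue ARN needed for the subscription
aws sqs get-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/order-events-billing \
    --attribute-names QueueArn

# Subscribe the queue to the topic (the queue access policy must also allow
# sns.amazonaws.com to send messages to this queue)
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:order-events \
    --protocol sqs \
    --notification-endpoint arn:aws:sqs:us-east-1:123456789012:order-events-billing

# Publish one message; every subscribed queue receives its own copy
aws sns publish \
    --topic-arn arn:aws:sns:us-east-1:123456789012:order-events \
    --subject OrderCreated \
    --message '{"orderId":"1234"}'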
the topic and Amazon SNS sends a message to the subscribed SQS queue The message contains subject and message published to the topic along with metadata information in JSON format Another option for building event driven architectures with event sources spanning internal applications third party SaaS applications and AWS services at scale is Amazon EventBridge A fully managed event bus service EventBridge receives events from disparate sources identifies a target based on a routing rule and delivers near realtime data to that target including AWS Lambda Amazon SNS and Amazon Kinesis Streams among others An inbound event can also be customized by input transformer prior to delivery To develop event driven applications sig nificantly faster EventBridge schema registries collect and organize schemas including schemas for all events generated by AWS services Customers can also d efine custom schemas or use an infer schema option to discover schemas automatically In balance however a potential trade off for all th ese features is a relatively higher latency value for EventBridge delivery Also the default throughput and quotas for EventBridge may require an increase through a support request based on use case A different implementation strategy is based on Amazon MQ which can be used if existing software is using open standard APIs and protocols for messaging including JMS NMS AMQP STOMP MQTT and WebSocket Amazon SQS exposes a custom ArchivedAmazon Web Services Implementing Microservices on AWS 24 API which means if you have an existing application that you want to migrate from—for example an onpremises environment to AWS —code changes are necessary With Amazon MQ t his is not necessary in many cases Amazon MQ manages the administration and maintenance of ActiveMQ a popular open source message broker The underlying infrastructure is automatically provisioned for high availability and message durability to support the reliability of your applications Orchestration and state management The distributed character of microservices makes it challenging to orchestrate workflows when multiple microservices are involved Developers might be tempted to add orchestra tion code into their services directly This should be avoided because it introduces tighter coupling and makes it harder to quickly replace individual services You can use AWS Step Functions to build applications from individual components that each perform a discrete function Step Fu nctions provides a state machine that hides the complexities of service orchestration such as error handling serialization and parallelization This lets you scale and change applications quickly while avoiding additional coordination code inside servic es Step Functions is a reliable way to coordinate components and step through the functions of your application Step Functions provides a graphical console to arrange and visualize the components of your application as a series of steps This makes it easier to build and run distributed services Step Functions automatically starts and tracks each step and retries when there are errors so your application executes in order and as expected Step Functions logs the state of each step so when something goes wrong you can diagnose and debug problems quickly You can change and add steps without even writing code to evolve your application and innovate faster Step Functions is part of the AWS serverless platform and supports orchestration of Lambda functions as well as applications based on compute resources such as Amazon 
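A minimal EventBridge sketch along these lines is shown below. The custom event source, detail type, and Lambda target are hypothetical, and the target function additionally needs a resource-based permission (lambda add-permission) that allows events.amazonaws.com to invoke it.

# Route OrderCreated events from a custom application source to a Lambda target
aws events put-rule \
    --name order-created \
    --event-pattern '{"source":["com.example.orders"],"detail-type":["OrderCreated"]}'

aws events put-targets \
    --rule order-created \
    --targets 'Id=invoice-function,Arn=arn:aws:lambda:us-east-1:123456789012:function:CreateInvoice'

# Publish a test event onto the default event bus
aws events put-events \
    --entries '[{"Source":"com.example.orders","DetailType":"OrderCreated","Detail":"{\"orderId\":\"1234\"}"}]'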
EC2 Amazon EKS and Amazon ECS and additional services like Amazon SageMaker and AWS Glue Step Functions manages the operations and underlying infrastructure for you to help ensure that your application is available at any scale ArchivedAmazon Web Services Implementing Microservices on AWS 25 To build workflows Step Functions uses the Amazon States Language Workflows can contain sequential or parallel steps as well as branching steps The following figure shows an example workflow for a microservices architecture combining sequential and parallel steps Invoking such a workflow can be done either through the Step Functions API or with API Gateway An example of a microservices workflow invoked by Step Functions ArchivedAmazon Web Services Implementing Microservices on AWS 26 Distributed monitoring A microservices architecture consists of many different distributed parts that have to be monitored You can use Amazon CloudWatch to collect and track metrics centralize and monitor log files set alarms and automatically react to changes in your AWS environment CloudWatch can monitor AWS resources such as Amazon EC2 instances DynamoDB tables and Amazon RDS DB instances as well as custom metrics generated by your applications and services and any log files your applications generate Moni toring You can use CloudWatch to gain system wide visibility into resource utilization application performance and operational health CloudWatch provides a reliable scalable and flexible monitoring solution that you can start using within minutes You no longer need to set up manage and scale your own monitoring systems and infrastructure In a microservices architecture the capability of monitoring custom metrics using CloudWatch is an additional benefit because developers can decide which metrics should be collected for each service In addition dynamic scaling can be implemented based on custom metrics In addition to Amazon Cloudwat ch you can also use CloudWatch Container Insights to collect aggregate and summari ze metrics and logs from your containeri zed applications and microservices CloudWatch Container Insights automatically collects metrics for many resources such as CPU m emory disk and network and aggregate as CloudWatch metrics at the cluster node pod task and service level Using CloudWatch Container Insights you can gain access to CloudWatch Container Insights dashboard metrics It also provides diagnostic inform ation such as container restart failures to help you isolate issues and resolve them quickly You can also set CloudWatch alarms on metrics that Container Insights collects Container Insights is available for Amazon ECS Amazon EKS and Kubernetes platforms on Amazon EC2 Amazon ECS support includes support for Fargate Another popular option especially for Amazon EKS is to use Prometheus Prometheus is an open source monitoring and alerting toolkit that is often used in combination with Grafana to visualize the collected metrics Many Kubernetes components store metrics at /metrics and Prometheus can scrape these metrics at a regular interval ArchivedAmazon Web Services Implementing Microservices on AWS 27 Amazon Managed Service for Prometheus (AMP) is a Prometheus compatible monitoring service that enables you to monitor containerized applica tions at scale With AMP you can use the open source Prometheus query language (PromQL) to monitor the performance of containerized workloads without having to manage the underlying infrastructure required to manage the ingestion storage and querying of operational 
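For example, a service can publish its own business metric and alarm on it. The namespace, dimension, and SNS topic in this sketch are placeholders; note that put-metric-data and put-metric-alarm use slightly different dimension syntax.

# Publish a custom business metric from a microservice
aws cloudwatch put-metric-data \
    --namespace "Microservices/Checkout" \
    --metric-name OrderErrors \
    --dimensions Service=checkout \
    --value 1 \
    --unit Count

# Alarm and notify an SNS topic when errors are sustained
aws cloudwatch put-metric-alarm \
    --alarm-name checkout-order-errors-high \
    --namespace "Microservices/Checkout" \
    --metric-name OrderErrors \
    --dimensions Name=Service,Value=checkout \
    --statistic Sum \
    --period 60 \
    --evaluation-periods 3 \
    --threshold 5 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:oncall-alerts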
metrics You can collect Prometheus metrics from Amazon EKS and Amazon ECS environments using AWS Distro for OpenTelemetry or Prometheus servers as collection agents AMP is often used in combination with Amazon Managed Service for Grafana (A MG) AMG makes it easy to query visualize alert on and understand your metrics no matter where they are stored With AMG you can analy ze your metrics logs and traces without having to provision servers configure and update software or do the heavy lifting involved in securing and scaling Grafana in production Centralizing logs Consistent logging is critical for troubleshooting and identifying issues Microservices enable teams to ship many more releases than ever before and encourage engineering teams to run experiments on new features in production Understanding customer impact is crucial to gradually improving an application By default m ost AWS services centralize th eir log files The primary destinations for log files on AWS are Amazon S3 and Amazon CloudWatch Logs For applications running on Amazon EC2 instances a da emon is available to send log files to CloudWatch Logs Lambda functions natively send their log output to CloudWatch Logs and Amazon ECS includes support for the awslogs log driver that enables the centralization of container logs to CloudWatch Logs For Amazon EKS either Fluent Bit or Fluentd can forward logs from the individual instances in the cluster to a centralized logging CloudWatch Logs where they are combined for higher level reporting using Amazon OpenSearch Service and Kibana Because of its smaller footprint and performance advantages Fluent Bit is recommended instead of Fluent d The following figure illustrates the logging capa bilities of some of the services Teams are then able to search and analyze these logs using tools like Amazon OpenSearch Service and Kibana Amazon Athena can be used to run a one time query against centralized log files in Amazon S3 ArchivedAmazon Web Services Implementing Microservices on AWS 28 Logging capabilities of AWS services Distributed tracing In many cases a set of microservices works together to handle a request Imagine a complex system consisting of tens of microservices in which an error occurs in one of the services in the call chain Even if every microservice is logging properly and logs are consolidated in a central system it can be difficult to find all relevant log messages The central idea of AWS X Ray is the use of correlation IDs which are unique identifiers attached to all requests and messages related to a specific event chain The trace ID is added to HTTP requests in specific tracing headers named XAmznTraceId when the request hits the first XRay integrated service (for example Application Load Balancer or API Gateway) and included in the response Through the X Ray SDK any microservice can read but can also add or updat e this header XRay works with Amazon EC2 Amazon ECS AWS Lambda and AWS Elastic Beanstalk You can use X Ray with applications written in Java Nodejs and NET that are deployed on these services ArchivedAmazon Web Services Implementing Microservices on AWS 29 XRay service map Epsagon is fully managed SaaS that includes tracing for all AWS services third party APIs ( through HTTP calls) and other common services such as Redis Kafka and Elastic The Epsagon service includes monitoring capabilities alerting to the most common services and payload visibility into each and every call your code is making AWS Distro for OpenTelemetry is a secure production ready AWS 
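As a small illustration of centralizing logs, the commands below create a CloudWatch Logs log group for a containerized service, bound its retention, and derive a metric from ERROR lines that alarms and dashboards can use. The log group name and metric namespace are examples only.

# Create a central log group for a containerized service and cap retention
aws logs create-log-group --log-group-name /ecs/checkout-service
aws logs put-retention-policy \
    --log-group-name /ecs/checkout-service \
    --retention-in-days 30

# Turn ERROR lines into a CloudWatch metric
aws logs put-metric-filter \
    --log-group-name /ecs/checkout-service \
    --filter-name checkout-error-count \
    --filter-pattern "ERROR" \
    --metric-transformations metricName=CheckoutErrors,metricNamespace=Microservices/Checkout,metricValue=1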
supported distribution of the OpenTelemetry project Part of the Cloud Native Computing Foundation AWS Distro for OpenTelemetry provides open source APIs libraries and agents to collect distributed traces and metrics for application monitoring With AWS Distro for OpenTelemetry you can instrument your applications just o ne time to send correlated metrics and traces to multiple AWS and partner monitoring solutions Use autoinstrumentation agents to collect traces without changing your code AWS Distro for OpenTelemetry also collects metadata from your AWS resources and managed services to correlate application performance data with underlying infrastructure data reducing the mean time to problem resolution Use AWS Distro for OpenTelemetry to instrument your applications running on Amazon EC2 Amazon ECS Amazon EKS on Amazon EC2 Fargate and AWS Lambda as well as on premises ArchivedAmazon Web Services Implementing Microservices on AWS 30 Options for log analysis on AWS Searching analyzing and visualizing log data is an important aspect of understanding distributed systems Amazon CloudWatch Logs Insights enables you to explore analyze an d visualize your logs instantly This allows you to troubleshoot operational problems Another option for analyzing log files is to use Amazon OpenSearch Service together with Kibana Amazon OpenSearch Service can be used for full text search structured search analytics and all three in combination Kibana is an open source data visualization plugin that seamless ly integrates with the Amazon OpenSearch Service The following figure demonstrates log analysis with Amazon OpenSearch Service and Kibana CloudWatch Logs can be configured to stream log entries to Amazon OpenSearch Service in near real time through a CloudWatch Logs subscription Kibana visualizes the data and exposes a convenient search interface to data stores in Amazon OpenSearch Service This solution can be used in combination with software like ElastAlert to implement an alerting system to send SNS notifications and emails create JIRA tickets and so forth if anomalies spikes or other patterns of interest are detected in the data ArchivedAmazon Web Services Implementing Microservices on AWS 31 Log analysis with Amazon OpenSearch Service and Kibana Another option for analyzing log files is to use Amazon Redshift with Amazon QuickSight QuickSight can be easily connected to AWS data services including Redshift Amazon RDS Aurora Amazon EMR DynamoDB Amazon S3 and Amazon Kinesis CloudWatch Logs can act as a centralized store for log data and in addition to only storing the data it is possible to stream log entries to Amazon Kinesis Data Firehose The following figure depicts a scenario where log entries are streamed from different sources to Redshift using CloudWatch Logs and Kinesis Data Firehose QuickSight uses the data stored in Redshift for analysis reporting and visualization ArchivedAmazon Web Services Implementing Microservices on AWS 32 Log analysis with Amazon Redshi ft and Amazon QuickSight The following f igure depicts a scenario of log analysis on Amazon S3 When the logs are stored in Amazon S3 buckets the log data can be loaded in different AWS data services such as Redshift or Amazon EMR to analyze the data stored in the log stream and find anomalies ArchivedAmazon Web Services Implementing Microservices on AWS 33 Log analysis on Amazon S3 Chattiness By breaking monolithic applications into small microservices the communication overhead increases because microservices have to talk to each other In 
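A hedged example of such an ad hoc analysis with CloudWatch Logs Insights from the AWS CLI is shown below. The log group name is a placeholder and the date arithmetic assumes GNU date, as found on most Linux systems.

# Start a CloudWatch Logs Insights query over the last hour of a service's logs
QUERY_ID=$(aws logs start-query \
    --log-group-name /ecs/checkout-service \
    --start-time $(date -d '1 hour ago' +%s) \
    --end-time $(date +%s) \
    --query-string 'fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 20' \
    --query queryId --output text)

# Queries run asynchronously; poll for the results
aws logs get-query-results --query-id "$QUERY_ID"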
many implementations REST over HTTP is used because it is a lightweight communication protocol but high message volumes can cause issues In some cases you might consider consolidating services that send many messages back and forth If you find yourself in a situation where you consolidate an increased number of services just to reduce chattiness you should review your problem domains and your domain model Protocols Earlier in this whitepaper in the section Asynchronous communication and lightweight messaging different possible protocols are discussed For microservices it is common to use protocols like HTTP Messages exchang ed by services can be encoded in different ways such as human readable formats like JSON or YAML or efficient binary formats such as Avro or Protocol Buffers ArchivedAmazon Web Services Implementing Microservices on AWS 34 Caching Caches are a great way to reduce latency and chattiness of microservices architectures Several caching layers are possible depending on the actual use case and bottlenecks Many microservice applications running on AWS use ElastiCache to reduce the volume of calls to other microservices by caching results locally API Gateway provides a bu ilt in caching layer to reduce the load on the backend servers In addition caching is also useful to reduce load from the data persistence layer The challenge for any caching mechanism is to find the right balance between a good cache hit rate and the timeliness and consistency of data Auditing Another challenge to address in microservices architectures which can potentially have hundreds of distributed services is ensuring visibility of user actions on each service and being able to get a good overall view across all services at an organizational level To help enforce security policies it is important to audit both resource access a nd activities that lead to system changes Changes must be tracked at the individual service level as well a s across services running on the wider system Typically changes occur frequently in microservices architectures which makes auditing changes even more important This section examines the key services and features within AWS that can help you audit your microservices architecture Audit trail AWS CloudTrail is a useful tool for tracking changes in microservices because it enables all API calls made in the AWS Cloud to be logged and sent to either CloudWa tch Logs in real time or to Amazon S3 within several minutes All user and automated system actions become searchable and can be analyzed for unexpected behavior company policy violations or debugging Information recorded includes a timestamp user and account information the service that was called the service action that was requested the IP address of the caller as well as request parameters and response elements CloudTrail allows the definition of multiple trails for the same account which enables different stakeholders such as security administrators software developers or IT ArchivedAmazon Web Services Implementing Microservices on AWS 35 auditors to create and manage their own trail If microservice teams have different AWS accounts it is possible to aggregate trails into a single S3 bucket The advantages of storing the audit trails in CloudWatch are that audit trail data is captured in real time and it is easy to reroute in formation to Amazon OpenSearch Service for search and visualization You can configure CloudTrail to log in to both Amazon S3 and CloudWatch Logs Events and realtime actions Certain changes in systems 
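For example, recent management events can be searched directly with the CloudTrail lookup API without first exporting the trail to Amazon S3. The user name and time window below are illustrative.

# Find recent calls to a specific mutating API recorded by CloudTrail
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=PutRolePolicy \
    --max-results 10

# Or list what a specific principal did in a given window
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=Username,AttributeValue=deploy-user \
    --start-time 2021-11-01T00:00:00Z \
    --end-time 2021-11-02T00:00:00Z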
architectures must be responded to quickly and either action taken to remediate the situation or specific governance procedures to authorize the change must be initiated The integration of Amazon CloudWatch Events with CloudTrail allows it to generate events for all mutating API calls across all AWS services It is also possible to define custom events or generate events based on a fixed schedule When an event is fired and matches a defined rule a pre defined group of people in your organization can be immediately notified so that they can take the appropriate action If the required action can be automated the rule can automatically trigger a built in workflow or invoke a Lambda function to resolve the issue The following figure shows an environment where CloudTrail and CloudWatch Events work tog ether to address auditing and remediation requirements within a microservices architecture All microservices are being tracked by CloudTrail and the audit trail is stored in an Amazon S3 bucket CloudWatch Events becomes aware of operational changes as th ey occur CloudWatch Events responds to these operational changes and takes corrective action as necessary by sending messages to respond to the environment activating functions making changes and capturing state information CloudWatch Events sit on top of CloudTrail and triggers alerts when a specific change is made to your architecture ArchivedAmazon Web Services Implementing Microservices on AWS 36 Auditing and remediation Resource inventory and change management To maintain control over fast changing infrastructure configurations in an agile development envi ronment having a more automated managed approach to auditing and controlling your architecture is essential Although CloudTrail and CloudWatch Events are important building blocks to track and respond to infrastructure changes across microservices AWS Config rules enable a company to define security policies with specific rules to automatically detect track and alert you to policy violations The next example demonstrates how it is possible to detect inform and automatically react to non compliant configuration changes within your microservices architecture A member of the development team has made a change to the API Gateway for a microservice to allow the endpoint to accept inbound HTTP traffic rather than only allowing HTTPS requests Because this situation has been previously identified as a security compliance concern by the organization an AWS Config rule is already monitoring for this condition ArchivedAmazon Web Services Implementing Microservices on AWS 37 The rule identifies the change as a security violation and performs two actions: it creates a log of the detected change in an Amazon S3 bucket for auditing and it creates an SNS notification Amazon SNS is used for two purposes in our scenario: to send an email to a specified group to inform about the security violation and to add a message to an SQS queue Next the message is picked up and the compliant state is restored by changing the API Gateway configuration Detecting security violations with AWS Config Resources • AWS Architecture Center • AWS Whitepapers • AWS Architecture Monthly • AWS Architecture Blog • This Is My Architecture videos • AWS Answers • AWS Documentation ArchivedAmazon Web Services Implementing Microservices on AWS 38 Conclusion Microservices architecture is a distributed design approach intended to overcome the limitations of traditional monolithic architectures Microservices help to scale applications and 
organizations while improving cycle times However they also come with a couple of challenges that might add additional arc hitectural complexity and operational burden AWS offers a large portfolio of managed services that can help product teams build microservices architectures and minimize architectural and operational complexity This whitepaper guide d you through the relev ant AWS services and how to implement typical patterns such as service discovery or event sourcing natively with AWS services ArchivedAmazon Web Services Implementing Microservices on AWS 39 Document Revisions Date Description November 9 2021 Integration of Amazon EventBridge AWS OpenTelemetry AMP AMG Container Insights minor text changes August 1 2019 Minor text changes June 1 2019 Integration of Amazon EKS AWS Fargate Amazon MQ AWS PrivateLink AWS App Mesh AWS Cloud Map September 1 2017 Integration of AWS Step Functions AWS XRay and ECS event streams December 1 2016 First publication Contributors The following individuals contributed to this document: • Sascha Möllering Solutions Architecture AWS • Christian Müller Solutions Architecture AWS • Matthias Jung Solutions Architecture AWS • Peter Dalbhanjan Solutions Architecture AWS • Peter Chapman Solutions Architecture AWS • Christoph Kassen Solutions Architecture AWS ArchivedAmazon Web Services Implementing Microservices on AWS 40 • Umair Ishaq Solutions Architecture AWS • Rajiv Kumar Solutions Architecture AWS
General
SAP_HANA_on_AWS_Operations_Overview_Guide
SAP HANA on AWS Operations Overview Guide December 2017 The PDF version of the paper has been archived For the latest HTML version of the paper see: https://docsawsamazoncom/sap/latest/saphana/saphanaonawsoperationshtml Archived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the info rmation in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditio ns or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its custom ers Archived Contents Introduction 1 Administration 1 Starting and Stopping EC2 Instances Running SAP HANA Hosts 2 Tagging SAP Resources on AWS 2 Monitoring 4 Automation 4 Patching 5 Backup/Recovery 7 Creating an Image of an SAP HANA System 8 AWS Services and Components for Backup Solutions 9 Backup Destination 11 AWS Command Line Interface 12 Backup Example 13 Scheduling and Executing Backups Remotely 14 Restoring SAP HANA Backups and Snapshots 19 Networking 21 EBS Optimized Instances 22 Elastic Network Interfaces (ENIs) 22 Security Groups 23 Network Conf iguration for SAP HANA System Replication (HSR) 24 Configuration Steps for Logical Network Separation 25 SAP Support Access 26 Support Channel Setup with SAProuter on AWS 26 Support Channel Setup with SAProuter On Premises 28 Security 29 OS Hardening 29 Archived Disabling HANA Services 29 API Call Logging 29 Notifications on Access 30 High Availability and Disaster Recovery 30 Conclusion 30 Contributors 30 Appendix A – Configuring Linux to Recognize Ethernet Devices for Multiple ENIs 31 Notes 33 Archived Abstract Amazon Web Services (AWS) offers you the ability to run your SAP HANA systems of various sizes and operating systems Running SAP systems on AWS is very similar to running SAP systems in your data center To a SAP Basis or NetWeaver administrator there are minimal differences between the two environments There are a number of AWS Cloud considerations relating to security storage compute configurations management and monitoring that will help you get the most out of your SAP HANA implementatio n on AWS This whitepaper provides the best practices for deployment operations and management of SAP HANA systems on AWS The target audience for this whitepaper is SAP Basis and NetWeaver administrators who have experience running SAP HANA systems in an onpremises environment and want to run their SAP HANA systems on AWS ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 1 Introduction This guide provides best practice s for operating SAP HANA systems that have been deployed on Amazon Web Services (AWS) either using the SAP HANA Quick Start reference deployment process1 or manually following the instructions in Setting up AWS Resources and the SLES Operating System for SAP HANA Installation 2 This guide is not intended to replace any of the standard SAP documentation See the following SAP guides and notes: o SAP Library (helpsapcom) SAP HANA 
Administration Guide3 o SAP installation gui des4 (These require SAP Support Portal access ) o SAP notes5 (These require SAP Support Portal access ) This guide assumes that you have a basic kno wledge of AWS If you are new to AWS read the following guides before continuing with this guide: o Getting Started with AWS6 o What is Amazon EC2?7 In addition the following SAP on AWS guides can be found here:8 o SAP on AWS Implementation and Operations Guide provides best practices for achieving optimal performance availability and reliability and lower total cost of ownership (TCO) while running SAP solutions on AWS9 o SAP on AWS High Availability Guide explains how to configure SAP systems on Amazon Elastic Compute Cloud (Amaz on EC2 ) to protect your application from various single points of failure10 o SAP on AWS Backup and Recovery Guide explains how to back up SAP systems running on AWS in contrast to backing up SAP systems on traditional infrastructure11 Administration This section provides guidance on common administrative tasks required to operate an SAP HANA system including information about starting stopping and cloning systems ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 2 Start ing and Stopping EC2 Instances Running SAP HANA Hosts At any time you can stop one or multiple SAP HANA h osts Before stopping the EC2 instance of an SAP HANA host first stop SAP HANA on that instance When you resume the instance it will automatically start with the same IP address network and storage configuration as before You also have the option of using the EC2 Scheduler to schedule starts and stops of your EC2 instances12 The EC2 Scheduler relies on the native shutdown and start up mechanisms of the operating sy stem These native mechanisms will invoke the orderly shutdown and startup of your SAP HANA instance Here is an architectural diagram of how the EC2 S cheduler work s: Figure 1: EC2 Scheduler Tagging SAP Resources on AWS Tagging your SAP resources on AWS can significantly simplify identification security manageability and billing of those resources You can tag your resources using the AWS Management C onsole or by using the createtags functionality of the AWS Command Line Interface (AWS CLI ) This table lists some example tag name s and tag values : Tag Name Tag Value Name SAP server’s virtual (host) name ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 3 Tag Name Tag Value Environment SAP server’s landscape role such as: SBX DEV QAT STG PRD etc Application SAP solution or product such as: ECC CRM BW PI SCM SRM EP etc Owner SAP point of contact Service Level Know n uptime and downtime schedule After you have tagged your resources you can then apply specific security restrictions to them for example access control based on the tag values Here is an example of such a policy from our AWS blog :13 { "Version" : "2012 1017" "Statement" : [ { "Sid" : "LaunchEC2Instances" "Effect" : "Allow" "Action" : [ "ec2:Describe*" "ec2:RunInstances" ] "Resource" : [ "*" ] } { "Sid" : "AllowActionsIfYouAreTheOwner" "Effect" : "Allow" "Action" : [ "ec2:StopInstances" "ec2:StartInstances" "ec2:RebootInstances" "ec2:TerminateInstances" ] "Condition" : { "StringEquals" : { "ec2:ResourceTag/PrincipalId" : "${aws:userid}" } } ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 4 "Resource" : [ "*" ] } ] } The AWS Identity and Access Management ( IAM ) policy only allows specific permissions based on the tag value In this scenario the 
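As an illustration, the example tags from the table above could be applied with the create-tags command. The instance and volume IDs, as well as the tag values, are placeholders; additional tags such as Service Level can be added in the same way.

# Tag an SAP HANA instance and one of its EBS volumes so that access policies,
# monitoring, and billing reports can filter on them
aws ec2 create-tags \
    --resources i-0123456789abcdef0 vol-0123456789abcdef0 \
    --tags Key=Name,Value=saphanadb01 \
           Key=Environment,Value=PRD \
           Key=Application,Value=BW \
           Key=Owner,Value=basis-team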
current user ID must match the tag value in order to be granted permissions For more information on tagging refer to our AWS documentation and our AWS blog 14 15 Monitoring There are various AWS SAP and third party solutions that you can leverage for monitoring your SAP workloads Here are some of the core AWS monitoring services: • Amazon CloudWatch – CloudWatch is a monitoring service for AWS resources16 It’s critical for SAP workloads where it’s used to collect resource utilization logs and create alarms to automatically react to changes in AWS resources • AWS CloudTrail – CloudTrail keeps track of all API calls made within your AWS account It captures key metrics about the API calls and can be useful for automating trail creation for your SAP resources Configuring CloudWatch detailed monitoring for SAP resources is mandatory for getting AWS and SAP support You can use native AWS monitoring services in a compl ement ary fashion with the SAP Solution Manager Third party monitoring tools can be found on AWS Marketplace 17 Automation AWS offers multiple options for programmatically scripting your resources to operate or scale them in a predictable and repeatable manner You can leverage AWS CloudFormation to aut omate and operate SAP systems on AWS Here are some examples for automating your SAP environment on AWS: ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 5 Area Activities AWS Services Infrastructure Deployment Provision new SAP environment SAP system cloning AWS CloudFormation18 AWS CLI19 Capacity Management Automate scaleup/scaleout of SAP application servers AWS Lambda 20 AWS Cloud Formation Operations SAP b ackup automation (see the Backup Example ) Perform ing monitor ing and visualization Amazon CloudWatch Amazon EC2 System s Manager Patching There are two ways for you to patch your SAP HANA database with alternative s for minimizing cost and/or downtime With AWS y ou can provision additional servers as needed to minimize downtime for patching in a cost effective manner You can also minimize risks by creating on demand copies of your existing production SAP HANA databases for life like production readiness testing This table summarizes the tradeoffs of the two patching methods : Patching Method Benefits Technologies Available Patch an existing server [x] Patch existing OS and DB [x] Longest downtime to existing server and DB [] No costs for additional on demand instances [] Lowest levels of relative complexity and setup tasks involved Native OS patching tools Patch Manager21 Native SAP HANA patching tools22 Provision and patch a new server [] Leverage latest AMIs (only DB patch needed) [] Shortest downtime to existing server and DB [] Can patch and test OS and DB separately and together [x] More costs for additional on demand instances [x] More complexity and setup tasks involved Amazon Machine Image (AMI) 23 AWS CLI24 AWS Cloud Formation25 SAP HANA System Replication26 SAP HANA System Cloning27 SAP HANA backups28 SAP Notes : 198488229 Using HANA System Replication for Hardware Exchange with minimum/zero downtime ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 6 Patching Method Benefits Technologies Available 191330230 HANA: Suspend DB connections for short maintenance tasks The first method (patch an existing server) involves patching the operating system (OS) and database (DB) components of your SAP HANA server The goal of the method is to minimize any additional server costs and avoid any tasks needed to set up 
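A small sketch of this is shown below: detailed monitoring is enabled for the SAP HANA instance and a CPU alarm notifies an SNS topic. The instance ID, threshold, and topic ARN are examples and should be adapted to your landscape.

# Enable detailed (1-minute) CloudWatch monitoring for the SAP HANA instance
aws ec2 monitor-instances --instance-ids i-0123456789abcdef0

# Alarm on sustained high CPU and notify the Basis team through an SNS topic
aws cloudwatch put-metric-alarm \
    --alarm-name saphanadb01-cpu-high \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 3 \
    --threshold 90 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:basis-alerts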
additional systems or tests This method may be most appropriate if you have a well defined patching process and are satisfied with your current downtime and costs With this method you must use the correct OS update process and too ls for your Linux distribution S ee this SUSE blog31 and Red Hat FAQ page32 or check each vendor’s documentation for their specific processes and procedures In addition to patching tools provided by our Linux partners AWS offers a free of charge patching service33 called Patch Manager 34 At th e time of this writing Patch Manager support s Red Hat 35 Patch Manager is an automated tool that helps you simplify your OS patching process You can scan your EC2 instances for missing patches and automatically install them select the timing for patch rollouts control instance reboots and many other tasks You can also define auto approval rules for patches with an added ability to black list or white list specific patches control how the patches are deployed on the target instances (eg stop services before applying the patch) and schedule the automatic rollout through maintenance windows The second method (provision and patch a new server) involves provisioning a new EC2 instance that will receive a copy of your source system and database The goal of the method is to minimize downtime minimize risks (by having production data and executing production like testing) and hav e repeatable proc esses This method may be most appropriate if you are looking for higher degrees of automation to enable these goals and are comfortable with the trade offs This method is more complex and has a many more options to fit your requirements Certain options are not exclusive and can be used together For example your AWS CloudFormation template can include the latest Amazon Machine Images ( AMIs ) which you can then use to automate the provisioning set up and configuration of a new SAP HANA server ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 7 Here is an ex ample of a process that can be used to automate OS/HANA patching /upgrade : 1 Download the AWS CloudFormation template offered in the SAP HANA Quick Start 36 2 Update the CloudFormation template with the latest OS AMI ID and execute the updated template to provision a new SAP HANA server The latest OS AMI ID has the specific security patches that your organization needs As part of the provisioning process you need to pro vide the latest SAP HANA installation binaries to get to the required version This allow s you to provision on a new HANA system with the required OS version and security patches along with SAP HANA software versions 3 After the new SAP HANA system is available use one of the following methods to copy the data from the original SAP HANA instance to the newly created system : o SAP HANA native backup/restore o Use SAP HANA System Replication (HSR) technology to replicate the data and then perform an HSR take over o Take snapshots of the old system’s Amazon Elastic Block Store (Amazon EBS ) volumes and create new EBS volumes from it Mount them in the new environment (M ake sure that the HANA SID stays the same for minimal post processing ) o Use new SAP HANA 20 functionality such as SAP HANA Cloning 37 The new system will become a clone of the original system At the end of this process you will have a new SAP HANA system that is ready to test SAP Note 198488238 (Using HANA System Replication for Hardware Exchange with Minimum/Z ero Downtime ) has specifi c recommendations and guidelines on the 
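One possible starting point, assuming the instances are already configured as Systems Manager managed instances with the agent and instance profile in place, is to run a scan-only patch operation through Run Command. The tag filter below is hypothetical, and operating system support should be verified against the current Patch Manager documentation.

# Scan (without installing) for missing patches on all instances tagged Application=HANA
aws ssm send-command \
    --document-name "AWS-RunPatchBaseline" \
    --targets Key=tag:Application,Values=HANA \
    --parameters Operation=Scan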
process for promoting to production Backup and Recovery This section provides an overview of the AWS services used in the backup and recovery of SAP HANA systems and provides an example backup and recovery scenario This guide does not include detailed instructions on how to execute ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 8 database backups using native HANA backup and recovery features or third party backup tools Please refer to the standard OS SAP and SAP HANA documentation or the documentation provided by backup software vendor s In addition backup schedules frequency and retention periods m ight vary with your system type and business requirements See the following standard SAP documentation f or guidance on these topics (SAP notes require SAP Support Portal access ) Note : Both general and advanced backup and recovery concepts for SAP systems on AWS can be found in detail in the SAP on AWS Backup and Recovery Guide 39 SAP Note Description 164214840 FAQ: SAP HANA Database Backup & Recovery 182120741 Determining required recovery files 186911942 Checking backups using hdbbackupcheck 187324743 Checking recoverability with hdbbackupdiag check 165105544 Scheduling SAP HANA Database Backups in Linux 248417745 Sche duling backups for multi tenant SAP HANA Cockpit 20 Creating an Image of an SAP HANA System You can use the AWS Management Console or the command line to create your own AMI based on an existing instance46 For more information see the AWS documentation 47 You can use an AMI of your SAP HANA instance for the following purposes: o To c reate a full offline system backup (of the OS / usr/sap HANA shared backup data and log files ) – AMIs are automatically saved in multiple Availability Zones within the same Region o To move a HANA system from one R egion to another – You can create an image of an existing EC2 instance and move it to another Region by following the instructions in the AWS documentation 48 Once the AMI has been copied to the target R egion the new instance can be launched there ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 9 o To c lone an SAP HANA system – You can creat e an AMI of an existing SAP HANA system to create an exact clone of the system See the following section for additional information Note – See the restore section later in this whitepaper to view the recommended restore steps for production environments Tip: The SAP HANA system should be in a consistent state before you creat e an AMI To do this stop the SAP HANA instance before creating the AMI or by following the instructions in SAP Note 1703435 (requires SAP Support Portal access) 49 AWS Services and Components for Backup Solutions AWS provides a number of services and options for storage and backup including Amazon Simple Storage Service ( Amazon S3) AWS Identity and Access Management (IAM) and Amazon Glacier Amazon S3 Amazon S3 is the center of any SAP backup and recovery solution on AWS50 It provides a highly durable storage infrastructure designed for mission critical and primary data storage It is designed to provide 99999999999% durability and 9999% availability over a given year See the Amazon S3 documentation for detailed instructions on how to create and configure an S3 bucket to store your SAP HANA backup files51 AWS IAM With IAM you can securely control access to AWS services and resources for your users52 You can create and manage AWS users and groups and use permissions to grant user access to AWS resources You can 
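For example, an image of a stopped SAP HANA host can be created and copied to another Region from the AWS CLI. The instance ID, AMI ID, names, and Regions below are placeholders; stop SAP HANA and the instance first, or follow SAP Note 1703435, so the image is taken from a consistent state.

# Create an AMI of the SAP HANA host
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "saphanadb01-offline-image-2020-07-01" \
    --description "Offline image of SAP HANA host saphanadb01"

# Optionally copy the resulting AMI to another Region, for example for DR purposes
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --region us-west-2 \
    --name "saphanadb01-offline-image-2020-07-01-dr"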
create roles in IAM and manage permissions to control which operations can be performed by the entity or AWS service that assumes the role You can also define which entity is allowed to assume the role During the deployment process CloudFormation creates a n IAM role that allow s access to get objects from and/or put objects in to Amazon S3 That role is ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 10 subsequently assigned to each EC2 instance that is hosting SAP HANA master and worker nodes at launch time as they are deployed Figure 2 : IAM r ole example To ensure security that applies the principle of least privilege permissions for this role are limited only to actions that are required for backup and recovery {"Statement":[ {"Resource":"arn:aws:s3::: <yours3bucketname>/*" "Action":["s3:GetObject""s3:PutObject""s3:DeleteObject" "s3:ListBucket""s3:Get*""s3:List*"] "Effect":"Allow"} {"Resource":"*""Action":["s3:List*""ec2:Describe*""ec2:Attach NetworkInterface" "ec2:AttachVolume""ec2:CreateTags""ec2:CreateVolume""ec2:RunI nstances" "ec2:StartInstances"]"Effect":"Allow"}]} To add functions later you can use the AWS Management Console to modify the IAM role Amazon Glacier Amazon Glacier is an extremely low cost service that provides secure and durable storage for data archiving and backup53 Amazon Glacier is optimized for data that is infrequently accessed and provides multiple options like expedited standard and bulk methods for data retrieval With standard and bulk retrievals data is available in 3 5 hours or 5 12 hours respectively ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 11 However with expedited retrieval Amazon Glacier provides you with an option to retrieve data in 3 5 minutes which can be ideal for occasional urgen t requests With Amazon Glacier you can reliably store large or small amounts of data for as little as $001 per gigabyte per month a significant savings compared to on premises solutions You can use lifecycle policies as explained in the Amazon S3 Developer Guide to push SAP HANA backups to Amazon Glacier for long term archiv ing54 Backup Destination The primary difference between backing up SAP systems on AWS compared with traditional on premises infrastructure is the backup destination Tape is the typical backup destination used with on premises infrastructure On AWS backups are stored in Amazon S3 Amazon S3 has many benefits over tape including the ability to automatically store b ackups “offsite” from the source system since data in Amazon S3 is replicated across multiple facilities within the AWS R egion SAP HANA systems provisioned using the SAP HANA Quick Start reference deploy ment are configured with a set of EBS volumes to be used as an initial local backup destination HANA backups are first stored on these local EBS volumes and then copied to Amazon S3 for long term storage You can use SAP HANA S tudio SQL commands or the DBA Cockpit to start or schedule SAP HANA d ata backups L og backups are written automatically unless disabled The /backup file system is configured as part of the deployment process Figure 3 : SAP HANA file system l ayout ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 12 The SAP HANA globalini configuration file has been customized by the SAP HANA Quick Start reference deployment process as follows : database backups go directly to /backup/data/<SID> while automatic log archival files go to /backup/log/<SID> [persistence] basepath_shared = no 
savepoint_intervals = 300 basepath_datavolumes = /hana/data/<SID> basepath_logvolumes = /hana/log/<SID> basepath_databackup = /backup/data/<SID> basepath_logbackup = /backup/log/<SID> Some third party backup tools like Commvault NetBackup and TSM are integrated with Amazon S3 capabilities and can be used to trigger and save SAP HANA backups directly into Amazon S3 without needing to store th e backups on EBS volumes first AWS Command Line I nterface The AWS CLI which is a unified tool to manage AWS services is instal led as part of the base image55 Using various commands you can control multiple AWS services from the command line directly and aut omate t hem through scripts Access to your S3 bucket is available through the IAM role assigned to the instance (discussed earlier ) Using the AWS CLI commands for A mazon S3 you can list the contents of the previously created bucket back up files and restore files as explained in the AWS CLI documentation56 imdbmaster:/backup # aws s3 ls region=us east1 s3://node2 hanas3bucket gcynh5v2nqs3 Bucket: node2 hanas3bucket gcynh5v2nqs3 Prefix: LastWriteTime Length Name ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 13 Backup Example Here are the steps you might take for a typical backup task: 1 In the SAP HANA Backup E ditor choose Open Backup Wizard You can also open the B ackup Wizard by r ightclicking the system that you want to back up and choo sing Back Up a Select destination type File This will back up the database to files in the specified file system b Specify the backup destination ( /backup/data/<SID>) and the backup prefix Figure 4 : SAP HANA backup example c Choose Next and then Finish A confirmation message will appear when the backup is complete d Verify that the backup files are available at the OS level The next step is to push or synchronize the backup files from the /backup file system to Amazon S3 by using the aws s3 sync command57 imdbmaster:/ # aws s3 sync backup s3://node2 hanas3bucket gcynh5v2nqs3 region=us east1 ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 14 2 Use the AWS Management Console to v erify that the files have been pushed to Amazon S3 You can also use the aws s3 ls comma nd shown previously in the AWS Command Line Interface section 58 Figure 5 : Amazon S3 bucket contents after backup Tip: The aws s3 sync command will only upload new files that don’t exist in Amazon S3 Use a periodic ally scheduled cron job to sync and then delete files that have been uploaded See SAP Note 1651055 for scheduling periodic backup jobs in Linux and extend the supplied scripts with aws s3 sync commands59 Scheduling and Executing Backups Remotely The Amazon EC2 System s Manager Run Command along with Amazon CloudWatch Events can be leveraged to schedule backups for your HANA SAP system remotely with the need to log in to the EC2 instances You can also leverage cron or any other instance level scheduling mechanism The Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances A managed instance is any EC2 instance or on premises machine in your hybrid environment that has been configured for Systems Manager The Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 15 scale You can use the Run Command from the Amazon EC2 console the AWS CLI Windows PowerShell or the AWS SDKs Systems Manager 
Prerequisites

Systems Manager has the following prerequisites.

Supported Operating System (Linux): Instances must run a supported version of Linux.

64-bit and 32-bit systems:
• Amazon Linux 2014.09, 2014.03, or later
• Ubuntu Server 16.04 LTS, 14.04 LTS, or 12.04 LTS
• Red Hat Enterprise Linux (RHEL) 6.5 or later
• CentOS 6.3 or later

64-bit systems only:
• Amazon Linux 2015.09, 2015.03, or later
• Red Hat Enterprise Linux (RHEL) 7.x or later
• CentOS 7.1 or later
• SUSE Linux Enterprise Server (SLES) 12 or higher

Roles for Systems Manager: Systems Manager requires an IAM role for instances that will process commands and a separate role for users executing commands. Both roles require permission policies that enable them to communicate with the Systems Manager API. You can choose to use Systems Manager managed policies, or you can create your own roles and specify permissions. For more information, see Configuring Security Roles for Systems Manager.60 If you are configuring on-premises servers or virtual machines (VMs) that you want to manage using Systems Manager, you must also configure an IAM service role. For more information, see Create an IAM Service Role.61

SSM Agent (EC2 Linux instances): SSM Agent processes Systems Manager requests and configures your machine as specified in the request. You must download and install SSM Agent on your EC2 Linux instances. For more information, see Installing SSM Agent on Linux.

To schedule remote backups, here are the high-level steps:

1. Install and configure the Systems Manager agent on the EC2 instance. For detailed installation steps, see http://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html#sysman-install-ssm-agent

2. Provide SSM access to the EC2 instance role that is assigned to the SAP HANA instance. For detailed information on how to assign SSM access to a role, see http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-access.html

3. Create an SAP HANA backup script. A sample script is shown below. You can use this as a starting point and then modify it to meet your requirements.

#!/bin/sh
set -x
S3Bucket_Name=<Name of the S3 bucket where backup files will be copied>
TIMESTAMP=$(date +\%F\_%H\%M)
exec 1>/backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out 2>&1
echo "Starting to take backup of HANA database and upload the backup files to S3"
echo "Backup timestamp for $SAPSYSTEMNAME is $TIMESTAMP"
BACKUP_PREFIX=${SAPSYSTEMNAME}_${TIMESTAMP}
echo $BACKUP_PREFIX
# source the HANA environment
source $DIR_INSTANCE/hdbenv.sh
# execute the backup using the hdbuserstore key named BACKUP
hdbsql -U BACKUP "backup data using file ('$BACKUP_PREFIX')"
echo "HANA backup is completed"
echo "Continue with copying the backup files into S3"
echo $BACKUP_PREFIX
sudo -u root /usr/local/bin/aws s3 cp --recursive /backup/data/${SAPSYSTEMNAME}/ s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/data/ --exclude "*" --include "${BACKUP_PREFIX}*"
echo "Copying HANA database log files into S3"
sudo -u root /usr/local/bin/aws s3 sync /backup/log/${SAPSYSTEMNAME}/ s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/log/ --exclude "*" --include "log_backup*"
sudo -u root /usr/local/bin/aws s3 cp /backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}

Note: This script assumes that hdbuserstore has a key named BACKUP.
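For reference, a secure store key like the one the script expects can be created ahead of time with the SAP HANA hdbuserstore tool as the <sid>adm user. The host name, port, and database user shown below are placeholders for illustration only; substitute the values for your own system:

# create a key named BACKUP in the SAP HANA secure user store (all values are examples)
hdbuserstore SET BACKUP <hostname>:3<instance-number>15 <backup-user> <password>
# verify that the key was stored
hdbuserstore LIST BACKUP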
4. At this point, you can test a one-time backup by executing an SSM command directly:

aws ssm send-command --instance-ids <HANA Master Instance ID> --document-name "AWS-RunShellScript" --parameters commands="sudo -u <HANA_SID>adm TIMESTAMP=$(date +\%F\_%H\%M) SAPSYSTEMNAME=<HANA_SID> DIR_INSTANCE=/hana/shared/${SAPSYSTEMNAME}/HDB00 -i /usr/sap/HDB/HDB00/hana_backup.sh"

Note: For this command to execute successfully, you will have to enable <sid>adm login using sudo.

5. Using CloudWatch Events, you can schedule backups remotely at any desired frequency. Navigate to the CloudWatch Events page and create a rule.

Figure 6: Amazon CloudWatch event rule creation

When configuring the rule:
• Choose Schedule.
• Select SSM Run Command as the Target.
• Select AWS-RunShellScript (Linux) as the Document type.
• Choose InstanceIds or Tags as Target Keys.
• Choose Constant under Configure Parameters and type the run command.

Restoring SAP HANA Backups and Snapshots

Restoring SAP Backups

To restore your SAP HANA database from a backup, perform the following steps:

1. If the backup files are not already available in the /backup file system but are in Amazon S3, restore the files from Amazon S3 by using the aws s3 cp command.62 This command has the following syntax:

aws --region <region> s3 cp <s3-bucket/path> <local-destination> --recursive --exclude "*" --include "<backup-prefix>*"

For example:

imdbmaster:/backup/data/YYZ # aws --region us-east-1 s3 cp s3://node2-hanas3bucket-gcynh5v2nqs3/data/YYZ . --recursive --exclude "*" --include "COMPLETE*"

2. Recover the SAP HANA database by using the Recovery Wizard, as outlined in the SAP HANA Administration Guide.63 Specify File as the Destination Type and enter the correct Backup Prefix.

Figure 7: Restore example

3. When the recovery is complete, you can resume normal operations and clean up backup files from the /backup/<SID>/* directories.

Restoring EBS/AMI Snapshots

To restore EBS snapshots, perform the following steps:

1. Create a new volume from the snapshot:

aws ec2 create-volume --region us-west-2 --availability-zone us-west-2a --snapshot-id snap-1234abc123a12345a --volume-type gp2

2. Attach the newly created volume to your EC2 host:

aws ec2 attach-volume --region us-west-2 --volume-id vol-4567c123e45678dd9 --instance-id i-03add123456789012 --device /dev/sdf

3. Mount the logical volume associated with SAP HANA data on the host:

mount /dev/sdf /hana/data

4. Start your SAP HANA instance.

Note: For large, mission-critical systems, we highly recommend that you execute the volume initialization command on the database data and log volumes after the AMI restore but before starting the database. Executing the volume initialization command helps you avoid extensive wait times before the database is available. Here is a sample fio command that you can use:

sudo fio --filename=/dev/xvdf --rw=read --bs=128K --iodepth=32 --ioengine=libaio --direct=1 --name=volume-initialize

For more information about initializing Amazon EBS volumes, see the AWS documentation.64

Restoring AMI Snapshots

You can restore your SAP HANA AMI snapshots through the AWS Management Console. On the EC2 Dashboard, select AMIs in the left-hand navigation. Choose the AMI that you want to restore, expand Actions, and select Launch.

Figure 8: Restore AMI snapshot

Networking

SAP HANA components communicate over the following logical network zones:
• Client zone – to communicate
with different clients such as SQL clients SAP Application Server SAP HANA Extended Application Services ( XS) SAP HAN A Studio etc • Internal zone – t o communicate with hosts in a distributed SAP HANA system as well as for SAP HSR • Storage zone – t o persist SAP HANA data in the storage infrastructure for resumption after start or recovery after failure Separating network zones for SAP HANA is considered both an AWS and an SAP best practice because it enables you to isolate the traffic required for each communication channe l ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 22 In a traditional bare metal setup these different network zones are set up by having multiple physical network cards or virtual LANs ( VLANs ) Conversely on the AWS Cloud this network isolation can be achieved simply through the use of elastic networ k inter faces (ENI s) combined with s ecurity groups Amazon EBS optimized instances can also be used for further i solation for storage I/O EBSOptimized Instances Many newer Amazon EC2 instance types such as the X1 use an optimized configuration stack and provide additional dedicated capacity for Amazon EBS I/O These are called EBS optimized instances 65 This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance Figure 9 : EBS optimized instances Elastic Network Interfaces (ENI s) An ENI is a virtual network interface that you can attach to an EC2 instance in an Amazon Virtual Private Cloud (Amazon VPC) With ENI s you can create different logical network s by specifying multiple private IP addresses for your instances For more information about ENIs see the AWS documentation 66 In the following example two ENIs are attached to each SAP HANA node as well as in separate communication channel for storage ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 23 Figure 10 : ENIs a ttached to SAP HANA nodes Security Groups A security group acts as a virtual firewall that controls the traffic for one or more instances When you launch an instance you associate one or more security groups with the instance You add rules to each security group that allow traffic to or from its associated instances Y ou can modify the rules for a security group at any time The new rules are automatically applied to all instances that are associated with the security group To learn more about security groups see the AWS documentation 67 In the following example EN I1 of each instance shown is a member of the same security group that controls inbound and outbound network traffic for the client network ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 24 Figure 11: ENIs and se curity groups Network Configuration for SAP H ANA System Replication (HSR) You can configure a dditional ENIs and security groups to further isolate inter node communication as well as SAP HSR network traffic In Figure 10 ENI 2 is dedicated for inter node communication with its own security group (not shown) to secure client traffic from inter node communication ENI 3 is configured to secure SAP HSR traffic to another A vailability Zone within the same Region In this exam ple the target SAP HANA cluster would be configured with additional ENIs similar to the source environment and ENI 3 would share a common security group ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 25 Figure 12 : Further isolation with a dditional ENIs and 
s ecurity groups Configuration Steps for L ogical Network Separation To configure your logical network for SAP HANA follow these steps : 1 Create new security groups to allow for isolation of client internal communication and if applicable SAP HSR network traffic See Ports and Connections in the SAP HANA documentation to learn about the list of ports used for different network zones68 For more information about how to create and configure security groups see the AWS documentation 69 2 Use Secure Shell ( SSH ) to connect to your EC2 instance at the OS level Follow the steps described in Appendix A to configure the OS to properly recognize and name the Ethernet devices associated with the new elastic network interfaces (ENIs ) you will be creating 3 Create new ENI s from the AWS M anage ment Console or through the AWS CLI Make sure that the new ENIs are created in the subnet where your SAP HANA instance is deployed As you create each new ENI associate it with the appropriate security group you created in step 1 For more information ab out how to create a new ENI see the AWS documentation 70 4 Attach the ENIs you created to your EC2 instance where SAP HANA is installed For more information about how to attach an ENI to an EC2 instance see the AWS documentation71 ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 26 5 Create virtual host names and map them to the IP addresses associated with client internal and replication network interfaces Ensure that host nam etoIPaddress resolution is working by creating entries in all applicable host files or in the Domain Name System (DNS) When complete test that the virtual host names can be resolved from all SAP HANA nodes clients etc 6 For scale out deployments configure SAP HANA i nter service communication to let SAP HANA communicate over the internal network To learn more about this step s ee Configuring SAP HANA Inter Service Communication in the SAP HANA documentation72 7 Configure SAP HANA hostname resolution to let SAP HANA communicate over the replication network for SAP HSR To learn more about this step s ee Configuring Hostname Resolution for SAP HANA System Replication in the SAP HANA documentation 73 SAP Support Access In some situations it may be necessary to allow an SAP support engineer to access your SAP HANA s ystems on AWS The following information serves only as a supplement to the information contained in the “Getting Support” section of the SAP HANA Administration Guide 74 A few steps are required to configure proper connectivity to SAP These steps differ depending on whether you w ant to use an existing remote network connection to SAP or you are setting up a new connection directly with SAP from systems on AWS Support Channel Setup with SAProuter on AWS When setting up a direct support connection to SAP from AWS consider the following steps: 1 For the SAProuter instance c reate and configure a specific SAProuter security group which only allows the required inbound and outbound access to the SAP s upport network This should be limited to a specific IP address that SAP gives you to connect to along with TCP port 3299 See the Amazon EC2 security group documentation for additional details about creating and configuring s ecurity groups75 ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 27 2 Launch t he instance that the SAProuter software will be installed on into a public subnet of the Amazon VPC and assign it an Elastic IP a ddress (EIP) 3 Install the SAProuter software and 
create a saprouttab file that allows access from SAP to your SAP HANA system on AWS 4 Set up the connection with SAP For your internet connection use Secure Network Communication (SNC) For more information see the SAP Remote Support – Help page76 5 Modify the ex isting SAP HANA security groups to trust the new SAProuter security group you have created Tip: For added security shut down the EC2 instance that hosts the SAProuter service when it is not needed for support purposes Figure 13 : Support connectivity with SAProuter on AWS ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 28 Support Channel Setup with SAProuter OnPremises In many cases you may already have a support connection configured between your data center and SAP This can easily be extended to support SAP systems on AWS This scenario assumes that connectivity between your data center and AWS has already been established either by way of a secure VPN tunnel over the internet or by using AWS Direct Connect 77 You can extend this connectivity as follows : 1 Ensure that the proper saprouttab entries exist to allow access from SAP to resources in the Amazon VPC 2 Modify the SAP HANA s ecurity groups to allow access from the on premises SAProuter IP address 3 Ensure that the proper firewall ports are o pen on your gateway to allow traffic to pass over TCP port 3299 Figure 14 : Support connectivity with SAProuter onp remises ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 29 Security This section discusses additional security topics you may want to consider that are not covered in the SAP HANA Quick Start reference deployment guide Here are additional AWS security resources to help you achieve the level of security you require for your SAP HANA environment on AWS: • AWS Cloud Security C enter78 • CIS AWS Foundation whitepaper79 • AWS Cloud Security whitepaper80 • AWS Cloud Security Best Practices whitepaper81 OS Hardening You may want to lock down the OS configurat ion further for example to avoid providing a DB admin istrator with root credentials when logging into an instance You can also refer to the followin g SAP notes: • 1730999 : Configuration changes in HANA appliance82 • 1731000 : Unrecommended configuration changes83 Disabling HANA Services HANA services such as HANA XS are optional and should be deactivated i f they are not needed For instructions see SAP N ote 1697613 : Remove XS Engine out of SAP HANA d atabase 84 In case of service deactivation you should also remove the TCP ports from the SAP HANA AW S security groups for complete security API C all Logging AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you85 The recorded information includes the identity of the API caller the time of the API call the source IP address of the API caller the request parameters and the response elements returned by the AWS service ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 30 With CloudTrail you can get a history of A WS API calls for your account including API calls made via the AWS Management Console AWS SDKs command line tools and higher level AWS services (such as CloudFormation) The AWS API call history produced by CloudTrail enables security analysis resourc e change tracking and compliance auditing Notifications on Access You can use Amazon Simple Notification Service ( Amazon SNS) or third party applications to set up n otifications on SSH l ogin to your email addre ss or mobile phone86 
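As a minimal sketch of the Amazon SNS option, a small script on the instance can publish a message whenever an interactive SSH session starts, for example by being invoked from /etc/profile.d/ or a pam_exec hook. The topic ARN, script path, and message format below are illustrative assumptions, and the instance role (or a configured credential profile) must allow sns:Publish on the topic:

#!/bin/sh
# /usr/local/bin/notify-ssh-login.sh (example path) - publish an SNS message on SSH login
# Replace the topic ARN with your own; the value below is only a placeholder.
TOPIC_ARN="arn:aws:sns:us-east-1:123456789012:hana-ssh-logins"
aws sns publish --topic-arn "$TOPIC_ARN" \
  --subject "SSH login on $(hostname)" \
  --message "User $(whoami) logged in from ${SSH_CLIENT%% *} at $(date -u)"

An email or SMS subscription on the topic then delivers the alert to the administrators who need it.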
High Availability and Disaster Recovery For details and best practices for h igh availability and disaster recovery of SAP HANA systems running on AWS see High Availability and Disaster Recovery Options for SAP HANA on AWS 87 Conclusion This whitepaper discusse s best practices for the operation of SAP HANA systems on the AWS cloud The best practices provided in this paper will help you efficiently manage and achieve maximum benefit s from running your SAP HANA systems on the AWS C loud For feedback or questions please contact us at saponaws@amazoncom Contributors The following individuals and organizations contributed to this document: • Rahul Kabra Partner Solutions Architect AWS • Somckit Khemmanivanh Partner Solution s Architect AWS • Naresh Pasumarthy Partner Solutions Architect AWS ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 31 Appendix A – Configuring Linux to Recognize Ethernet Devices for M ultiple ENIs Follow these steps to configure the Linux operating system to recognize and name the Ethernet devices associated with the new elastic network interfaces (ENI s) created for logical network separation which was discussed earlier in this paper 1 Use SSH to connect to your SAP HANA host as ec2user and sudo to root 2 Remove the existing udev rule ; for example : hanamaster:# rm f /etc/udev/rulesd/70 persistent netrules Create a new udev rule that writes rules based on MAC address rather than other device attributes This will ensur e that on reboot eth0 is still eth0 eth1 is eth1 and so on For example: hanamaster:# cat <<EOF > /etc/udev/rulesd/75 persistent net generatorrules # Copyright (C) 2012 Amazoncom Inc or its affiliates # All Rights Reserved # # Licensed under the Apache License Version 20 (the "License") # You may not use this file except in compliance with the License # A copy of the License is located at # # http://awsamazoncom/apache20/ # # or in the "license" file accompanying this file This file is # distributed on an "AS IS" BASIS WITHOUT WARRANTIES OR CONDITIONS # OF ANY KIND either express or implied See the License for the ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 32 # specific language governing permissions and limitations under the # License # these rules generate rules for persistent network device naming SUBSYSTEM!="net" GOTO="persistent_net_generator_end" KERNEL!="eth*" GOTO="persistent_net_generator_end" ACTION!="add" GOTO="persistent_net_generator_end" NAME=="?*" GOTO="persistent_net_generator_end" # do not create rule for eth0 ENV{INTERFACE}=="eth0" GOTO="persistent_net_generator_end" # read MAC address ENV{MATCHADDR}="\ $attr{address}" # do not use empty address ENV{MATCHADDR}=="00:00:00:00:00:00" GOTO="persistent_net_generator_end" # discard any interface name not generated by our rules ENV{INTERFACE_NAME}=="?*" ENV{INTERFACE_NAME}="" # default comment ENV{COMMENT}="elastic network interface" # write rule IMPORT{program}="write_net_rules" # rename interface if needed ENV{INTERFACE_NEW}=="?*" NAME="\ $env{INTERFACE_NEW}" LABEL="persistent_net_generator_end" EOF 3 Ensure proper interface properties For example: hanamaster:# cd /etc/sysconfig/network/ hanamaster:# cat <<EOF > /etc/sysconfig/network/ifcfg ethN BOOTPROTO='dhcp4' MTU="9000" REMOTE_IPADDR='' STARTMODE='onboot' LINK_REQUIRED=no LNIK_READY_WAIT=5 EOF ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 33 4 Ensure that you can accommodate up to seven more Ethernet devices/ENIs and restart wicked For example: 
hanamaster:# for dev in eth{17} ; do ln s f ifcfg ethN /etc/sysconfig/network/ifcfg ${dev} done hanamaster:# systemctl restart wicked 5 Create and attach a new ENI to the instance 6 Reboot 7 After reboot modify /etc/iproute2/rt_tables Important: Repeat the following for each ENI that you attach to your instance For example: hanamaster:# cd /etc/iproute2 hanamaster:/etc/iproute2 # echo "2 eth1_rt" >> rt_tables hanamaster:/etc/iproute2 # ip route add default via 172161122 dev eth1 table eth1_rt hanamaster:/etc/iproute2 # ip rule 0: from all lookup local 32766: from all lookup main 32767: from all lookup default hanamaster:/etc/iproute2 # ip rule add from <ENI IP Address> lookup eth1_rt prio 1000 hanamaster:/etc/iproute2 # ip rule 0: from all lookup local 1000: from <ENI IP address> lookup eth1_rt 32766: from all lookup main 32767: from all lookup default Notes ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 34 1 http://docsawsamazoncom/quickstart/latest/sap hana/ or https://s3amazonawscom/quickstart reference/sap/hana/latest/doc/SAP+HANA+Quick+Startpdf 2 http://d0awsstaticcom/enterprise marketing/SAP/SAP HANA onAWS Manual Setup Guidepdf 3 https://helpsapcom/hana/SAP_HANA_Administration_Guide_enpdf 4 http://servicesapcom/instguides 5 http://servicesapcom/notes 6 http://docsawsamazoncom/gettingstarted/latest/awsgsg intro/introhtml 7 http://docsawsamazoncom/AWSEC2/latest/UserGuide/conceptshtml 8 http://awsamazoncom/sap/whitepapers/ 9 http: //d0awsstaticcom/enterprise marketing/SAP/SAP_on_AWS_Implementation_Guidepdf 10 http://d0awsstaticcom/enterprise marketing/SAP/SAP_on_AWS_High _Availability_Guide_v32pdf 11 http://d0awsstaticcom/enterprise marketing/SAP/sap onawsbackup and recovery guide v22pdf 12 https://awsamazoncom/answers/infrastructure management/ec2 scheduler/ 13 https://awsamazoncom/blogs/security/how toautomatically tagamazon ec2resources inresponse toapievents/ 14 http://docsawsamazoncom/AWSEC2/latest/UserGuide/Using_Tagshtml 15 https://awsamazoncom/blogs/aws/new awsresource tagging api/ 16 https://awsamazoncom/cloudwatch/ 17 https://awsamazoncom/marketplace 18 http://docsawsamazoncom/AWSCloudFormation/lat est/UserGuide/Gettin gStartedhtml 19 http://docsawsamazoncom/cli/latest/userguide/cli chap welcomehtml 20 http://docsawsamazoncom/lambda/latest/dg/getting startedhtml 21 https://awsamazoncom/ec2/systems manager/patch manager/ ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 35 22 https://helpsapcom/viewer/2c1988d620e04368aa4103bf26f17727/2000/e nUS/9731208b85fa4c2fa68c529404ffa75ahtml 23 http://docsawsamazoncom/AWSEC2/latest/UserGuide/AMIshtml 24 http://docsawsamazoncom/cli/latest/userguide/cli ec2launchhtml 25 https://awsamazoncom/cloudformation/ 26 https://helpsapcom/viewer/6b944 45c94ae495c83a19646e7c3fd56/2000/e nUS/38ad53e538ad41db9d12d22a6c8f2503html 27 https://helpsapcom/viewer/6b94445c94ae495c83 a19646e7c3fd56/2000/e nUS/c622d640e47e4c0ebca8cbe74ff9550ahtml 28 https://helpsapcom/viewer/6b94445c94ae495c83a19646e7c3fd5 6/2000/e nUS/ea70213a0e114ec29724e4a10b6bb176html 29 https://launchpadsupportsapcom/#/notes/1984882/E 30 https://launchpadsupportsapcom/#/notes/1913302/E 31 https://wwwsusecom/communities/blog/upgrading running demand instances public cloud/ 32 https://awsamazoncom/partners/redhat/faqs/ 33 https://awsamazoncom/about aws/whats new/2016/12/amazon ec2 systems manager now offers patch management/ 34 https://awsamazoncom/ec2/systems manager/patch manager/ 35 http://docsawsamazoncom/systems 
manager/latest/userguide/systems manager patchhtml 36 https://docsawsamaz oncom/quickstart/latest/sap hana/welcomehtml 37 https://helpsapcom/doc/6b94445c94ae495c83a19646e7c3fd56/2001/en US/c622d640e47e4c0ebca8cbe74ff9550ahtml 38 https://launchpadsupportsapcom/#/notes/1984882/E 39 http://d0awsstaticcom/enterprise marketing/SAP/sap onawsbackup and recovery guide v22pdf 40 http://servicesapcom/sap/support/notes/1642148 ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 36 41 http://servicesapcom/sap/support/notes/1821207 42 http://servicesapcom/sap/support/notes/1869119 43 http://servicesapcom/sap/support/notes/1873247 44 http://servicesapcom/sap/support/notes/1651055 45 http://servicesapcom/sap/support/notes/2484177 46 http://docsawsamazoncom/AWSEC2/latest/UserGuide/AMIshtml 47 http://docsawsamazoncom/AWSEC2/latest/UserGuide/creating anami ebshtml 48 http://docsawsamazoncom/AWSEC2/latest/UserGuide/CopyingAMIshtml 49 https://servicesapcom/notes/1703435 50 http://awsamazoncom/s3 / 51 http://awsamazoncom/documentation/s3/ 52 http://awsamazoncom/iam/ 53 http://awsamazoncom/glacier/ 54 http://docsawsamazoncom/AmazonS3/latest/dev/object archivalhtml 55 http://awsamazoncom/cli/ 56 http://docsawsamazoncom/cli/latest/reference/s3/ 57 http://docsawsamazoncom/cli/latest/reference/s3/synchtml 58 http://docsawsamazoncom/cli/latest/reference/s3/lshtml 59 http://se rvicesapcom/sap/support/notes/1651055 60 http://docsawsamazoncom/systems manager/latest/userguide/systems manager accesshtml 61 http://docsawsamazoncom/systems manager/latest/userguide/systems manager managedinstanceshtml#sysman service role 62 http://docsawsamazoncom/cli/latest/reference/s3/cphtml 63 https://helpsapcom/hana/SAP_HANA_Adminis tration_Guide_enpdf 64 http://docsawsamazoncom/AWSEC2/latest/UserGuide/ebs initializehtml ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 37 65 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSOptimizedhtm l 66 https://docsawsamazoncom/AWSEC2/latest/UserGuide/using enihtml 67 http://docsawsamazoncom/AmazonVPC/latest/UserGuide/VPC_SecurityG roupshtml 68 https://helpsapcom/saphelp_hanaplatform/helpdata/en/a9/326f20b39342 a7bc3d08acb8ffc68a/framesethtm 69 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using network securityhtml#creating security group 70 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using enihtml#create_eni 71 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using enihtml#attach_eni_running_stopped 72 https://helpsapcom/saphelp_hanaplatform/helpdata/en/bb/cb76c7fa7f45b 4adb99e60ad6c85ba/framesethtm 73 http://helpsapcom/saphelp_hanaplatform/helpdata/en/9a/cd6482a5154b7 e95ce72e83b04f94d/framesethtm 74 https://helpsapcom/hana/SAP_HANA_Administration_Guide_enpdf 75 http://docsawsamazoncom/AWSEC2/latest/UserGuide/using network securityhtml 76 https://supportsapcom/remote support/helphtml 77 http://awsamazoncom/directconnect/ 78 http://awsamazoncom/security/ 79 https://d0awsstaticcom/whitepapers/compliance/AWS_CIS_Foundations_ Benchmarkpdf 80 http://d0awsstaticcom/whitepapers/Security/AWS%20Security%20Whitep aperpdf ArchivedAmazon Web Services – SAP HANA on AWS Operations Overview Guide Page 38 81 http://d0awsstaticcom/whitepapers/aws security best practicespdf 82 https://servicesapcom/sap/support/notes/1730999 83 https://servicesapcom/sap/support/notes/1731000 84 https://servicesapcom/sap/support/notes/1697613 85 http s://awsamazoncom/cloudtrail/ 86 https://awsamazoncom/sns/ 87 http://d0awsstaticcom/enterprise marketing/SAP/ saphana 
onawshigh availability disaster recovery guidepdf
General
RealTime_Communication_on_AWS
RealTime Communication on AWS Best Practices for Designing Highly Available and Scalable Real Time Communication (RTC) Workloads on AWS February 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Fundamental Components of RTC Architecture 2 Softswitch/PBX 2 Session Border Controller (SBC) 3 PSTN Connectivity 3 Media Gateway (Transcoder) 3 WebRTC and WebRTC gateway 4 High Availability and Scalability on AWS 5 Floating IP Pattern for HA Between Active –Standby Stateful Servers 6 Load Balancing for Scalabili ty and HA with WebRTC and SIP 8 Cross Region DNS Based Load Balancing and Failover 11 Data Durability and HA with Persistent Storage 13 Dynamic Scaling with AWS Lambda Amazon Route 53 and AWS Auto Scaling 14 Highly Available WebRTC with Kinesis Video Streams 14 Highly Available SIP Trunking with Amazo n Chime Voice Connector 15 Best Practices from the Field 15 Create a SIP Overlay 15 Perform Deta iled Monitoring 17 Use DNS for Load Balancing and Floating IPs for Failover 18 Use Multiple Availability Zones 19 Keep Traffic within One Availability Zone and use EC2 Placement Groups 20 Use Enhanced Networking EC2 Instance Types 21 Security Considerations 21 Conclusion 22 Contributors 22 Document Revisions 23 Abstract Today many organizations are looking to reduce cost and attain scalability for realtime voice messaging and multimedia workloads This paper outlines the best practices for managing real time communication workloads on AWS and includes reference architectures to meet these requirements This paper serves as a guide for individuals familiar with real time communication on how to achieve high availability and scalability for these workloads Amazon Web Services RealTime Commun ication on AWS Page 1 Introduction Telecommunication applications using voice video and messaging as channels are a key requirement for many organizations and their end users These realtime communication (RTC) workloads have specific latency and availability requirements that can be met by following relevant design best practices In the past RTC workloads have been deployed in traditional on premises data centers with dedicated resources However due to a mature and burgeoning set of features RTC workloads can be deployed on Amazon Web Services (AWS) despite stringent service level requirements while also benefiting from scalability elasticity and high availability Today several custom ers are using AWS its partners and open source solutions to run RTC workloads with reduced cost faster agility the ability to go global in minutes and rich features from AWS services Customers leverage AWS features such as enhanced networking with a n Elastic Network Adapter (ENA) and the latest generation of Amazon Elastic Compute Cloud (EC2) instance s to benefit from data plane development kit 
(DPDK) single root I/O virtualization (SR IOV) huge pages NVM Express (NVMe) nonuniform memory access (NUMA) support as well as bare metal insta nces to meet RTC workload requirements These Instances offer n etwork bandwidth of up to 100 Gbps and commensurate packets per second delivering increased performance for network intensive applications For scaling Elastic Load Balancing offers Application Load Balancer which offer s WebS ocket support and Network Load Balancer that can handle millions of requests per second For network acceleration AWS Global Accelerator provides static IP addresses that act as a fixed entry point to your application endpoints in AWS It has support for static IP addresses for the load balancer For reduced latency cost and increased bandwidth throughput AWS Direct Connect establishes dedica ted network connection from on premises to AWS Highly available managed SIP trunking is provided by Amazon Chime Voice Connector Amazon Kinesis Video Streams with WebRTC easily stream real time two way media with high availability This pa per includes reference architectures that show how to set up RTC workloads on AWS and best practices to optimize the solutions to meet end user requirements while optimizing for the cloud The evolved packet core (EPC) is out of scope for this white paper but the best practices detailed can be applied to virtual network functions (VNFs) Amazon Web Services RealTime Communication on AWS Page 2 Fundamental Components of RTC Architecture In the telecommunications industry real time communication (RTC) commonly refer s to live media sessions between two endpoints with minimum latency These sessions could be related to: • A voice session between two parties (eg telephone system mobile VoIP) • Instant messaging (eg chatting IRC) • Live video session (eg videoconfer encing telepresence) Each of the preceding solutions has some components in common (eg components that provide authentication authorization and access control transcoding buffering and relay and so on ) and some components unique to the type of medi a transmitted (eg broadcast service messaging server and queues and so on ) This section focuses on defining a voice and video based RTC system and all of the related components illustrated in Figure 1 Figure 1: Essential architectural components for RTC Softswitch /PBX A softswitch or PBX is the brain of a voice telephone system and provides intelligence for establishing maintaining and routing of a voice call within or outside the enterprise Amazon Web Services RealTime Communication on AWS Page 3 by using different components All of the subscribers of the enterprise are required to register with the softswitch to receive or make a call An important functionality of the softswitch is to keep track of each subscriber and how to reach them by using the other components within the voice network Session Border Controller (SBC) A session border controller (SBC) sits at the edge of a voice network and keeps track of all incoming and outgoing traffic (both control and data planes ) One of the key responsibilit ies of an SBC is to protect the voice system from malicious use The SBC can be used to interconnect with session initiation protocol ( SIP) trunks for external connectivity Some SBCs also provide transcoding capabilities for converting CODECS from one format to another Finally most SBCs also provide NAT Traversal capabilities which aids in ensuring calls are established even across firewalled networks PSTN Connectivity Voice o ver IP (VoIP) 
solutions use PSTN Gateways and SIP Trunks to connect with legacy PSTN network s PSTN Gateway The p ublic switched telephone network (PSTN ) Gateway convert s the signaling (between SIP and SS7) and media ( between RTP and time division multiplexing [TDM ] using CODEC transcoding) PSTN Gateways always sit at the edge close to the PSTN network SIP Trunk In a SIP Trunk the enterprise does not terminate its calls onto a TDM (SS7 based) network but rather the flows between enterprise and te lco remain over IP Most of the SIP Trunks are established by using SBCs The enterprise must agree on the predefined security rules from telco such as allowing a certain range of IP addresses ports and so on Media Gateway ( Transcoder) A typical voice solution allows various types of CODECs Some of the common CODECs are G711 µ law for North America G711 A law for outside of North America G729 and G 722 When two devices that are using two different CODECs communicate with each other a media server translates the CODEC flow between the Amazon Web Services RealTime Communication on AWS Page 4 devices In other words a media gateway processes media and ensures that the end devices are able to communicate with each other WebRTC and WebRTC g ateway Web realtime communication (WebRTC ) allows you to establish a call from a web browser or request resources from the backend server by using API The technology is designed with cloud technology in mind and therefore provide s various API s which could be used to establish a call Since not all of the voice solution s (including SIP) support these API s the WebRTC gateway is required to translate API call s into SIP messages and vice versa Figure 2 shows a design pattern for a highly available WebRTC architecture The incoming traffic from WebRTC clients is balanced by an Amazon application load balancer with WebRTC running on EC2 instances that are part of an Auto Scal ing Group Figure 2: A basic topology of an RTC system for voice Another design pattern for SIP and RTP traffic is to use pairs of SBCs on Amazon EC2 in active passive mode across Availability Zones (Figure 3) Here an Elastic IP address can be dynamically moved between instances upon failure where DNS can not be used Amazon Web Services RealTime Communication on AWS Page 5 Figure 3: RTC architecture using Amazon EC2 in a VPC High Availability and Scalability on AWS Most providers of real time communications align with service levels that provide availability from 999% to 99999% Depending on the degree of high availability (HA) that you want you must take increasingly sophisticated measures along the full lifecycle of the application We re commend following these guidelines to achieve a robust degree of high availability : • Design the system to have no single point of failure Use automated monitoring failure detection and f ailover mechanisms for both stateless and stateful components Amazon Web Services RealTime Communication on AWS Page 6 o Single points of failure (SPOF) are commonly eliminated with an N+1 or 2N redundancy configuration where N+1 is achieved via load balancing among active–active nodes and 2N is achieved by a p air of nodes in active– standby configuration o AWS has several methods for achieving HA through both approaches such as through a scalable load balanced cluster or assuming an active–standby pair • Correctly instrument and test system availability • Prep are operating procedures for manual mechanisms to respond to mitigate and recover from the failure This section focus es on how 
to achieve no single point of failure using capabilities available on AWS Specifically this section describe s a subset of co re AWS capabilities and design patterns that allow you to build highly available real time communication applications on the platform Floating IP Pattern for HA Between Active–Standby Stateful Servers The Floating IP design pattern is a well known mechani sm to achieve automatic failover between an active and standby pair of hardware nodes (media servers) A static secondary virtual IP address is assigned to the active node Continuous monitoring between the active and standby node s detect s failure I f the active node fails the monitoring script assigns the virtual IP to the ready standby node and the standby node takes over the primary active function In this way the virtual IP floats between the active and standby node Applicability in RTC solutions It is not always possible to have multiple active instances of the same component in service such as an active –active cluster of N nodes An active –standby configuration provides the best mechanism for HA For example the stateful components in an RTC solution such as the media server or conferencing server or even an SBC or database server are well suited for an active –standby setup An SBC or media server has several long running sessions or channels active at a given time and in the case of the SBC active instance failing the endpoints can reconnect to the standby node without any client side configuration due to the floating IP Amazon Web Services RealTime Communication on AWS Page 7 Implementation on AWS You can implement this pattern on AWS using core capabilities in Amazon Elastic Compute Cloud ( Amazon EC2) Amazon EC2 API Elastic IP addresses and support on Amazon EC2 for secondary private IP addresses 1 Launch two EC2 instances to assume the role s of primary and secondary nodes where the primary is assumed to be in active state by default 2 Assign an additional secondary private IP address to the primary EC2 instance 3 An Elastic IP address which is similar to a virtual IP (VIP) is associated with the secondary private address This secondary private address is the address that is used by exte rnal endpoints to access the application 4 Some OS configuration is required to make the secondary IP address added as an alias to the primary network interface 5 The application must bind to this Elastic IP address In the case of Asterisk software you can configure the binding through advanced Asterisk SIP settings 6 Run a monitoring script —custom KeepAlive on Linux Corosync and so on —on each node to monitor the state of the peer node In the event that the current active node fails the peer detects th is failure and invokes the Amazon EC2 API to reassign the secondary private IP address to itself 7 Therefore the application that was listening on the VIP associated with the secondary private IP address becomes available to endpoints via the standby node Figure 4: Failover between stateful EC2 instances using Elastic IP address Amazon Web Services RealTime Communication on AWS Page 8 Benefits This approach is a reliable low budget solution that protects against failures at the EC2 instance infrastructure or application level Limitations and extensibility This design pattern is typically limited to within a single Availability Zone It can be implemented across two Availability Zones but with a variation In this case the Floating Elastic IP address is reassociated between active and standby node in different Availability 
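To make the pattern concrete, the failover action that the monitoring script performs can be as simple as a single EC2 API call. The following sketch assumes the floating address is a secondary private IP, that the standby node's instance role allows ec2:AssignPrivateIpAddresses, and that the addresses, interface ID, and health check shown are placeholders; production deployments typically rely on Keepalived, Corosync, or similar tooling rather than a bare ping loop:

#!/bin/sh
# Run on the standby node: if the active peer stops responding, take over the floating IP.
VIP=10.0.1.100                    # secondary private IP acting as the floating address (example)
MY_ENI=eni-0abc1234567890def      # network interface ID of this standby node (example)
PEER=10.0.1.11                    # private IP of the active peer being monitored (example)

if ! ping -c 3 -W 2 "$PEER" > /dev/null 2>&1; then
  # --allow-reassignment moves the address even though it is still assigned to the failed peer
  aws ec2 assign-private-ip-addresses \
    --network-interface-id "$MY_ENI" \
    --private-ip-addresses "$VIP" \
    --allow-reassignment
  # add the address as an alias on the local interface so the application can start serving it
  ip addr add "$VIP/24" dev eth0 2>/dev/null || true
fi

The implementation steps that follow describe the same flow, including the Elastic IP association and the operating system configuration, in more detail.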
Zone s via the reassociate elastic IP address API available In the failover implementation shown in Figure 4 calls in progress are dropped and endpoints must reconne ct It is possible to extend this implementation with replication of underlying session data to provide seamless failover of sessions or media continuity as well Load Balancing for Scalability and HA with WebRTC and SIP Load balancing a cluster of active instances based on predefined rules such as round robin affinity or latency and so on is a design pattern widely popularized by the stateless nature of HTTP request s In fact load balancing is a viable option in case of many RTC application components The load balancer acts as the reverse proxy or entry point for requests to the desired application which itself is configured to run in multiple active nodes simulta neously At any given point in time the load balancer directs a user request to one of the active nodes in the defined cluster Load balancers perform a health check against the nodes in their target cluster and do not send an incoming request to a node t hat fails the health check Therefore a fundamental degree of high availability is achieved by load balancing Also because a load balance r performs active and passive health check s against all cluster nodes in sub second intervals the time for failover is near instantaneous The decision on which node to direct is based on system rules defined in the load balancer including: • Round robin • Session or IP affinity which ensures that multiple requests within a session or from the same IP are sent to the same node in the cluster Amazon Web Services RealTime Communication on AWS Page 9 • Latency based • Load based Applicability in RTC Architectures The WebRTC protocol makes it possible for WebRTC Gateways to be easily load balanced via an HTTP based load balancer such as Elastic Load Balanc ing Application Load Bala ncer or Network Load Balancer With most SIP implementations relying on transport over both TCP and UDP network or connection level load balancing with support for both TCP and UDP based traffic is needed Load Balancing on AWS for WebRTC using Applicat ion Load Balancer and Auto Scaling In the case of WebRTC based communications Elastic Load Balanc ing provides a fully managed highly available and scalable load balancer to serve as the entry point for requests which are then directed to a target cluster of EC2 instances associated with Elastic Load Balancing Also because WebRTC requests are stateless you can use Amazon EC2 Auto Scaling to provide fully automated and controllable scalability elasticity and high availability The Application Load Balancer provides a fully managed load balancing service that is highly available using multiple Availability Zones and scalable This supports the load balancing of WebSoc ket requests that handle the signaling for WebRTC applications and bidirectional communication between the client and server using a long running TCP connection The Application Load Balancer also supports content based routing and sticky sessions routing requests from the same client to the same target using load balancer generated cookies If you enable sticky sessions the same target receives the request and can use the cookie to recover the session context Figure 5 shows the target topology Amazon Web Services RealTime Communication on AWS Page 10 Figure 5: WebRTC scalability and high availability architecture Implementation for SIP using Network Load Balancer or AWS Marketplace Product In the case of 
SIP based communications the connections are made over TCP or UDP with the majority of RTC applications using UDP If SIP/TCP is the s ignal protocol of choice then it is feasible to use the Network Load Balancer for fully managed highly available scalable and performan ce load balancing A Network Load Balancer operates at the connection level (Layer 4) routing connections to targets such as Amazon EC2 instances containers and IP addresses based on IP protoco l data Ideal for TCP or UDP traffic load balancing network load balanc ing is capable of handling millions of requests per second while maintaining ultra low latencies It is integrated with other popular AWS services such as AWS Auto Scaling Amazon Elastic Container Service ( Amazon ECS) Amazon Elastic Kubernetes Service (Amazon EKS) and A WS CloudFormation If SIP connections are initiated another option is to use AWS Marketplace commercial offtheshelf software (COTS) The AWS Marketplace offers many products that can handle UDP and other types of layer 4 connection load balancing These COTS typically include support for high availability and are commonly integrated with features Amazon Web Services RealTime Communication on AWS Page 11 such as AWS Auto Scaling to further enhance availability and scalabil ity Figure 6 shows the target topology: Figure 6: SIPbased RTC s calability with AWS Marketplace product Cross Region DNS Based Load Balancing and Failover Amazon Route 53 provi des a global DNS service that can be used as a public or private endpoint for RTC clients to register and connect with media applications With Amazon Route 53 DNS health checks can be configured to route traffic to healthy endpoints or to independently m onitor the health of your application The Amazon Route 53 Traffic Flow feature makes it easy for you to manage traffic globally through a variety of routing types including latency based routing geo DNS geoproximity and weighted round robin—all of whi ch can be combined with DNS Failover to enable a variety of low latency fault tolerant architectures The Amazon Route 53 Traffic Flow simple visual editor allows you to manage how your end users are routed to your application’s endpoints —whether in a sin gle AWS Region or distributed around the globe Amazon Web Services RealTime Communication on AWS Page 12 In the case of global deployments the latency based routing policy in Route 53 is especially useful to direct customers to the nearest point of presence for a media server to improve the quality of service associated with real time media exchanges Note that to enforce a failover to a new DNS address clien t caches must be flushed Also DNS changes may have a lag as they are propagated across global DNS servers You can manage the refresh interval for DNS lookups with t he Time to Live attribute This attribute is configurable at the time of setting up DNS p olicies To reach global users quickly or to meet the requirements of using a single public IP AWS Global Accelerator can also be used for cross region failover AWS Global Accelerator is a networking service that improves availability and performance for applications with both local and global reach AWS Global Accelerator provides static IP addresses that act as a fixed entry point to your application endpoints such as your Application Load Balancers Network Load Balancers or Amazon EC2 instances in single or multiple AWS Regions It uses the AWS global network to optimize the path from your users to your applications improving performance such as the latency of 
your TCP and UDP traffic AWS Global Accelerator continually monitors the health of your application endpoints and automatically redirects traffic to the nearest healthy endpoints in the event of current endpoints turn ing unhealthy For additional security requirements Accelerated Site toSite VPN uses AWS Global Accelerator to improve the performance of VPN connections by intelligently routing traffic through the AWS Global Network and AWS edge locations Amazon Web Services RealTime Communication on AWS Page 13 Figure 7: Interregion high availability design using AWS Global Accelerator or Amazon Route 53 Data Durability and HA with Persistent Storage Most RTC applications rely on persistent storage to store and access data for authentication authorization accounting (session data call detail records etc) operational monitoring and logging In a traditional data center ensuring high availability and durability for the persistent storage components (databases file systems and so on) typically requires heavy lifting via the setup of a SAN RAID design and processes for backup restore and failo ver processing The AWS Cloud greatly simplifies and enhances traditional data center practices around data durability and availability For object storage and file storage AWS services like Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Fil e System (Amazon EFS) provide managed high availability and scalability Amazon S3 has a data durability of 11 nines For transactional data storage customers have the option to take advantage of the fully managed Amazon Relational Database Service (Amazo n RDS) that supports Amazon Aurora PostgreSQL MySQL MariaDB Oracle and Microsoft SQL Server with high availability deployments For the registrar function subscriber profile or accounting Amazon Web Services RealTime Communication on AWS Page 14 records storage (eg CDRs) the Amazon RDS provides a fault tolerant highly available and scalable option Dynamic Scaling with AWS Lambda Amazon Route 53 and AWS Auto Scaling AWS allows the chaining of features and the ability to incorporate custom serverless functions as a service based on infrastructure even ts One such design pattern that has many versatile uses in RTC applications is the combination of auto matic scaling lifecycle hooks with Amazon Cloud Watch Events Amazon Route 53 and AWS Lambda functions AWS Lambda functions can embed any action or logic Figure 8 demonstrate s how these features chained together can enhance system reliability and scalability with automation Figure 8: Auto matic scaling with dynamic u pdates to Amazon Route 53 Highly Available WebRTC with Kinesis Video Streams Amazon Kinesis Video Streams offers realtime media streaming via WebRTC allowing users to c apture process and store media streams for playback analytics and machine learning These streams are highly available scalable and compliant with WebRTC standards Amazon Kinesis Video Streams include a WebRTC signaling Amazon Web Services RealTime Communication on AWS Page 15 endpoint for fast peer discovery and secure connection establi shment It includes managed Session Traversal Utilities for NAT (STUN) and Traversal Using Relays around NAT (TURN) end points for real time exchange of media between peers It also includes a free open source SDK that directly integrates with camera firmw are to enable secure communication with Kinesis Video Streams end points allowing for peer discovery and media streaming Finally it provides client libraries for Android iOS and JavaScript that allow 
WebRTC compliant mobile and web players to securely discover and connect with a camera device for media streaming and two way communication Highly Available SIP Trunking with Amazon Chime Voice Connector Amazon Chime Voice Connector delivers a pay asyougo SIP trunking service that enables companies to m ake and/or receive secure and inexpensive phone calls with their phone systems Amazon Chime Voice Connector is a low cost alternative to service provider SIP trunks or Integrated Services Digital Network (ISDN) Primary Rate Interfaces (PRIs) Customers ha ve the option to enable inbound calling outbound calling or both The service leverages the AWS network to deliver a highly available calling experience across multiple AWS Regions You can stream audio from SIP trunking telephone calls or forwarded SIP based media recording (SIPREC) feeds to Amazon Kinesis Video Streams to gain insights from business calls in real time You can quickly build applications for audio analytics through integration with Amazon Transcribe and other common machine learning lib raries Best Practices from the Field This section aims to summarize the best practices that have been implemented by some of largest and most successful AWS customers that run large real time Session Initiation Protocol (SIP) workloads AWS customers want ing to run their own SIP infrastructure in the public cloud would find these best practices valuable as they can help increase the reliability and resiliency of the system in case of different kinds of failures Although some of these best practices are SI P specific most of them are applicable to any real time communication application running on AWS Create a SIP Overlay AWS has a robust scalable and redundant network backbone that provides connectivity between different Regions When a network event such as a fiber cut degrades an Amazon Web Services RealTime Communication on AWS Page 16 AWS backbone link traffic is quickly failed over to redundant paths using network level routing protocols such as BGP This network level traffic engineering is a black box to AWS customers and most do not even notice these failover events However customers that run real time workloads such as voice high quality video and low latency messaging do sometimes notice these events So how can an AWS customer implement their own traffic engineering on top of what is provide d by AWS at the network level? 
The solution is deploying SIP infrastructure at many different AWS Regions As part of the call control features SIP also provides the ability to route calls through specific SIP proxies Figure 9: Using SIP routing to override network routing In Figure 9 SIP infrastructure (represented by green dots) is running in all four US Regions The blue lines represent a fictional depiction of the AWS backbone If no SIP routing is implemented a call originating in the US west coast and destined for the US east coast goes over the backbone link that is directly connecting the Oregon and Virginia regions The diagram shows how a customer might override the network level routing and make the same call between Oregon and Virginia route d through California using SIP routing This type of SIP traffic engineering can be implemented using SIP proxies and media gateways based on network metrics such as SIP retransmissions and customer specific business preferences Amazon Web Services RealTime Communication on AWS Page 17 Perform Detailed Monitoring End users of real time voice and video applications expect the same level of performance as they achieve with traditional telephony services So when they experience issues with an application it ends up hurting the provider’s reputation To be proactive rather than reactive it is imperative that detailed monitoring be deployed at every part of the system that serves end users Figure 10: Using SIPp to Monitor VoIP Infrastructure Many open source tools such as iPerf or SIPp and VOIPMonitor are available that can be used to monitor SIP/RTP traffic In the preceding example nodes running SIPp in client and server modes are measuring SIP metrics such as Successful Calls and SIP Retransmits between all four US AWS Regions These metrics can then be exported into Amazon CloudWatch using a custom script Using CloudWatch customers can create alarms on these custom metrics based on a certain threshold value Automatic or manual remediation acti ons can then be taken based on the state of these CloudWatch alarms For customers not wanting to allocate engineering resources needed to develop and maintain a custom monitoring system many good VoIP monitoring solutions are Amazon Web Services RealTime Communicat ion on AWS Page 18 available on the market such as ThousandEyes An example of a remediation action is changing the SIP routing based on increased SIP retransmits Use DNS for Load Balancing and Floating IPs for Failover IP telephony clients that support DNS SRV capability can efficiently use the redundancy built into the infrastructure by load balancing clients to different SBCs/PBXs Figure 11: Using DNS SRV records to load balance SIP clients Figure 11 shows how customers can use the SRV records to load balance SIP traffic Any IP telephony client that supports the SRV standard will look for the sip_<transport protocol> prefix in an SRV type DNS record In the example the answer section from DNS conta ins both of the PBXs running in different AWS Availability Zones However in addition to the endpoint URIs the SRV record contains three additional pieces of information: • The first number is the Priority (1 in the example above ) A lower priority is preferred over higher • The second number is the Weight (10 in the example above ) Amazon Web Services RealTime Communication on AWS Page 19 • And the third number is the Port to be used (5060 ) Since the priority is the same (1) for both PBXs servers the clients use the w eight to load balance between the two PBXs In this case since the 
DNS can be a good solution for client load balancing, but what about implementing failover by changing or updating DNS 'A' records? This method is discouraged because of inconsistent DNS caching behavior in clients and intermediate nodes. A better approach for intra-AZ failover between a cluster of SIP nodes is EC2 IP reassignment, where an impaired host's IP address is instantly reassigned to a healthy host by using the EC2 API. Paired with a detailed monitoring and health check solution, IP reassignment of a failed node ensures that traffic is moved to a healthy host in a timely manner that minimizes end-user disruption.

Use Multiple Availability Zones

Each AWS Region is subdivided into separate Availability Zones. Each Availability Zone has its own power, cooling, and network connectivity, and thus forms an isolated failure domain. Within the constructs of AWS, customers are always encouraged to run their workloads in more than one Availability Zone. This ensures that customer applications can withstand even a complete Availability Zone failure, a very rare event in itself. This recommendation stands for real-time SIP infrastructure as well.

Figure 12: Handling Availability Zone failure

Let's assume that a catastrophic event (such as a Category 5 hurricane) causes a complete Availability Zone outage in the us-east-1 Region. With the infrastructure running as shown in the diagram, all SIP clients that were originally registered with the nodes in the failed Availability Zone should re-register with the SIP nodes running in Availability Zone #2. (Test this behavior with your SIP clients and phones to make sure it is supported.) Although the active SIP calls at the time of the Availability Zone outage are lost, any new calls are routed through Availability Zone #2.

To summarize, DNS SRV records should point the client to multiple 'A' records, one in each Availability Zone. Each of those 'A' records should in turn point to multiple IP addresses of SBCs/PBXs in that Availability Zone, providing both intra- and inter-AZ resiliency. Both intra- and inter-AZ failover can be implemented by using IP reassignment if the IPs are public. Private IPs, however, cannot be reassigned across Availability Zones. If a customer is using private IP addressing, they have to rely on the SIP clients re-registering with the backup SBC/PBX for inter-AZ failover.
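The IP reassignment failover described above can be scripted against the EC2 API. The following is a minimal sketch that moves a public Elastic IP from an impaired SIP node to a healthy standby; the IP address and instance ID are placeholders, and the health check logic that decides when to fail over is assumed to run elsewhere (for example, driven by the SIPp metrics published to CloudWatch).

```python
import boto3

ec2 = boto3.client("ec2")

def fail_over_eip(public_ip: str, healthy_instance_id: str) -> None:
    """Move the Elastic IP of an impaired SIP node to a healthy standby."""
    address = ec2.describe_addresses(PublicIps=[public_ip])["Addresses"][0]
    # Re-associating a VPC Elastic IP with another instance detaches it from
    # the impaired host; AllowReassociation makes that intent explicit.
    ec2.associate_address(
        AllocationId=address["AllocationId"],
        InstanceId=healthy_instance_id,
        AllowReassociation=True,
    )

# Example: triggered by a CloudWatch alarm on SIP retransmit counts.
# fail_over_eip("198.51.100.10", "i-0abc1234def567890")
```

Because the Elastic IP moves rather than a DNS record, clients keep resolving the same address and no DNS cache needs to expire.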
Keep Traffic Within One Availability Zone and Use EC2 Placement Groups

Also known as Availability Zone affinity, this best practice also applies to the rare event of a complete Availability Zone failure. It is recommended that you eliminate cross-AZ traffic, so that any SIP or RTP traffic that enters one Availability Zone remains in that Availability Zone until it exits the Region.

Figure 13: Availability Zone affinity (at most 50% of active calls are lost)

Figure 13 shows a simplified architecture that uses Availability Zone affinity. The comparative advantage of this approach becomes clear if one accounts for the effects of a complete Availability Zone outage. As depicted in the diagram, if Availability Zone #2 is lost, at most 50% of active calls are affected (assuming equal load balancing between Availability Zones). Had Availability Zone affinity not been implemented, some calls would flow between Availability Zones in one Region, and a failure would most likely affect more than 50% of active calls. Furthermore, to minimize latency, we also recommend that you consider using EC2 placement groups within each Availability Zone. Instances launched within the same EC2 placement group have higher bandwidth and lower latency, because EC2 ensures network proximity of these instances relative to each other.

Use Enhanced Networking EC2 Instance Types

Choosing the right instance type on Amazon EC2 ensures system reliability as well as efficient use of infrastructure. EC2 provides a wide selection of instance types optimized for different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity, and give you the flexibility to choose the appropriate mix of resources for your applications. Enhanced networking instance types ensure that the SIP workloads running on them have access to consistent bandwidth and comparatively lower aggregate latency. A recent addition to Amazon EC2 is the Elastic Network Adapter (ENA), which provides up to 100 Gbps of bandwidth. The latest catalog of EC2 instance types and associated features can be found on the EC2 instance types page. For most customers, the latest generation of compute-optimized instances should provide the best value for the cost. For example, the C5n supports the Elastic Network Adapter with bandwidth up to 100 Gbps and millions of packets per second (PPS). Most real-time applications would also benefit from using the Data Plane Development Kit (DPDK), which can greatly boost network packet processing. However, it is always a best practice to benchmark the various EC2 instance types against your requirements to see which instance type works best for you. Benchmarking also enables you to find other configuration parameters, such as the maximum number of calls a certain instance type can process at a time.

Security Considerations

RTC application components typically run directly on internet-facing Amazon EC2 instances and, in addition to TCP flows, use protocols like UDP and SIP. In these cases, AWS Shield Standard protects Amazon EC2 instances from common infrastructure-layer (Layer 3 and 4) DDoS attacks, such as UDP reflection, DNS reflection, NTP reflection, and SSDP reflection attacks. AWS Shield Standard uses techniques like priority-based traffic shaping that are automatically engaged when a well-defined DDoS attack signature is detected. AWS also provides advanced protection against large and sophisticated DDoS attacks for these applications through AWS Shield Advanced on Elastic IP addresses. AWS Shield Advanced provides enhanced DDoS detection that automatically detects the type of AWS resource and size of EC2 instance and applies appropriate predefined mitigations, with protections against SYN and UDP floods. With AWS Shield Advanced, customers can also create their own custom mitigation profiles by engaging the 24x7 AWS DDoS Response Team (DRT). AWS Shield Advanced also ensures that, during a DDoS attack, all of your Amazon VPC network access control lists (ACLs) are automatically enforced at the border of the AWS network, providing you with access to additional bandwidth and scrubbing capacity to mitigate large volumetric DDoS attacks.
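The Elastic IP addresses used by internet-facing SIP nodes can be registered with Shield Advanced programmatically. The following is a minimal sketch, assuming the account already has an active Shield Advanced subscription; the public IP is a placeholder, and the ARN shown uses the Elastic IP allocation format.

```python
import boto3

sts = boto3.client("sts")
ec2 = boto3.client("ec2")
shield = boto3.client("shield")

account_id = sts.get_caller_identity()["Account"]
region = ec2.meta.region_name

# Look up the allocation ID behind the SBC's public Elastic IP (placeholder).
address = ec2.describe_addresses(PublicIps=["198.51.100.10"])["Addresses"][0]

# Register the Elastic IP with Shield Advanced. The call succeeds only when
# the account has an active Shield Advanced subscription.
shield.create_protection(
    Name="sbc-eip-ddos-protection",
    ResourceArn=(
        f"arn:aws:ec2:{region}:{account_id}:"
        f"eip-allocation/{address['AllocationId']}"
    ),
)
```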
Conclusion

Real-time communication (RTC) workloads can be deployed on Amazon Web Services (AWS) to attain scalability, elasticity, and high availability while meeting the key requirements. Today, several customers are using AWS, its partners, and open-source solutions to run RTC workloads with reduced cost, faster agility, and a reduced global footprint. The reference architectures and best practices provided in this whitepaper can help customers successfully set up RTC workloads on AWS and optimize their solutions to meet end-user requirements while optimizing for the cloud.

Contributors

The following individuals and organizations contributed to this document:
• Ahmad Khan, Senior Solutions Architect, Amazon Web Services
• Tipu Qureshi, Principal Engineer, AWS Support, Amazon Web Services
• Hasan Khan, Senior Technical Account Manager, Amazon Web Services
• Shoma Chakravarty, WW Technical Leader, Telecom, Amazon Web Services

Document Revisions

Date | Description
February 2020 | Updated for latest services and features
October 2018 | First publication
General
Criminal_Justice_Information_Service_Compliance_on_AWS
ArchivedCriminal J ustice Information Service Compliance on AWS (This document is part of the CJIS Workbook package which also includes CJIS Security Policy Requirements CJIS Security Policy Template and CJIS Security Policy Workbook ) March 2017 This paper has been archived For the latest compliance content see https://awsamazoncom/compliance/resources/ Archived © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessmen t of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitme nts conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS a nd its customers Archived Contents Introduction 1 What is Criminal Justice Information? 1 What is the CJIS Security Policy 2 CJIS Security Addendums (Agreements) 2 AWS Approach on CJIS 3 CJIS and relationship to FedRAMP 3 AWS Shared Responsibility Model 4 Service Categories 4 AWS Regions Availability Zones and Endpoints 6 Security & Compliance OF the Cloud 7 Security & Compliance IN the Cloud 8 Creating a CJIS Environment on AWS 9 Auditing and Accountability 10 Identification and Authentication 11 Configuration Management 12 Media Protection & Information Integrity 13 System and Communication Protection and Information Integrity 14 Conclusion 15 Further Reading 16 Document Revisions 17 Archived Abstract There is a long and successful track record of AWS customers using the AWS cloud for a wide range of sensitive federal and state government workloads including Criminal Justice Information (CJI) data Law enforcement customers (and partners who manage CJI) are taking advantage of AWS services to dramatically improve the security and protection of CJI data using the advanced security services and features of AWS such as a ctivity logging ( AWS CloudTrail ) encryption of data in motion and at rest (Amazon S3’s Server Side Encryption with the option to bring your own key) comprehensive key management and protection ( AWS Key Management Service and AWS CloudHSM ) along with integrated permission management (IAM federated identity management multi factor authentication) To enable this AWS complies with Criminal Justice Information Services Division (CJIS) Security Policy requirements where applicable such as providing states with fingerprint cards for GovCloud administrators and signing CJIS security addendum agreements with our customers ArchivedAmazon Web Services – CJIS Compliance on AWS Page 1 Introduction Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability providing the tools that enable customers to run a wide range of applications Because AWS designed their cloud implementation with security in mind you can use AWS services to satisfy a wide range of regulatory requirements including the Criminal Justice Information Services (CJIS) Security Policy The CJIS Security Policy provides Criminal Justice Agencies (CJA) and Noncriminal Justice 
Agencies (NCJA) with a minimum set of security requirements for access to FBI CJIS systems and information for the protection and saf eguarding of CJI The essential premise of the CJIS Security Policy is to provide the appropriate controls to protect CJI from creation through dissemination whether at rest or in transit This minimum standard of security requirements ensures continuity of information protection What is Criminal Justice Information? Criminal Justice Information (CJI) refers to the FBI CJIS provided data necessary for law enforcement agencies to perform their mission and enforce the laws such as biometric identity his tory person organization property and case/incident history data CJI also refers to data necessary for civil agencies to perform their mission including data used to make hiring decisions CJIS Security Policy 52 A 3 defines CJI as: Criminal Justic e Information is the abstract term used to refer to all of the FBI CJIS provided data necessary for law enforcement agencies to perform their mission and enforce the laws including but not limited to: biometric identity history person organization property and case/incident history data In addition CJI refers to the FBI CJIS provided data necessary for civil agencies to perform their mission; including but not limited to data used to make hiring decisions — CJIS Security Policy 52 A 3 Law enforcement must be able to access CJI wherever and whenever is necessary in a timely and secure manner in order to reduce and stop crime ArchivedAmazon Web Services – CJIS Compliance on AWS Page 2 What is the CJIS Security Policy The intent of the CJIS Security Policy is to ensure the protection of the CJI until the information is 1) released to the public via authorized dissemination (eg within a court system presented in crime reports data or released in the interest of public safety) and 2) purged or destroyed in accordance with applicable record retention rules The Criminal Justice Information Services Division (CJIS) is a division of the United States Federal Bureau of Investigation (FBI) and is responsible for publishing the Criminal Justice Information Services (CJIS) Security Policy which is currently on version 55 The CJIS Security Policy outlines a minimum set of security requirements that create security controls for managing and maintaining Criminal Justice Information (CJI) data The CJIS Advisory Policy Board (APB) manages the policy with national oversight from the CJIS division of the FBI There is no centralized adjudication body for determining what is or isn’t compliant with the Security Policy in the way that FedRAMP has standardized security assessments across the federal government That means vendors/CS Ps wanting to provide CJIS compliant solutions to multiple law enforcement agencies must gain formal CJIS authorizations from city county or state level authority CJIS Security Addendums (Agreements) Unlike many of the compliance frameworks that AWS supports there is no central CJIS authorization body no accredited pool of independent assessors nor a standardized assessment approach to determining whether a particular solution is considered "CJIS compliant" Simply put a standardized "CJIS compliant” solution which works across all law enforcement agencies does not exist It is often falsely misunderstood and miscommunicated that a cloud service provider can be “CJIS certified” It is imperative to understand that delivering a CJIS compliant solution relies on a Shared Responsibility Model between the cloud service 
provider and the CJA Each law enforcement organization granting CJIS authorizations interprets solutions according to their own risk acceptance standard of what can be construed as compliant within the CJIS requirements Authorizations from one state do not necessarily find reciprocity within another state (or even necessarily ArchivedAmazon Web Services – CJIS Compliance on AWS Page 3 within the same state) Providers must submit solutions for review with each agency authorizing official(s) possibly to include duplicate fingerprint and background checks and other state/jurisdiction specific requirements Each authorization is an agreement with that particular organization; something that must be repeated locally at each law enforcement agency Thu s be wary of vendors that may represent themselves as having a nationally recognized or 50 state compliant CJIS service AWS Approach on CJIS AWS has evaluated the 13 Policy Areas along with the 131 security requirements and has determined that 10 controls can be directly inherited from AWS both AWS and the CJIS customer share 78 and 43 are customer specific controls AWS has documented these requirements with a detailed workb ook which can be downloaded at CJIS Security Policy Workbook The AWS CJIS Security Policy Workbook outlines the shared responsibility between AWS and the CJIS customer on how AWS directly supports the requirements within our FedRAMP accreditation (Note: the CJIS Advisory Policy Board (APB) also has mapping for CJIS to NIST 800 53rev4 requirements which are the base controls for Federal Risk and Authorization Management Program (FedRAMP) dated 6/1/2016) This document and our approach h as been reviewed by the CJIS APB subcommittee chairmen partners in the CJIS space with favorable support on the efficacy of our workbook and approach CJIS and relationship to FedRAMP All Federal Agencies including Criminal Justice Agencies (CJA’s) may leverage the AWS package completed as part of the Federal Risk and Management Program (FedRAMP) FedRAMP is a government wide program that provides a standardized approach to security assessment authorization and continuous monitoring for cloud service providers (CSP’s ) This approach utilizes a “do once use many times” model to ensure cloud based services have adequate information security eliminate duplication of effort reduce risk management costs and accelerate cloud adoption FedRAMP conforms to the National Institute of Science & Technology (NIST) 800 Series Publications to verify that ArchivedAmazon Web Services – CJIS Compliance on AWS Page 4 all authorizations are compliant with the Federal Information Security Management Act (FISMA) The CJIS Security Policy integrates presidential directives federal laws FBI directives the criminal justice community’s APB decisions along with nationally recognized guidance from the National Institute of Standards and Technology (NIST) and the National Crime Prevention and Privacy Compact Council (Compact Council ) AWS Shared Responsibility Model AWS offers a variety of different infrastructure and platform services For the purpose of understanding security and shared responsibility of these AWS services consider the following three main categories: • Infrastructure • Platform • Software Each category comes with a slightly different security ownership model based on how you interact and access the functionality The main focus of this document the CJIS Security Policy Template document the CJIS Security Policy Requirements document and the CJIS Security Policy 
Workbook is on the Infrastructure services The other categories are highlighted for awareness and can also be addressed by AWS services as outlined in the following sections Service Categories Infrastructure Services This category includes compute services such as Amazon EC2 and related services such as Amazon Elastic Block Store (Amazon EBS) AWS Auto Scaling and Amazon Virtual Private Cloud (Amazon VPC) With these services you can architect and build a cloud infrastructure using technologies similar to and largely compatible with on premises solutions You control the operating ArchivedAmazon Web Services – CJIS Compliance on AWS Page 5 system and you configure and operate any identity management system that provides access to the user layer of the virtualization stack Platform as a Service Services in this category typically run on separate Amazon EC2 or other infrastructure instances but sometimes you don’t manage the operating system or the platform layer AWS provides service for these application “c ontainers” You are responsible for setting up and managing network controls such as firewall rules and the underlying platform – eg level identity and access management separately from Identity and Access Management ( IAM ) Examples of container servic es include Amazon Relational Database Services ArchivedAmazon Web Services – CJIS Compliance on AWS Page 6 (Amazon RDS) Amazon Elastic Map Reduce (Amazon EMR) and AWS Elastic Beanstalk Software as a Service This category includes high level storage database and messaging services such as Amazon Simple Storage Service (Amazon S3) Amazon Glacier Amazon DynamoDB Amazon Simple Queuing Service (Amazon SQS) and Amazon Simple Email Service (Amazon SES) These services abstract the platform or management layer on which you can build and operate cloud applications You access the endpoints of these abstracted services using AWS APIs and AWS manages the underlying service components or the operating system on which they reside You share the underlying infrastructure and abstracted services provide a multi tenant platform which isolates your data in a secure fashion and provides for powerful integration with IAM AWS Regions Availability Zones and Endpoints AWS has datacenters in multiple locations around the world The recommended region for CJIS workloads is t he AWS GovCloud region Regions are designed with availability in mind and consist of at least two often more Availability Zones Availability Zones are designed for fault isolation They are connected to multiple Internet Service Providers (ISPs) and different power grids The y are interconnected using high speed links so applications ArchivedAmazon Web Services – CJIS Compliance on AWS Page 7 can rely on Local Area Network (LAN0) connectivity for communication between Availability Zones within the same region You are responsible for carefully selecting the Availability Zone(s) where your systems will reside Systems can span multiple Availability Zones and we recommend that you design your systems to survive temporary or prolonged failure of an Availability Zone in the case of a disaster AWS provides web access to services through t he AWS Management Console AWS provides programmatic access to services through Application Programming Interfaces (APLs) and command line interfaces (CLIs) Service endpoints which are managed by AWS provide management (“backplane”) access Security & C ompliance OF the Cloud One of the tenets within the CJIS Security Policy is the risk verse realism approach of applying 
risk based approaches that can be used to mitigate risks based on Every “shall” statement contained within the CJIS Security Policy has been scrutinized for risk versus the reality of resource constraints and realworld application The purpose of the CJIS Security Policy is to establish the minimum security requirements; therefore individual agencies are encouraged to implement additiona l controls to address agency specific risks Each agency faces risk unique to that agency It is quite possible that several agencies could encounter the same type of risk however depending on resources would mitigate that risk differently In that light a risk based approach can be used when implementing requirements” — 23 Risk Versus Realism In order to manage risk and security within the cloud a variety of processes and guidelines have been created to differentiate between the security of a cloud service provider and the responsibilities of a customer consuming the cloud services One of the primary concepts that have emerged is the increased understanding and documentation of shared inherited or dual (AWS & Customer) security controls in a cloud env ironment A common question for ArchivedAmazon Web Services – CJIS Compliance on AWS Page 8 AWS is: “how does leveraging AWS make my security and compliance activities easier?” This question can be answered by demonstrating the security controls that are met by approaching the AWS Cloud in two distinct ways: first reviewing compliance of the AWS Infrastructure gives an idea of “Security & Compliance OF the cloud”; and second reviewing the security of workloads running on top of the AWS infrastructure gives an idea of “Security & Compliance IN the cloud” AWS opera tes manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the AWS services operate Customers running workloads on the AWS infrastructure depend on AWS for a nu mber of security controls AWS has several additional whitepapers which provide additional information to assist AWS customers with integrating AWS into their existing security frameworks and to help design and execute security assessments of an organizat ion’s use of AWS For more information see the AWS Compliance Whitepapers Security & Compliance IN the Cloud Security & Compliance IN the Cloud refers to how the customer manages the secur ity of their workloads through the use of various applications and architecture (virtual private clouds security groups operating systems databases authentication etc) • Cross service security controls – are security controls which a customer needs to implement across all services within their AWS customer instance While each customer’s use of AWS services may vary along with their own risk posture and security control interpretation cross service controls will need to be documented within the customer’s use of AWS services Example: Multi factor authentication can be used to help secure Identity and Access Management (IAM) users groups and roles within the customer environment in order to meet CJIS Access Management Authentication and Authorization requirements for the particular agency or CJIS organization • Service Specific security controls – are service specific security implementation such as the Amazon S3 security access permission ArchivedAmazon Web Services – CJIS Compliance on AWS Page 9 settings l ogging event notification and/or encryption A customer may need to document service specific controls within their 
use of Amazon S3 in order to meet a specific security control objective related to criminal justice data and/or investigative records. Example: Server-side encryption (SSE) can be enabled for all objects classified as CJI and/or directory information related to CJIS security.
• Optimized Network, Operating System (OS), and Application Controls – Controls a customer may need to document in order to meet specific control elements related to the use of an operating system and/or application deployed within AWS. Example: Customer server hardening rules or an optimized private Amazon Machine Image (AMI) in order to meet specific security controls within Change Management.

Creating a CJIS Environment on AWS

AWS has several partner solutions that collect, transfer, manage, and share digital evidence (for example, video and audio files) related to law enforcement interactions. AWS is also working with several partners who deliver electronic warrant services and other unique CJIS law enforcement applications and services, directly or indirectly, to CJIS customers.

Similar to other AWS compliance frameworks, the CJIS Security Policy takes advantage of the shared responsibility model between you and AWS. Using a cloud service that aligns to CJIS security requirements doesn't mean that your environment automatically adheres to applicable CJIS requirements. It's up to you (or your AWS partner or systems integrator) to architect a solution that meets the applicable CJIS requirements outlined in the CJIS Security Policy. One advantage of using AWS for CJIS workloads is that you inherit a significant portion of the security control implementation from AWS and from partner solutions that address CJIS Security Policy elements. You and your AWS partners should enable the applicable security features and functions and use leading practices in order to create a CJIS-compliant environment within your use of AWS. The following sections provide a high-level overview of services and tools that you and your partners should consider as part of your AWS CJIS implementation.

Auditing and Accountability (Ref. CJIS Policy Area 4)
• AWS CloudTrail – A service that records AWS API calls for your account and delivers log files to you. AWS CloudTrail logs all user activity within your AWS account, so you can see who performed what actions on each of your AWS resources. The AWS API call history produced by AWS CloudTrail enables security analysis, resource change tracking, and compliance auditing.
• Amazon CloudWatch – A service that monitors AWS cloud resources and the applications that you run on AWS. You can use CloudWatch to monitor your AWS resources in near real time, including Amazon EC2 instances, Amazon EBS volumes, Elastic Load Balancers, and Amazon RDS DB instances.
• AWS Trusted Advisor – An online resource that provides best practices (or checks) in four categories: cost optimization, security, fault tolerance, and performance improvement. For each check, you can review a detailed description of the recommended best practice, a set of alert criteria, guidelines for action, and a list of useful resources on the topic.
• Amazon SNS – You can use this service to send email or SMS-based notifications to administrative and security staff. Within an AWS account, you can create Amazon SNS topics to which applications and AWS CloudFormation deployments can publish. These push notifications can automatically be sent to individuals or groups within the organization who need to be notified of Amazon CloudWatch alarms, resource deployments, or other activity published by applications to Amazon SNS.
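To illustrate the auditing controls above, the following is a minimal boto3 sketch that creates an all-Region CloudTrail trail and starts logging. The trail and bucket names are placeholders, and the S3 bucket is assumed to already exist with a bucket policy that allows CloudTrail log delivery.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a trail that records API activity in all Regions and delivers
# log files to an existing, appropriately restricted S3 bucket.
cloudtrail.create_trail(
    Name="cjis-audit-trail",
    S3BucketName="example-cjis-cloudtrail-logs",
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)

# Trails do not record events until logging is explicitly started.
cloudtrail.start_logging(Name="cjis-audit-trail")
```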
Identification and Authentication (Ref. CJIS Policy Area 6)
• Access Control – IAM is central to securely controlling access to AWS resources. Administrators can create users, groups, and roles with specific access policies to control the actions that users and applications can perform through the AWS Management Console or the AWS API. Federation allows IAM roles to be mapped to permissions from central directory services.
• AWS Identity and Access Management (IAM) configuration – Creating user groups and assigning rights, including creation of groups for internal auditors, an IAM super user, and application administrative groups segregated by functionality (for example, database and Unix administrators).
• AWS Multi-Factor Authentication (MFA) – A simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they are prompted for their user name and password (the first factor, what they know) as well as for an authentication code from their AWS MFA device (the second factor, what they have).
• AWS Account Password Policy Settings – Within the IAM console, under account settings, a password policy can be set that supports the password policy requirements outlined in the CJIS Security Policy.
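The account password policy mentioned above can also be set through the IAM API instead of the console. The following is a minimal sketch; the specific values are illustrative assumptions and should be set to match the CJIS Security Policy and any agency-specific requirements.

```python
import boto3

iam = boto3.client("iam")

# Illustrative values only; align them with the CJIS Security Policy and
# your agency's own standards.
iam.update_account_password_policy(
    MinimumPasswordLength=12,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    MaxPasswordAge=90,           # days before a password must be rotated
    PasswordReusePrevention=10,  # previous passwords that cannot be reused
    HardExpiry=False,
)
```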
Configuration Management (Ref. CJIS Policy Area 7)
• Amazon EC2 – A web service that provides resizable compute capacity in the cloud. It gives you complete control of your computing resources and lets you run Amazon Machine Images (AMIs).
• Amazon Machine Image (AMI) – An AMI provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.
• Amazon Machine Images (AMIs) management – Organizations commonly ensure security and compliance by centrally providing workload owners with pre-built AMIs. These "golden" AMIs can be preconfigured with host-based security software and hardened based on predetermined security guidelines. Workload owners and developers can then use the AMIs as starting images on which to install their own software and configuration, knowing the images are already compliant.
• Choosing an AMI – While AWS does provide images that can be used for deployment of host operating systems, you need to develop and implement system configuration and hardening standards that align with all applicable CJIS requirements for your operating systems.
• AWS EC2 Security Groups – You can control how accessible your virtual instances in EC2 are by configuring built-in firewall rules (security groups), from totally public to completely private or somewhere in between.
• Resource Tagging – Almost all AWS resources allow the addition of user-defined tags. These tags are metadata and irrelevant to the functionality of the resource, but they are critical for cost management and access control. When multiple groups of users or multiple workload owners exist within the same AWS account, it is important to restrict access to resources based on tagging. Regardless of account structure, you can use tag-based IAM policies to place extra security restrictions on critical resources.
• AWS Config – A fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config, you can immediately discover all of your AWS resources and view the configuration of each. You can receive notifications each time a configuration changes, as well as dig into the configuration history to perform incident analysis.
• CloudFormation Templates – Create preapproved AWS CloudFormation templates for common use cases. Using templates allows CJI workload owners to inherit the security implementation of the approved template, thereby limiting their authorization documentation to the features that are unique to their application. Templates can be reused to shorten the time required to approve and deploy new applications.
• AWS Service Catalog – Allows CJIS IT administrators to create, manage, and distribute portfolios of approved products to end users, who can then access the products they need in a personalized portal. Typical products include servers, databases, websites, or applications that are deployed using AWS resources (for example, an Amazon EC2 instance or an Amazon RDS database).

Media Protection & Information Integrity (Ref. CJIS Policy Areas 8 & 10)
• AWS Storage Gateway – A service that connects an on-premises software appliance to cloud-based storage, providing seamless and secure integration between your on-premises IT environment and AWS's storage infrastructure.
• Storage – AWS provides various options for storage of information, including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS), to make data easily accessible to your applications or available for backup purposes. Before you store sensitive data, you should use CJIS requirements for restricting direct inbound and outbound data to select the correct storage option. For example, Amazon S3 can be configured to encrypt your data at rest with server-side encryption (SSE). In this scenario, Amazon S3 automatically encrypts your data on write and decrypts your data on retrieval. When Amazon S3 SSE encrypts data at rest, it uses Advanced Encryption Standard (AES) 256-bit symmetric keys. If you choose server-side encryption with Amazon S3, you can use one of the following methods:
o AWS Key Management Service (KMS) – A service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS uses Hardware Security Modules (HSMs) to protect the security of your keys. For customers who use encryption extensively and require strict control of their keys, AWS KMS provides a convenient management option for creating and administering the keys used to encrypt your data at rest.
o KMS Service Integration – AWS KMS seamlessly integrates with Amazon EBS, Amazon S3, Amazon RDS, Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, and Amazon EMR. This integration means that you can use AWS KMS master encryption keys to encrypt the data you store with these services by simply selecting a check box in the AWS Management Console.
o AWS CloudHSM Service – A service that helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS cloud. AWS CloudHSM supports a variety of use cases and applications, such as database encryption, Digital Rights Management (DRM), and Public Key Infrastructure (PKI), including authentication and authorization, document signing, and transaction processing.

System and Communication Protection and Information Integrity (Ref. CJIS Policy Area 10)
• AWS Virtual Private Cloud (VPC) – You can use VPC to connect existing infrastructure to a set of logically isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend existing management capabilities, such as security services, firewalls, and intrusion detection systems, to include the virtual resources built on AWS.
• AWS Direct Connect (DX) – AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS.
• Perfect Forward Secrecy – For even greater communication privacy, several AWS services, such as Elastic Load Balancing and Amazon CloudFront, offer newer, stronger cipher suites. SSL/TLS clients can use these cipher suites to achieve Perfect Forward Secrecy, a technique that uses ephemeral session keys that are not stored anywhere. This prevents the decoding of captured data, even if the secret long-term key itself is compromised.
• Protect data in transit – You should implement SSL/TLS encryption on your server instances. You will need a certificate from an external certification authority, such as VeriSign or Entrust. The public key included in the certificate authenticates each session and serves as the basis for creating the shared session key used to encrypt the data.

AWS security engineers and solution architects have developed whitepapers and operational checklists to help you select the best options for your needs and to recommend security best practices, for example, guidance on securely storing and rotating secret keys and passwords.
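Tying the media protection and encryption guidance above together, the following is a minimal sketch that writes an object classified as CJI to Amazon S3 with SSE-KMS. The bucket name, object key, and KMS key ARN are placeholders; the customer-managed key and the appropriately restricted bucket are assumed to already exist.

```python
import boto3

s3 = boto3.client("s3")

# Upload an object classified as CJI with server-side encryption using a
# customer-managed KMS key (bucket, key, and key ARN are placeholders).
with open("incident-report.pdf", "rb") as body:
    s3.put_object(
        Bucket="example-cji-evidence-bucket",
        Key="case-1234/incident-report.pdf",
        Body=body,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="arn:aws:kms:us-gov-west-1:111122223333:key/EXAMPLE-KEY-ID",
    )
```

Default bucket encryption, or a bucket policy that denies uploads without the x-amz-server-side-encryption header, can additionally enforce this behavior for every writer.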
Conclusion

There are a few key points to remember in supporting CJIS workloads. Security is a shared responsibility: because AWS doesn't manage the customer environment or data, you are responsible for implementing the applicable CJIS Security Policy requirements in your AWS environment, over and above the AWS implementation of security requirements within the infrastructure. Encryption of data in transit and at rest is critical, and AWS provides several key resources to help you achieve it, from Solutions Architect personnel available to assist you, to the Encrypting Data at Rest whitepaper, to multiple encryption leading practices. AWS strives to provide the resources you need to implement secure solutions.

AWS directly addresses the relevant CJIS Security Policy requirements applicable to the AWS infrastructure. Because AWS provides a self-provisioned platform that customers wholly manage, AWS isn't directly subject to the CJIS Security Policy. However, we are absolutely committed to maintaining world-class cloud security and compliance programs in support of our customer needs. AWS demonstrates compliance with applicable CJIS requirements as supported by our third-party assessed frameworks (such as FedRAMP), incorporating on-site data center audits by our FedRAMP-accredited 3PAO. In the spirit of a shared responsibility philosophy, the AWS CJIS Requirements Matrix and the CJIS Security Policy Workbook (in a system security plan template) have been developed and align to the CJIS Policy Areas. The Workbook is intended to support customers in systematically documenting their implementation of CJIS requirements alongside the AWS approach to each requirement (along with guidance on submitting the document for review and authorization).

AWS provides multiple built-in security features in support of CJIS workloads, such as:
• Secure access using AWS Identity and Access Management (IAM) with multi-factor authentication
• Encrypted data storage with either AWS-provided or customer-maintained options
• Logging and monitoring with Amazon S3 logging, AWS CloudTrail, Amazon CloudWatch, and AWS Trusted Advisor
• Centralized, customer-controlled key management with AWS CloudHSM and AWS Key Management Service (KMS)

Further Reading

For additional help, see the following sources:
• AWS Compliance Center: http://aws.amazon.com/compliance
• AWS Security Center: http://aws.amazon.com/security
• AWS Security Resources: http://aws.amazon.com/security/security-resources
• FedRAMP FAQ: http://aws.amazon.com/compliance/fedramp-faqs/
• Risk and Compliance Whitepaper: https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf
• Cloud Architecture Best Practices Whitepaper: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf
• AWS Products Overview: http://aws.amazon.com/products/
• AWS Sales and Business Development: https://aws.amazon.com/compliance/public-sector-contact/

Document Revisions

Date | Description
March 2017 | Revised for CJIS Security Policy 5.5; combined the CJIS 5.4 Workbook and CJIS whitepaper
July 2015 | First publication
General
Running_Containerized_Microservices_on_AWS
Running Containerized Microservices on AWS First Published November 1 2017 Updated August 5 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Componentization Via Services 2 Orga nized Around Business Capabilities 4 Products Not Projects 7 Smart Endpoints and Dumb Pipes 8 Decentralized Governance 10 Decentralized Data Management 12 Infrastructure Automation 14 Design for Failure 17 Evolutionary Design 20 Conclusion 22 Contributors 23 Document Revisions 23 Abstract This whitepaper is intended for architects and developers who want to run containerized applications at scale in production on Amazon Web Services (AWS ) This document provides guidance for application lifecycle management security and architectural soft ware design patterns for container based applications on AWS We also discuss architectural best practices for adoption of containers on AWS and how traditional software design patterns evolve in the context of containers We leverage Martin Fowler’s prin ciples of microservices and map them to the twelve factor app pattern and real life considerations This whitepaper gives you a starting point for building microservices using best practices and software design patterns Amazon Web Services Running Containerized Microservices on AWS 1 Introduction As modern microservice sbased applications gain popularity containers are an attractive building block for creat ing agile scalable and efficient microservices architectures Whether you are considering a legacy system or a greenfield appli cation for containers there are well known proven software design patterns that you can apply Microservices are an architectural and organizational approach to software development in which software is composed of small independent services that commun icate to each other There are different ways microservices can communicate but the two commonly used protocols are HTTP request/response over w elldefined APIs and lightweight asynchronous messaging1 These services are owned by small selfcontained t eams Microservices architectures make applications easier to scale and faster to develop This enabl es innovation and accelerat es timetomarket for new features Containers also provide isolation and packaging for software and help you achieve more deployment velocity and resource density As proposed by Martin Fowler2 the characteristics of a microservices architecture include the following : • Componentization via services • Organized ar ound business capabilities • Products not projects • Smart endpoints and dum b pipes • Decentralized governance • Decentralized data management • Infrastructure automation • Design for failure • Evolutionary design These characteristics tell us how a microservices archit ecture is supposed to behave To help achieve these characteristics many 
development teams have adopted the twelve factor app pattern methodology The twelve factors are a set of best practices for building modern app lications that are optimized for cloud computing The twelve factors cover four key areas: deployment scale portability and architecture : Amazon Web Services Running Containerized Microservices on A WS 2 1 Codebase One codebase tracked in revision control many deploys 2 Dependencies Explicitly declare and isolate dep endencies 3 Config Store configurations in the environment 4 Backing services Treat backing services as attached resources 5 Build release run Strictly separate build and run stages 6 Processes Execute the app as one or more stateless processes 7 Port bind ing Export services via port binding 8 Concurrency Scale out via the process model 9 Disposability Maximize robustness with fast startup and graceful shutdown 10 Dev/prod parity Keep development staging and production as similar as possible 11 Logs Treat logs as event streams 12 Admin processes Run admin/management tasks as one off processes After reading this whitepaper you will know how to map the microservices design characteristics to twelve factor app patterns down to the design pattern to be implemented Componentization Via Services In a microservices architecture software is composed of small independent services that communicate over well defined APIs These small components are divided so that each of them does one thing and does it well while cooperat ing to deliver a full featu red application An analogy can be drawn to the Walkman portable audio cassette players that were popular in the 1980s : batteries bring power audio tapes are the medium headphones deliver output while the main tape player takes input through key presses Using them together plays music Similarly microservices need to be decoupled and each should focus on one functionality Additionally a microservices architecture allows for replacement or upgrade Using the Walkman analogy if the headphones are worn out you can replace them without replacing the tape player If an order management service in our store keeping application is falling behind and performing too slow ly you can swap it for a more performant more streamlined Amazon Web Services Running Containerized Microservices on AWS 3 component Such a permutatio n would not affect or interrupt other microservices in the system Through modularization microservices offer developers the freedom to design each feature as a black box That is microservices hide the details of their complexity from other components Any communication between services happens by using well defined APIs to prevent implicit and hidden dependencies Decoupling increases agility by removing the need for one development team to wait for another team to finish work that the first team depend s on When containers are used container images can be swapped for other container images These can be either different versions of the same image or different images altogether —as long as the functionality and boundaries are conserved Containerization is an operating system level virtualization method for deploying and running distributed applications without launching an entire virtual machine (VM) for each application Container images allow for modularity in services They are constructed by building functionality onto a base image Developers operation s teams and IT leaders should agree on base images that have the security and tooling profile that they want These images can then be shared 
throughout the organization as the initial building block. Replacing or upgrading these base images is as simple as updating the FROM field in a Dockerfile and rebuilding, usually through a Continuous Integration/Continuous Delivery (CI/CD) pipeline.

Here are the key factors from the twelve-factor app pattern methodology that play a role in componentization:
• Dependencies (explicitly declare and isolate dependencies) – Dependencies are self-contained within the container and not shared with other services.
• Disposability (maximize robustness with fast startup and graceful shutdown) – Disposability is leveraged and satisfied by containers that are easily pulled from a repository and discarded when they stop running.
• Concurrency (scale out via the process model) – Concurrency consists of tasks or pods (made of containers working together) that can be auto scaled in and out in a memory- and CPU-efficient manner.

As each business function is implemented as its own service, the number of containerized services grows. Each service should have its own integration and its own deployment pipeline, which increases agility. Because containerized services are subject to frequent deployments, you need to introduce a coordination layer that tracks which containers are running on which hosts. Eventually you will want a system that provides the state of containers, the resources available in a cluster, and so on.

Container orchestration and scheduling systems enable you to define applications by assembling a set of containers that work together. You can think of the definition as the blueprint for your applications. You can specify various parameters, such as which containers to use and which repositories they belong in, which ports should be opened on the container instance for the application, and what data volumes should be mounted. Container management systems enable you to run and maintain a specified number of instances of a container set (containers that are instantiated together and collaborate using links or volumes). Amazon ECS refers to these as tasks; Kubernetes refers to them as pods. Schedulers maintain the desired count of container sets for the service. Additionally, the service infrastructure can be run behind a load balancer to distribute traffic across the container set associated with the service.
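As one concrete illustration of this blueprint idea on Amazon ECS, the following boto3 sketch registers a task definition and creates a service that the scheduler keeps at a desired count behind a load balancer. All names, the container image URI, network IDs, and the target group ARN are placeholders, and a Fargate launch type is assumed.

```python
import boto3

ecs = boto3.client("ecs")

# Register the task definition: the blueprint for one container set.
task_def = ecs.register_task_definition(
    family="recommendations",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "recommendations",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/recommendations:1.0",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Create a service so the scheduler keeps three copies of the task running
# and registers them with a load balancer target group.
ecs.create_service(
    cluster="microservices-cluster",
    serviceName="recommendations",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=3,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                          "targetgroup/recommendations/0123456789abcdef",
        "containerName": "recommendations",
        "containerPort": 8080,
    }],
)
```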
Organized Around Business Capabilities

Defining exactly what constitutes a microservice is very important for development teams to agree on. What are its boundaries? Is an application a microservice? Is a shared library a microservice?

Before microservices, system architecture would be organized around technological capabilities such as user interface, database, and server-side logic. In a microservices-based approach, as a best practice, each development team owns the lifecycle of its service all the way to the customer. For example, a recommendations team might own development, deployment, production support, and collection of customer feedback. In a microservices-driven organization, small teams act autonomously to build, deploy, and manage code in production. This allows teams to work at their own pace to deliver features. Responsibility and accountability foster a culture of ownership, allowing teams to better align to the goals of their organization and be more productive. Microservices are as much an organizational attitude as a technological approach. This principle is known as Conway's Law:

"Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations." – M. Conway

When architecture and capabilities are organized around atomic business functions, dependencies between components are loosely coupled. As long as there is a communication contract between services and teams, each team can run at its own speed. With this approach, the stack can be polyglot, meaning that developers are free to use the programming languages that are optimal for their component. For example, the user interface can be written in JavaScript or HTML5, the backend in Java, and data processing can be done in Python. This means that business functions can drive development decisions. Organizing around capabilities means that each API team owns the function, data, and performance completely.

The following are key factors from the twelve-factor app pattern methodology that play a role in organizing around capabilities:
• Codebase (one codebase tracked in revision control, many deploys) – Each microservice owns its own codebase in a separate repository and throughout the lifecycle of the code change.
• Build, release, run (strictly separate build and run stages) – Each microservice has its own deployment pipeline and deployment frequency. This enables the development teams to run microservices at varying speeds so they can be responsive to customer needs.
• Processes (execute the app as one or more stateless processes) – Each microservice does one thing and does that one thing really well. The microservice is designed to solve the problem at hand in the best possible manner.
• Admin processes (run admin/management tasks as one-off processes) – Each microservice has its own administrative or management tasks so that it functions as designed.

To achieve a microservices architecture that is organized around business capabilities, use popular microservices design patterns. A design pattern is a general, reusable solution to a commonly occurring problem within a given context.

Popular microservice design patterns include:
• Aggregator Pattern – A basic service that invokes other services to gather the required information or achieve the required functionality. This is beneficial when you need an output that combines data from multiple microservices.
• API Gateway Design Pattern – An API gateway acts as the entry point for all the microservices and creates fine-grained APIs for different types of clients. It can fan out the same request to multiple microservices and, similarly, aggregate the results from multiple microservices.
• Chained or Chain of Responsibility Pattern – This design pattern produces a single output that is a combination of multiple chained outputs.
• Asynchronous Messaging Design Pattern – In this type of microservices design pattern, all the services can communicate with each other, but they do not have to communicate sequentially; they usually communicate asynchronously.
• Database or Shared Data Pattern – This design pattern enables you to use a database per service or a shared database across services to solve various problems. These problems can include duplication of data and inconsistency, different services having different kinds of storage requirements, business transactions that need to query data owned by multiple services, and denormalization of data.
• Event Sourcing Design Pattern – This design pattern helps you create events according to changes in your application state.
• Command Query Responsibility Segregator (CQRS) Design Pattern – This design pattern enables you to divide command and query responsibilities: the command part handles all requests related to CREATE, UPDATE, and DELETE, while the query part takes care of the materialized views.
• Circuit Breaker Pattern – This design pattern enables you to stop processing a request and response when a service is not working, for example, to redirect the request to a different service after a certain number of failed attempts. (A minimal sketch of this pattern follows this list.)
• Decomposition Design Pattern – This design pattern enables you to decompose an application based on business capabilities or on subdomains.
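The circuit breaker item above lends itself to a short illustration. The following is a minimal, framework-free Python sketch of the idea; the thresholds and the fail-fast behavior are arbitrary assumptions, and in practice a library or a service mesh often provides this capability.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: stop calling a service that keeps failing."""

    def __init__(self, max_failures=5, reset_timeout=30.0):
        self.max_failures = max_failures    # consecutive failures before opening
        self.reset_timeout = reset_timeout  # seconds to wait before a retry
        self.failures = 0
        self.opened_at = None               # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        half_open = False
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast (or redirect to a fallback) while the circuit is open.
                raise RuntimeError("circuit open: request rejected")
            half_open = True  # timeout elapsed: allow one trial request
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if half_open or self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # (re)open the circuit
            raise
        self.failures = 0
        self.opened_at = None
        return result
```

Wrapping calls to a downstream service with breaker.call(...) makes callers fail fast while the dependency is unhealthy instead of piling up blocked requests.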
twelve factor app pattern methodology that play a role in adopt ing a product mindset for delivering software: • Build release run – Engineers adopt a devops culture that allows them to optimize all three stages • Config – Engineers build better configuration management for software due to their involvement with how that software is used by the customer Amazon Web Services Running Containerized Microservices on AWS 8 • Dev/prod parity – Software treated as a product can be it eratively developed in smaller pieces that take less time to complete and deploy than long running projects which enables development and production to be closer in parity Adopting a product mindset is driven by culture and process —two factors that drive change The goal of your organization’s engineering team should be to break down any walls between the engineers who build the code and the engineers who run the code in production The following concepts are crucial: • Automat ed provisioning – Operations should be automated rather than manual This increases velocity as well as integrates engineering and operations • Selfservice – Engineers should be able to configure and provision their own dependencies This is enabled by containerized envi ronments that allow engineers to build their own container that has anything they require • Continuous Integration – Engineers should check in code frequently so that incremental improvements are available for review and testing as quickly as possible • Cont inuous Build and Delivery – The process of building code that’s been checked in and delivering it to production should be automated so that engineers can release code without manual intervention Containerized microservices help engineering organizations i mplement these best practice patterns by creating a standardized format for software delivery that allows automation to be built easily and used across a variety of different environments including local quality assurance and production Smart Endpoints and Dumb Pipes As your engineering organization transition s from building monolithic architecture s to building microservices architecture s it will need to understand how to enable communications between microservices In a monolith the various component s are all in the same process In a microservices environment components are separated by hard boundaries At scale a microservices environment will often have the various components distributed across a cluster of servers so that they are not even neces sarily collocated on the same server This means there are two primary forms of communication between services: Amazon Web Services Running Containerized Microservices on AWS 9 • Request/Response – One service explicitly invokes another service by making a request to either store data in it or retrieve data from it For e xample when a new user creates an account the user service makes a request to the billing service to pass off the billing address from the user’s profile so that that billing service can store it • Publish/Subscribe – Event based architecture where one se rvice implicitly invokes another service that was watching for an event For example when a new user creates an account the user service publishes this new user signup event and the email service that was watching for it is triggered to email the user asking them to verify their email One architectural pitfall that generally leads to issues later on is attempting to solve communication requirements by building your own complex enterprise service bus for routing 
messages between microservices AWS recomme nds using a message broker such as Amazon MSK Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Queue Service (Amazon SQS ) Microservices architectures favor these tools because they enable a decentralized approach in which the endpoints that produce and consume messages are smart but the pipe between the endpoints is dumb In other words concentrate the logic in the containers and refrain from leveraging (and coupling to) sophisticated buses and messaging services Network communication often plays a central role in distributed systems Service meshes strive to address this issue Here you can leverage the idea of externalizing selected functionalities Service meshes work on a sidecar pattern where you add containers to extend the behavior of existing containers Sidecar is a microservices design pattern where a companion service runs next to your pr imary microservice augmenting its abilities or intercepting resources it is utilizing AWS App Mesh a sidecar container Envoy is used as a proxy for all ingress and egress traffic to the primary microservice Using this sidecar pattern with Envoy you can create the backbone of the service mesh without impacting our applications a service mesh is comprised of a control plane and a data plane In current implemen tations of service meshes the data plane is made up of proxies sitting next to your applications or services intercepting any network traffic that is under the management of the proxies Envoy can be used as a communication bus for all traffic internal to a service oriented architecture (SOA) Sidecars can also be used to build monitoring solutions When you are running microservices using Kubernetes there are multiple observability strategies one of them is using sidecars Due to the modular nature of the sidecars you can use it for your logging and monitoring needs For e xample you can setup FluentBit or Firelens for Amazon Web Services Running Containerized Microservices on AWS 10 Amazon ECS to send logs from containers to Amazon CloudWatch Logs AWS Distro for Open Telemetry can also be used for gathering metrics and sending metrics off to other services Recently AWS has launched managed Prometheus and Grafana for the monitoring/ visualization use cases The core benefit of building smart endpoints and dumb pipes is the ability to decentralize the architecture particularly when it comes to how endpoints are maintained updated and e xtended One goal of microservices is to enable parallel work on different edges of the architecture that will not conflict with each other Building dumb pipes enables each microservice to encapsulate its own logic for formatting its outgoing responses or suppl ementing its incoming requests The following are the key factors from the twelve factor app pattern methodology that play a role in building smart endpoints and dumb pipes: • Port Binding – Services bind to a port to watch for incoming requests and send requests to the port of another service The pipe in between is just a dumb network protocol such as HTTP • Backing services – Dumb pipes allow a background microservice to be attached to another microservice in the same way that you attac h a database • Concurrency – A properly designed communication pipeline between microservices allows multiple microservices to work concurrently For example several observer microservices may respond and begin work in parallel in response to a single even t produced by another microservice Decentralized Governance As your 
organization grows and establishes more code driven business processes one challenge it could face is the necessity to scale the engineering team and enable it to work efficiently in par allel on a large and diverse codebase Additionally your engineering organization will want to solve problems using the best available tools Decentralized governance is an approach that works well alongside microservices to enable engineering organizati ons to tackle this challenge Traffic lights are a great example of decentralized governance City traffic lights may be timed individually or in small groups or they may react to sensors in the pavement However for the city as a whole there is no need for a primary traffic control center in order to keep cars moving Separately implemented local optimizations work together to provide a city wide Amazon Web Services Running Containerized Microservices on AWS 11 solution Decentralized governance helps remove potential bottlenecks that would prevent engineers from bein g able to develop the best code to solve business problems When a team kicks off its first greenfield project it is generally just a small team of a few people working together on a common codebase After the greenfield project has been completed the bus iness will quickly discover opportunities to expand on their first version Customer feedback generates ideas for new features to add and ways to expand the functionality of existing features During this phase engineers will start grow ing the codebase an d your organization will start divid ing the engineering organization into service focused teams Decentralized governance means that each team can use its expertise to choose the best tools to solve their specific problem Forcing all teams to use the same database or the same runtime language isn’t reasonable because the problems they ’re solving aren’t uniform However d ecentralized governance is not without boundaries It is helpful to use standards throughout the organization such as a standard build and code review process because this helps each team continue to function together Source control plays an important role in the decentralized governance Git can be used as a source of truth to operate the deployment and governance strategies For example version control history peer review and rollback can happen through Git withou t needing to use additional tools With GitOps automated delivery pipelines roll out changes to your infrastructure when changes are made by pull request to Git GitOps also makes use of tools that compares the production state of your application with what’s under source control and alerts you if your running cluster doesn’t match your desired state The following are the principles for GitOps to work in practice : 1 Your entire system described declaratively 2 A desired system state version controlled in Git 3 The ability for changes to be automatically applied 4 Software agents that verify correct system state and alert on divergence Most CI/CD tools available today use a push based model A push based pipeline means that code starts with the CI system and then continues its path through a series of encoded scripts in your CD system to push changes to the destination The reason you don’t want to use y our CI/CD system as the basis for your deployments is because of the potential to expose credentials outside of your cluster While it is possible to secure your CI /CD scripts you are still working outside the trust domain of your cluster Amazon Web Services Running 
Containerized Microservices on AWS 12 which is not rec ommended With a pipeline that pulls an image from the repository your cluster credentials are not exposed outside of your production environment The following are the key factors from the twelve factor app pattern methodology that play a role in enablin g decentralized governance: • Dependencies – Decentralized governance allows teams to choose their own dependencies so dependency isolation is critical to make this work properly • Build release run – Decentralized governance should allow teams with differ ent build processes to use their own toolchains yet should allow releasing and running the code to be seamless even with differing underlying build tools • Backing services – If each consumed resource is treated as if it was a third party service then de centralized governance allows the microservice resources to be refactored or developed in different ways as long as they obey an external contract for communication with other services Centralized governance was favored in the past because it was hard to efficiently deploy a polyglot application Polyglot applications need different build mechanisms for each language and an underlying infrastructure that can run multiple languages and frameworks Polyglot architectures had varying dependencies which coul d sometimes have conflicts Containers solve th ese problem s by allowing the deliverable for each individual team to be a common format: a Docker image that contains their component The contents of the container can be any type of runtime written in any l anguage However the build process will be uniform because all containers are built using the common Dockerfile format In addition all containers can be deployed the same way and launched on any instance since they carry their own runtime and dependenci es with them An engineering organization that chooses to employ decentralized governance and to use containers to ship and deploy this polyglot architecture will see that their engineering team is able to build performant code and iterate more quickly Decentralized Data Management Monolithic architectures often use a shared database which can be a single data store for the whole application or many applications This leads to complexities in changing schemas upgrades downtime and dealing with backward compatibility risks A Amazon Web Services Running Containerized Microservices on AWS 13 service based approach mandates that each service get its own data storage and doesn’t share that d ata directly with anybody else All data bound communication should be enabled via services that encompass the data As a result each service team chooses the most optimal data store type and schema for their application T he choice of the database type is the responsibility of the service teams It is an example of decentralized decision making with no central group enforcing standards apart from minimal guidance on connectivity AWS offers many fully managed storage servic es such as object store key value store file store block store or traditional database Options include Amazon S3 Amazon DynamoDB Amazon Relational Database Service (Amazon RDS ) and Amazon Elastic Block Store (Amazon EBS) Decentralized data manag ement enhances application design by allowing the best data store for the job to be used This also removes the arduous task of a shared database upgrade which could be weekends worth of downtime and work if all goes well Since each service team owns it s own data its decision making become s more 
independent The teams can be self composed and follow their own development paradigm A secondary benefit of decentralized data management is the disposability and fault tolerance of the stack If a particular data store is unavailable the complete application stack does not become unresponsive Instead the application goes into a degraded state losing some capabilities while still servicing requests This enables the application to be fault tolerant by desi gn The following are the key factors from the twelve factor app pattern methodology that play a role in organizing around capabilities: • Disposability (maximize robustness with fast startup and graceful shutdown ) – The services should be robust and not dep endent on externalities This principle further allows for the services to run in a limited ca pacity if one or more components fail • Backing services (treat backing services as attached resources ) – A backing service is any service that the app consumes over the network such as data stores messaging systems etc Typically backing services are managed by operations The app should make no distinction between a local and an external service • Admin pro cesses (run admin/management tasks as one off processes ) – The process es required to do the app’s regular business for example running Amazon Web Services Running Containerized Microservices on AWS 14 database migrations Admin processes should be run in a similar manner irrespective of environments To achieve a micr oservices architecture with decoupled data management the following software design patterns can be used: • Controller – Helps direct the request to the appropriate data store using the appropriate mechanism • Proxy – Helps provide a surrogate or placeholder for another object to control access to it • Visitor – Helps represent an operation to be performed on the elements of an object structure • Interpreter – Helps map a service to data store semantics • Observer – Helps define a one tomany dependency between objects so that when one object changes state all of its dependents are notified and updated automatically • Decorator – Helps attach additional responsibilities to an object dynamically Decorators provide a fl exible alternative to sub classing for extending functionality • Memento – Helps capture and externalize an object's internal state so that the object can be returned to this state later Infrastructure Automation Contemporary architectures whether monolit hic or based on microservices can greatly benefit from infrastructure level automation With the introduction of virtual machines IT teams were able to easily replicate environments and create templates of operating system states that they wanted The ho st operating system became immutable and disposable With cloud technology the idea bloomed and scale was added to the mix There is no need to predict the future when you can simply provision on demand for what you need and pay for what you use If an en vironment isn’t needed anymore you can shut down the resources On demand provisioning can be combined with spot compute7 which enables you to request unused compute capacity at steep discounts One useful mental image for infrastructure ascode is to p icture an architect’s drawing come to life Just as a blueprint with walls windows and doors can be transformed into Amazon Web Services Running Containerized Microservices on AWS 15 an actual building so load balancers databases or network equipment can be written in source code and then instantiated Microservices not 
only need disposable infrastructure ascode they also need to be built tested and deployed automatically Continuous integration and continuous delivery are important for monoliths but they are indispensable for microservices Each service needs i ts own pipeline one that can accommodate the various and diverse technology choices made by the team An automated infrastructure provides repeatability for quickly setting up environments These environments can each be dedicated to a single purpose: dev elopment integration user acceptance testing ( UAT) or performance testing and production Infrastructure that is described as code and then instantiated can eas ily be rolled back This drastically reduces the risk of change and in turn promotes innova tion and experiments The following are the key factors from the twelve factor app pattern methodology that play a role in evolutionary design : • Codebase (one codebase tracked in revision control many deploys ) – Because the infrastructure can be described as code treat all code similarly and keep it in the service repository • Config (store config urations in the environmen t) – The environment should hold and share its own specificities • Build release run (strictly sepa rate build and run stages ) – One environment for each purpose • Disposability (maximize robustness with fast startup and graceful shutdown ) – This factor transcends the process layer and bleeds into such downstream layers as containers virtual machines and virtual private cloud • Dev/prod parity – Keep development staging and production as similar as possible Successful applications use some form of infrastructure ascode Resources such as databases container clusters and load balancers can be instant iated from description To wrap the application with a CI /CD pipeline you should choose a code repository an integration pipeline an artifact building solution and a mechanism for deploying these artifacts A microservice should do one thing and do it well This implies that when you build a full application there will potentially be a large number of services Each of these Amazon Web Services Running Containerized Microservices on AWS 16 need their own integration and deployment pipeline Keeping infrastructure automation in mind architects who face this challenge of proliferating services will be able to find common solutions and replicate pipelines that have made a particular service successful An image repository should be used in the CI/CD pipeline to push the containerized image of the microservice We have v arious popular image repositories such as Amazon ECR Redhat Quay Docker Hub JFrog Container registries can be used as part of the infrastructure automation As previously described in the Decentralized Gover nance section GitOps is a popular operational framework for achieving Continuous Delivery Git is used as single source of truth for deploying into your cluster Tools such as Flux runs in your cluster and implements changes based on monitoring Git and image repositories Flux keeps an eye on image repositories detects new images and updates the running configurations based on a configurable policy Continuous Delivery (CD) tools such as ArgoCD Spinnaker can also be leveraged for immediate autonomous deployment to production environments Ultimately the goal is to enable developers to push code updates to container image repositories and have the updated container images of the application sent to multiple environments in minutes There are many ways to successfully deploy in 
phases including the blue/green and canary methods With the blue/green deployment two environments live side by side with one of them running a newer version of the application Traffic is sent to the older version until a swi tch happens that route s all traffic to the new environment You can see an example of this happening in this reference architecture Blue/green deployment Amazon Web Services Running Containerized Microservices on AWS 17 In this case we use a switch of target groups behind a load balancer in order to redirect traffic from the old to the new resources Another way to achieve this is to use services fronted by two load balancers and operate the switch at the DNS level Design for Failure “Everything fails all the time” – Werner Vogels This adage is not any less true in the container world than it is for the cloud Achieving high availability is a top priority for workloads but remains an arduous undertaking for development teams Modern applications running in containers should not be tasked with managing the underlying layers from physical infrastructure like electricity sources or environmental controls all the way to the stability of the underlying operating system If a set of contai ners fails while tasked with deliver ing a service these containers should be re instantiated automatically and with no delay Similarly as microservices interact with each other over the network more than they do locally and synchronously connections ne ed to be monitored and managed Latency and timeouts should be assumed and gracefully handled More generally microservices need to apply the same error retries and exponential backoff with jitter as advised with applications running in a networked environment8 Designing for failure also means testing the design and watching services cope with deteriorating conditions Not all technology departments need to apply th is principle to the extent that Netflix does9 10 but we encourage you to test these mechanisms often Designing for failure yields a self healing infrastructure that acts with the maturity that is expected of recent workloads Preventing emergency calls guarantees a base level of satisfaction for the service owning team This also removes a level of stress that can otherwise grow into accelerated attrition Designing for failure will deliver greater uptime for your products It can shield a company from outages that could erode customer trust Here are the key factors from the twelve factor app pattern methodology that play a role in designing for failure: • Disposabilit y (maximize robustness with fast startup and graceful shutdown ) – Produce lean container images and striv e for processes that can start and stop in a matter of seconds Amazon Web Services Running Containerized Microservices on AWS 18 • Logs (treat logs as event streams ) – If part of a system fail s troubleshooting is nece ssary Ensure that material for forensics exists • Dev/prod parity – Keep development staging and production as similar as possible AWS recomme nds that container hosts be part of a self healing group Ideally container management systems are aware of di fferent data centers and the microservices that span across them mitigating possibl e events at the physical level Containers offer an abstraction from operating system management You can treat container instances as immutable servers Containers will behave identically on a developer’s laptop or on a fleet of virtual machines in the cloud One very useful container pattern for hardening an application’s 
resiliency is the circuit break er With circuit breakers such as Resilience4j Hystrix an application container is proxied by a container in charge of monitoring connection attempts from the application container If connections are successful the circuit breaker container remains in closed status letting communication happen When connections start failing the circuit breaker logic triggers If a pre defined threshold for failure/success ratio is breached the container enters an open status that prevents more connections This mech anism offers a predictable and clean breaking point a departure from partially failing situations that can render recovery difficult The application container can move on and switch to a backup service or enter a degraded state One other useful containe r pattern for application’s resilience is the using Service Mesh which forms a network of microservices communicating with each other Tools such as AWS App Mesh Istio have been available recently to manage and monitor such service meshes Services meshe s have sidecars which refers to a separate process that is installed along with the service in a container set Important feature of the sidecar is that all communication to and from the service is routed through the sidecar process This redirection of co mmunication is completely transparent to the service This service meshes offer several resilience patterns which can be activated by rules in the sidecar and these are Timeout Retry and Circuit Breaker Modern container management services allow develo pers to retrieve near real time event driven updates on the state of containers Docker supports multiple logging drivers (list as of Docker v 2010 ): 11 12 Amazon Web Services Running Containerized Microservices on AWS 19 Driver Description none No logs will be available for the container and Docker logs will not return any output jsonfile The logs are formatted as JSON The default logging driver for Docker syslog Writes logging messages to the syslog facility The syslog daemon must be running on the host machine journald Writes log messag es to journal d The journald daemon must be running on the host machine gelf Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash fluentd Writes log messages to fluentd (forward input) The fluentd daemon must be running on the host machine awslogs Writes log messages to Amazon CloudWatch Logs splunk Writes log messages to splunk using the HTTP Event Collector etwlogs Writes log messages as Event Tracing for Windows (ETW) events Only available on Windows platforms gcplogs Writes log messages to Google Cloud Platform (GCP) Logging local Logs are stored in a custom format designed for minimal overhead logentries Writes log messages to Rapid7 Logentries Sending these log s to the appropriate destination becomes as easy as specifying it in a key/value manner You can then define appropriate metrics and alarms in your monitoring solution Another way to collect telemetry and troubleshooting material from containers is to link a logging container to the application container in a pattern generically referred to as sidecar More specifically in the case of a container working to standardize and normalize the output the pattern is known as an adapter Contain er monitoring is another approach for tracking the operation of a containerized application These system s collect metrics to ensure application running on containers are performing properly Container monitoring solutions use metric capture analytics 
Amazon Web Services Running Containerized Microservices on AWS 20 transaction tracing and visualization Container monitoring covers basic metrics like memory utilization CPU usage CPU limit and memory limit Container monitoring also offers the real time streaming logs tracing and observability that containers need Containers can also be leveraged to ensure that various environments are as similar as possible Infrastructure ascode can be used to turn infrastructure into templates and easily replicate one footprint Evolutionary Design In modern systems architecture design you need to assume that you don’t have all the requirements up front As a result having a detailed design phase at the beginning of a project becomes impractical The services have to evolve through various iteratio ns of the software As services are consumed there are learnings from real world usage that help ev olve their functionality An example of this could be a silent inplace software update on a device While the feature is rolled out an alpha /beta testing strategy can be used to understand the behavior in real time The feature can be then rolled out more broadly or rolled back and worked on using the feedback gained Using deployment techniques such as a canary release a new feature can be tested in an accelerated fashion against it s target audience This provid es early fe edback to the development team As a result of the evolutionary design principle a service team can build the minimum viable set of features needed to stand up the stack and roll it ou t to users The development team doesn’t need to cover edge cases to roll out features Instead the team can focus on the needed pieces and evolve the design as customer feedback comes in At a later stage the team can decide to refactor after they feel confident that they have enough feedback Conducting periodical product workshops also helps in evolution of product design The following are the key factors from the twelve factor app pattern methodology that play a role in evolutionary design: • Codebase (one codebase tracked in revision control many deploys ) – Helps evolve features faster since new feedback can be quickly incorporated • Dependencies (explicitly declare and isolate dependencies ) – Enables quick iterations of the design since features are t ightly coupled with externalities Amazon Web Services Running Containerized Microservices on AWS 21 • Configuration (store configurations in the environment ) – Everything that is likely to vary between deploys (staging production developer environments etc) Config varies substantially across deploys code does not With configurations stored outside code the design can evolve irrespective of the environment • Build release run (strictly separate build and run stages ) – Help roll out new features using various deployment techniques Each release has a specific ID and can be used to gain design efficiency and user feedback The following software design patterns can be used to achieve an evolutionary design : • Sidecar extend s and enhance s the main service • Ambassador creates helper services that send network requests on behalf of a consumer service or application • Chain provides a defined order of starting and stopping containers • Proxy provide s a surrogate or placeholder for another object to control access to it • Strategy defines a family of algorithms encapsulate s each one and make s them interchangeable Strategy lets the algorithm vary independently from the clients that use it • Iterator provides a way to 
access the elements of an aggregate object sequentially wi thout exposing its underlying representation • Service Mesh is a dedicated infrastructure layer for facilitating service toservice communications between microservices using a proxy Containers provide additional tools to evolve design at a faster rate wi th image layers As the design evolves each image layer can be added keeping the integrity of the layers unaffected Using Docker an image layer is a change to an image or an intermediate image Every command (FROM RUN COPY etc) in the Dockerfile causes the previous image to change thus creating a new layer Docker will build only the layer that was changed and the ones after that This is called layer caching Using layer caching deployment times can be reduced Deployment strategies such as a Canary release provide added agility to evolve design based on user feedback Canary release is a technique that’s used to reduce the risk inherent in a new software version release In a canary release the new software is Amazon Web Services Running Containerized Microservices on AWS 22 slowly rolled out to a small subset of users before it’s rolled out to the entire infrastructure and made available to everybody In the diagram that follows a canary release can easily be implemented with containers using AWS primitives As a container announces its health via a health check API the canary directs more traffic to it The state of the canary and the execution is maintained using Amazon DynamoDB Amazon Route 53 Amazon CloudWatch Amazon Elastic Container Service (Amazon ECS) and AWS Step Functions Canary deployment with containers Finally usage monitoring mechanisms ensure that development teams can evolve the design as the usage patterns change with variables Conclusion Microservices can be designed using the twelve factor app pattern methodology an d software design patterns enable you to achieve this easily These software design patterns are well known If applied in the right context they can enable the design benefits of microservices AWS provides a wide range of primitives that can be used to enab le containerized microservices Amazon Web Services Running Containerized Microservices on AWS 23 Contributors The following individuals contributed to this document: • Asif Khan Technical Business Development Manager Amazon Web Services • Pierre Steckmeyer Solutions Architect Amazon Web Service • Nathan Peck Developer Advocate Amazon Web Services • Elamaran Shanmugam Cloud Architect Amazon Web Services • Suraj Muraleedharan Senior DevOps Consultant Amazon Web Services • Luis Arcega Technical Account M anager Amazon Web Services Document Revisions Date Descript ion August 5 2021 Whitepaper updated with latest technical content November 1 2017 First publication Notes 1 https://docsmicrosoftcom/en us/dotnet/architecture/microservices/architect microserv icecontainer applications/communication inmicroservice architecture 2 https://martinfowlercom/articles/microserviceshtml 3 https://enwikipediaorg/wiki/Conway's_law 4 https://microservicesio/patterns/microserviceshtml 5 https://d zonecom/articles/design patterns formicroservices 6 https://docsawsamazoncom/prescriptive guidance/latest/modernization integrating microservices/welcomehtml Amazon Web Services Running Containerized Microservices on AWS 24 7 https://awsamazoncom/blogs/containers/running airflow workflow jobsonamazon eksspotnodes/ 8 https://docsawsamazoncom/general/latest/gr/api retrieshtml 9 https://githubcom/netflix/chaosmonkey 10 
https://github.com/Netflix/SimianArmy
11 https://docs.docker.com/engine/admin/logging/overview/
12 https://www.eksworkshop.com/intermediate/230_logging/
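The circuit breaker behavior described in the Design for Failure section can be made concrete with a short sketch. The following Python example is a minimal illustration under stated assumptions, not a production implementation: it counts consecutive failures rather than tracking a failure/success ratio, the threshold and cool-down values are arbitrary, and the `http://billing.internal/health` endpoint is a placeholder rather than a real service.

```python
import time
import urllib.request


class CircuitBreaker:
    """Minimal circuit breaker: closed -> open after repeated failures,
    then effectively half-open after a cool-down period to probe for recovery."""

    def __init__(self, failure_threshold=5, reset_timeout_seconds=30):
        self.failure_threshold = failure_threshold        # assumed value
        self.reset_timeout_seconds = reset_timeout_seconds  # assumed value
        self.failure_count = 0
        self.opened_at = None                             # None means closed

    def _is_open(self):
        if self.opened_at is None:
            return False
        # After the cool-down expires, allow one probe call (half-open behavior).
        return time.time() - self.opened_at < self.reset_timeout_seconds

    def call(self, url, timeout=2):
        if self._is_open():
            raise RuntimeError("circuit open: failing fast, request not sent")
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                body = response.read()
            # A success closes the circuit and resets the failure count.
            self.failure_count = 0
            self.opened_at = None
            return body
        except OSError as exc:
            # URLError, HTTPError and socket timeouts are all OSError subclasses.
            self.failure_count += 1
            if self.failure_count >= self.failure_threshold:
                self.opened_at = time.time()
            raise exc


if __name__ == "__main__":
    # Placeholder endpoint; in practice this would be another microservice.
    breaker = CircuitBreaker(failure_threshold=3, reset_timeout_seconds=10)
    for attempt in range(5):
        try:
            breaker.call("http://billing.internal/health")
            print("call succeeded")
        except Exception as exc:
            print(f"attempt {attempt}: {exc}")
        time.sleep(1)
```

In a containerized deployment this logic would more commonly live in a sidecar proxy such as Envoy or in a resilience library, as discussed earlier; the sketch only shows the state transitions that those tools automate.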
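The blue/green switch of target groups behind a load balancer mentioned above can also be sketched briefly. The snippet below is an assumption-laden example rather than the reference implementation: it uses boto3 against an Application Load Balancer listener that supports weighted target groups, the listener and target group ARNs are placeholders, and the weights shown are only an example of a canary-style shift.

```python
import boto3

# Placeholder ARNs; substitute the listener and target groups from your own stack.
LISTENER_ARN = "arn:aws:elasticloadbalancing:region:account:listener/app/example/123/456"
BLUE_TG_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/blue/abc"
GREEN_TG_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/green/def"


def shift_traffic(green_weight_percent):
    """Send the given percentage of traffic to the green (new) target group
    and the remainder to blue, using weighted forwarding on the ALB listener."""
    elbv2 = boto3.client("elbv2")
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[
            {
                "Type": "forward",
                "ForwardConfig": {
                    "TargetGroups": [
                        {"TargetGroupArn": BLUE_TG_ARN,
                         "Weight": 100 - green_weight_percent},
                        {"TargetGroupArn": GREEN_TG_ARN,
                         "Weight": green_weight_percent},
                    ]
                },
            }
        ],
    )


if __name__ == "__main__":
    # Canary-style rollout: 10 percent first, then full cutover once health checks pass.
    shift_traffic(10)
    # ... observe metrics and health checks here before proceeding ...
    shift_traffic(100)
```

A DNS-level switch with Amazon Route 53 weighted records is an equivalent approach when the two environments sit behind separate load balancers.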
Lambda Architecture for Batch and Real-Time Processing on AWS with Spark Streaming and Spark SQL
Lambda Architecture for Batch and Stream Processing October 2018 This paper has been archived For the latest technical content about Lambda architecture see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers Archived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its c ustomers Archived Contents Introduction 1 Overview 2 Data Ingestion 3 Data Transformation 4 Data Analysis 5 Visualization 6 Security 6 Getting Started 7 Conclusion 7 Contributors 7 Further Reading 8 Document Revisions 8 Archived Abstract Lambda architecture is a data processing design pattern to handle massive quantities of data and integrate batch and real time processing within a single framework (Lambda architecture is distinct from and should not be confused with the AWS Lambda comput e service ) This paper covers the building blocks of a unified architectural pattern that unifies stream (real time) and batch proces sing After reading this paper you should have a good idea of how to set up and deploy the components of a typical Lambda architecture on AWS This white paper is intended for Amazon Web Services (AWS) Partner Network (APN) members IT infrastructure decision makers and administrators ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 1 Introduction When processing large amounts of semi structured data there is usually a delay between the point when data is collected and its availability in reports and dashboards Often the delay results from the need to validate or at least identify granular data I n some cases however being able to react immediately to new data is more important than being 100 percent certain of the data’s validity The AWS services frequently used to analyze large volumes of data are Amazon EMR and Amazon Athena For ingesting and processing s tream or real time data AWS services like Amazon Kinesis Data Streams Amazon Kinesis Data Firehose Amazon Kinesis Data Analytics Spark Streaming and Spark SQL on top of an Amazon EMR cluster are widely used Amazon Simple Storage Servic e (Amazon S3) forms the backbone of such architectures providing the persistent object storage layer for the AWS compute service Lambda a rchitecture is an approach that mixes both batch and stream (real time) data processing and makes the combined data available for downstream analysis or viewing via a serving layer It is divided into three layers: the batch layer serving layer and speed layer Figure 1 shows the b atch layer (batch processing) serving layer (merged serving layer) and speed layer (stream processing) In Figure 1 data is sent both to the batch layer and to the speed layer (stream processing) In the batch layer new data is appended to the master data set It 
consists of a set of records containing information that cannot be derived from the existing data It is an immutable append only dataset This process is analogous to extract transform and load (ETL) processing The results of the batch layer are called batch views and are stored in a persis tent storage layer The serving layer indexes the batch views produced by the batch layer It is a scalable Figure 1: Lambda Architecture ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 2 data store that swaps in new batch views as they become available Due to the latency of the batch layer the results from the serving layer are outofdate The speed layer compensates for the high latency of updates to the serving layer from the batch layer The speed layer processes data that has not been processed in the last batch of the batch layer This layer produces the real time views that are always up todate The speed layer is responsible for creating realtime views that are continuously discarded as data makes its way through the batch and serving layers Queries are resolved by merging the batch and real time views Recomputing data from scratch helps if the batch or real time views become corrupt ed This is because the main data set is append only and it is easy to restart and recover from the unstable state The end user can always query the latest version of the data which i s available from the speed layer Overview This section provides an overview of the various AWS services that form the building blocks for the batch serving and speed layers of lambda architecture Each of the layers in the Lambda architecture can be built using various analytics streaming and storage services available on the AWS platform Figure 2: Lambda Architecture Building Blocks on AWS The batch layer consists of the landing Amazon S3 bucket for storing all of the data ( eg clickstream server device logs and so on ) that is dispatched from one or more data sources The raw data in the landing bucket can be extracted and transformed into a batch view for analytics using AWS Glue a fully managed ETL service on the AWS platform Data analysis is performed u sing services like Amazon Athena an interactive query service or managed Hadoop framework using Amazon EMR Using Amazon QuickSight customer s can also perform visualization and onetime analysis ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 3 The speed layer can be built by using the following three options available with Amazon Kinesis : • Kinesis Data Stream s and Kinesis Client Library (KCL) – Data from the data source can be continuously captured and stream ed in near real time using Kinesis Data Stream s With the Kinesis Client Library ( KCL) you can build your own application that can preprocess the streaming data as they arrive and emit the data for generating incremental view s and downstream analysis • Kinesis Data Firehose – As data is ingested in real time customer s can use Kinesis Data Firehose to easily batch and compress the data to generate incremental views Kinesis Data Firehose also allows customer to execute their custom transformation logic using AWS Lambda before delivering the incremental view to Amazon S3 • Kinesis Data Analytics – This service provides the easiest way to process the data that is streaming through Kinesis Data Stream or Kinesis Data Firehose using SQL This enable s customer s to gain actionable insight in near real time from the incremental stream before storing 
it in Amazon S3 Finally the servin g layer can be implemented with Spark SQL on Amazon EMR to process the data in Amazon S3 bucket from the batch layer and Spark Streaming on an Amazon EMR cluster which consumes data directly from Amazon Kinesis streams to create a view of the entire dataset which can be aggregated merged or joined The merged data set can be written to Amazon S3 for further visualization Both of these components are part of the same code base which can be invoked as required thus reducing the overhead of maintaining multiple code bases The metadata ( eg table definition and schema) associated with the processed data is stored in the AWS Glue catalog to make the data in the batch view i mmediately available for queries by downstream analytics services in the batch layer Customer can use a Hadoop based stream processing application for analytics such as Spark Streaming on Amazon EMR Data Ingestion The data ingestion step comprises data ingestion by both the speed and batch layer usually in parallel For the batch layer historical data can be ingested at any desired interval For the speed layer the fastmoving data must be captured as it is produced and streamed for analysis The data is immutable time tagged or time ordered Some examples of high velocity data include log collection website clickstream logging social media stream and IoT device event data This fast da ta is captured and ingested as part of the speed layer using Amazon Kinesis Data Stream s which is the recommended service to ingest streaming data into AWS Kinesis offers key capabilities to cost effectively process and durably store streaming data at any scale Customers can use Amazon Kinesi s Agent a pre built application to collect and send data to ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 4 an Amazon Kinesis stream or use the Amazon Kinesis Producer Library (KP L) as part of a custom application For batch ingestions customers can use AWS Glue or AWS Database Migration Service to read from source systems such as RDBMS Data Warehouses and No SQL databases Data Transformation Data transformation is a key step in the Lambda architecture where the data is manipulated to suit downstream analysis The raw data ingested into the system in the previous step is usually not conducive to analytics as is The transformation step involves data cleansing that includes deduplication incomplete data management and attribute standardization It also involves changing the data structures if necessary usually into an OLAP model to facilitate easy querying of data Amazon Glue Amazon EMR and Amazon S3 form the set of services that allow users to transform their data Kinesis analytics enables users to get a view into their data stream in real time which makes downstream integration to batch data easy Let’s dive deeper into data transformation and look at the various steps involved: 1 The data ingested via the batch mechanism is put into an S3 staging location This data is a true copy of the source with little to no transformation 2 The AWS Glue Data Catalog is updated with the metadata of the new files The Glue Data Catalog can integrate with Amazon Athena Amazon EMR and forms a central metadata repository for the data 3 An AWS Glue job is used to transform the data and store it into a new S3 location for integration with realtime data AWS Glue provide s many canned transformations but if you need to write your own transformation logic AWS Glue also supports custom scripts 4 Users can 
easily query data on Amazon S3 using Amazon Athena This helps in making sure there are no unwanted data elements that get into the downstream bucket Getting a view of source data upfront allows development of more targeted metrics Designing analytical applications without a view of source data or getting a very late view into the source data could be risky Since Amazon Athena uses a schema onread approach instead of a schema onwrite it allows users to query data as is and eliminates the risk 5 Amazon Athena integrates with Amazon Quick Sight which allows users to build reports and dashboards on the data 6 For the real time ingestions the data transformation is applied on a window of data as it pass es through the steam and analyzed iteratively as it comes into the stream Amazon Kinesis Data Streams Kinesis Data Firehose and Kinesis Data Analytics allow you to ing est analyze and dump real time data into storage platforms like Amazon S3 for integration with batch data Kinesis Data Streams interfaces with Spark ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 5 streaming which is run on an Amazon EMR cluster for further manipulation Kinesis Data A nalytics allow s you to run analytical queries on the data stream in real time which allows you to get a view into the source data and make sure aligns with what is expected from the dataset By following the preceding steps you can create a scalable data transformatio n platform on AWS It is also important to note that Amazon Glue Amazon S3 Amazon Athena and Amazon Kinesis are serverless services By using these services in the transformation step of the Lambda architecture we can remove the overhead of maintaining servers and scaling them when the volume of data to transform increases Data Analysis In this phase you apply your query to analyze data in the three layers : • Batch Layer – The data source for batch analytics could be the raw master data set directly or the aggregated batch view from the serving layer The focus of this layer is to increase the accuracy of analysis by querying a comprehensive dataset across multiple or all dimensions and all available data sources • Speed Layer – The focus of the analysis in this layer is to analyze the incoming streaming data in near real time and to react immediately based on the analyzed result within accepted levels of accuracy • Serving Layer – In this layer the merged query is aimed at joining and analy zing the data from both the batch view from the batch layer and the incremental stream view from the speed layer This suggested architecture on the AWS platform includes Amazon Athena for the batch layer and Amazon Kinesis Data Analytics for the speed layer For the serving layer we recommend using Spark Streaming on an Amazon EMR cluster to consume the data fr om Amazon Kinesis Data S treams from the speed layer and using Spark SQL on an Amazon EMR cluster to consume data from Amazon S3 in the b atch layer Both of these components are part of the same code base which can be invoked as required thus reducing the overhead of maintaining multiple code bases The sample code that follows highlights using Spark SQL and Spark streaming to join data from both batch and speed layer s ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 6 Figure 2: Sample Code Visualization The final step in the Lambda architecture workflow is metrics visualization The visualization layer receives data from the batch stream and the combined 
serving layer The purpose of this layer is to provide a unified view of the analysis metrics that were derived from the data analysis step Batch Layer: The output of the analysis metrics in the batch layer is generated by Amazon Athena Amazon QuickSight integrates with Amazon Athena to generate dashboards that can be used for visualizations Customers also have a choice of using any other BI tool that supports JDBC/ODBC connectivity These tools can be connected to Amazon Athena to visualize batch layer metrics Stream Layer: Amazon Kinesis Data Analytics allows users to build custom analytical metrics that change based on real time streaming data Customers can use Kinesis Data A nalytics to build near realtime dashboards for metrics analyzed in the streaming layer Serving Layer: The combined dataset for batch and stream metrics are stored in the serving layer in an S3 bucket This unified view of the data is available for customers to download or connect to a reporting tool like Amazon QuickSight to create dashboards Security As part of the AWS Shared Responsibility M odel we recommend customers use the AWS security best practices and features to build a highly secure platform to run Lambda architecture on AWS Here are some points to keep in mind from a security perspective: • Encrypt end to end The architecture proposed here makes use of services that support encryption Make use of the native encryption features of the service whenever possible The server side encryption (SSE) is the least disruptive way to ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 7 encrypt your data on AWS and allows you to integrate encryption features into your existing workflows without a lot of code changes • Follow the rule of minimal access when working with policies Identity and access management (IAM) policies can be made very granular to allow customers to create restrictive resource level policies This concept is also exte nded to S3 bucket policies Moreover customers can use S3 object level tags to allow or deny actions at the object level Make use of these capabilities to ensure the resources in AWS are used securely • When working with AWS services make use of IAM role instead of embedding AWS credentials • Have an optimal networking architecture in place by carefully considering the security groups a ccess control lists (ACL) and routing tables that exist in the Amazon Virtual Private Cloud (Amazon VPC ) Resources that do not need access to the internet should not be in a public subnet Resources that require only outbound internet access should make use of the n etwork address translation (NAT) gateway to allow outbound traffic Communication to Amazon S3 from within th e Amazon VPC should make use of the VPC endpoint for Amazon S3 or a AWS private link Getting Started Refer to the AWS Big Data blog post Unite Real Time and Batch Analytics Using the Big Data Lambda Architecture Without Servers! 
which provides a walkthrough of how you can use AWS services to build an end toend Lambda architecture Conclusion The Lambda architecture described in this paper provides the building blocks of a unified architectural pattern that unifies stream (real time) and batch processing within a single code base Through the use of Spark Streaming and Spark SQL APIs you implement your business logic function once and then reuse the code in a batch ETL process as well as for real time streaming processes In this way you can quickly implement a real time layer to complement the batch processing one In the long term this archit ecture will reduce your maintenance overhead It will also reduce the risk for errors resulting from duplicate code bases Contributors The following individuals and organizations contributed to this document: • Rajeev Sriniv asan Solutions Architect Amazo n Web Services • Ujjwal Ratan S olutions Architect Amazon Web Services ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 8 Further Reading For additional information see the following : • AWS Whitepapers • Data Lakes and Analytics on AWS Document Revisions Date Description October 2018 Update May 2015 First publication Archived
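The sample code referenced in the Data Analysis section (Figure 2) joins the batch view with the stream from the speed layer inside a single code base. That figure is not reproduced here, so the following PySpark sketch is only an approximation under several assumptions: the bucket, stream name, Region, and column names are placeholders; stream records are assumed to be JSON documents; and the spark-streaming-kinesis-asl package for a Spark 2.x release (current when this paper was written) is assumed to be available on the Amazon EMR cluster.

```python
import json

from pyspark.sql import Row, SparkSession
from pyspark.streaming import StreamingContext
from pyspark.streaming.kinesis import InitialPositionInStream, KinesisUtils

# Placeholder names; substitute your own bucket, stream, and Region.
BATCH_PATH = "s3://example-bucket/batch-views/orders/"
SERVING_PATH = "s3://example-bucket/serving-views/orders/"
STREAM_NAME = "example-orders-stream"
REGION = "us-east-1"
ENDPOINT = "https://kinesis.us-east-1.amazonaws.com"

spark = SparkSession.builder.appName("lambda-serving-layer").getOrCreate()
ssc = StreamingContext(spark.sparkContext, batchDuration=60)

# Batch layer: load the batch view that the ETL job wrote to Amazon S3.
batch_df = spark.read.parquet(BATCH_PATH)
batch_df.createOrReplaceTempView("orders_batch")

# Speed layer: consume records directly from the Amazon Kinesis stream.
kinesis_stream = KinesisUtils.createStream(
    ssc, "lambda-serving-layer", STREAM_NAME, ENDPOINT, REGION,
    InitialPositionInStream.LATEST, checkpointInterval=60)


def merge_with_batch(rdd):
    """Serving layer: join each micro-batch with the batch view and persist it."""
    if rdd.isEmpty():
        return
    rows = rdd.map(lambda raw: Row(**json.loads(raw)))  # assumes JSON records
    stream_df = spark.createDataFrame(rows)
    stream_df.createOrReplaceTempView("orders_stream")
    merged = spark.sql("""
        SELECT b.customer_id, b.lifetime_total, s.order_amount
        FROM orders_batch b JOIN orders_stream s
        ON b.customer_id = s.customer_id
    """)
    merged.write.mode("append").parquet(SERVING_PATH)


kinesis_stream.foreachRDD(merge_with_batch)
ssc.start()
ssc.awaitTermination()
```

Because the batch read and the per-micro-batch join live in one application, the same business logic serves both the batch and speed layers, which is the maintenance benefit highlighted in the Conclusion.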
Overview of Oracle E-Business Suite on AWS
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtml Overview of Oracle E Business Suite on AWS First Published May 2017 Updated September 10 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 2 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 3 Contents Introduction 5 AWS overview 5 Amazon Web Services concepts 6 Region s and Availability Zones 6 Elastic Load Balancing 7 Amazon Elastic Block Store (Amazon EBS) 8 Amazon Machine Image (AMI) 8 Amazon S imple Storage Service (Amazon S3) 8 Amazon Route 53 8 Amazon Virtual Private Cloud (Amazon VPC) 8 Amazon Elastic File System (Amazon EFS) 9 AWS security and compliance 9 Oracle E Business Suite on AWS 9 Oracle E Business Suite components 10 Oracle E Business Suite architecture on AWS 11 Benefits of Oracle E Business Suite on AWS 15 Oracle E Business Suite on AWS use cases 18 Conclusion 18 Contri butors 18 Further reading 19 Document versions 19 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overview of Oracle E Business Suite on AWS 4 Abstract Oracle E Business Suite is a popular suite of integrated business applications for automating enterprise wide processes like customer relationship management financial management and supply chain management Th is is the first whitepaper in a series focused on Oracle E Business Suite on Amazon Web Services (AWS) It provides an architectural overview for running Oracle E Business Suite 122 on AWS The whitepaper series is intended for customers and partners who want to learn about the benefits and options for running Oracle E Busines s Suite on AWS Subsequent whitepapers in this series will discuss advanced topics and outline best practices for high availability security scalability performance migration disaster recovery and management of Oracle E Business Suite systems on AWS This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ overvieworacleebusinesssuite/overvieworacle ebusinesssuitehtmlAmazon Web Services Overv iew of Oracle E Business Suite on AWS 5 Introduction Almost all large enterprises use 
enterprise resource planning (ERP) systems for managing and optimizing enterprise-wide business processes. Cloud adoption among enterprises is growing rapidly, with many adopting a cloud-first strategy for new projects and migrating their existing systems from on premises to AWS. ERP systems such as Oracle E-Business Suite are mission-critical for most enterprises and figure prominently in considerations for planning an enterprise cloud migration. This whitepaper provides a brief overview of Oracle E-Business Suite and a reference architecture for deploying Oracle E-Business Suite on AWS. It also discusses the benefits of running Oracle E-Business Suite on AWS and various use cases.

AWS overview
AWS provides on-demand computing resources and services in the cloud with pay-as-you-go pricing. As of the date of this publication, AWS serves over a million active customers in more than 190 countries and is available in 25 AWS Regions worldwide. You can run a server on AWS and log in, configure, secure, and operate it just as you would operate a server in your own data center. Using AWS resources for your compute needs is like purchasing electricity from a power company instead of running your own generator, and it provides many of the same benefits:
• The capacity you get exactly matches your needs
• You pay only for what you use
• Economies of scale result in lower costs
• The service is provided by a vendor who is experienced in running large-scale compute and network systems

Amazon Web Services concepts
This section describes the AWS infrastructure and services that are part of the reference architecture for running Oracle E-Business Suite on AWS.

Regions and Availability Zones
Each Region is a separate geographic area, isolated from the other Regions. Regions provide you the ability to place resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, and data in multiple locations. Resources aren't replicated across Regions unless you do so specifically. An AWS account provides multiple Regions so you can launch your application in locations that meet your requirements. For example, you might want to launch your application in Europe to be closer to your European customers or to meet legal requirements. Each Region has multiple isolated locations known as Availability Zones. Each Availability Zone runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. Common points of failure, such as generators and cooling equipment, are not shared across Availability Zones. Because Availability Zones are physically separate, even extremely uncommon disasters such as fires, tornadoes, or flooding would only affect a single Availability Zone. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. The following figure illustrates the relationship between Regions and Availability Zones.

Relationship between AWS Regions and Availability Zones

The following figure shows the Regions and the number of Availability Zones in each
Region provided by an AWS account at the time of this publication. For the most current list of Regions and Availability Zones, see Global Infrastructure.

Note: You can't describe or access additional Regions from the AWS GovCloud (US) Region or China (Beijing) Region.

Map of AWS Regions and Availability Zones

Amazon Elastic Compute Cloud (Amazon EC2)
Amazon EC2 is a web service that provides resizable compute capacity in the cloud, billed by the hour or second (minimum of 60 seconds). You can run virtual machines (EC2 instances) ranging in size from one vCPU and 1 GB of memory to 448 vCPUs and 6 TB of memory. You have a choice of operating systems, including Windows Server 2008/2012/2016/2019, Oracle Linux, Red Hat Enterprise Linux, and SUSE Linux.

Elastic Load Balancing
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances, containers, and IP addresses in one or more Availability Zones on the AWS Cloud. It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to distribute application traffic. Elastic Load Balancing can be used for load balancing web server traffic.

Amazon Elastic Block Store (Amazon EBS)
Amazon EBS provides persistent block-level storage volumes for use with EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. EBS volumes offer the consistent and low-latency performance needed to run your workloads.

Amazon Machine Image (AMI)
An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance. Your AMIs are your unit of deployment. Amazon EC2 uses Amazon EBS and Amazon Simple Storage Service (Amazon S3) to provide reliable, scalable storage of your AMIs so they can boot when you need them.

Amazon Simple Storage Service (Amazon S3)
Amazon S3 provides developers and IT teams with secure, durable, highly scalable object storage. Amazon S3 is easy to use. It provides a simple web services interface you can use to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you pay only for the storage you actually use. There is no minimum fee and no setup cost.

Amazon Route 53
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to internet applications by translating names like www.example.com into numeric IP addresses.

Amazon Virtual Private Cloud (Amazon VPC)
Amazon VPC enables you to provision a logically isolated section of the AWS Cloud in which you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own private IP address range, creation of subnets, and configuration of route tables and network gateways.
You can use multiple layers of security, including security groups and network access control lists, to help control access to EC2 instances in each subnet. Additionally, you can create a hardware virtual private network (VPN) connection between your corporate data center and your VPC and use the AWS Cloud as an extension of your corporate data center.

Amazon Elastic File System (Amazon EFS)
Amazon EFS is a file storage service for EC2 instances. Amazon EFS supports the NFSv4 protocol, so the applications and tools that you use today work seamlessly with Amazon EFS. Multiple EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for workloads and applications running on more than one instance. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need when they need it.

AWS security and compliance
The AWS Cloud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today. Security on AWS is very similar to security in your on-premises data center, but without the costs and complexities involved in protecting facilities and hardware. AWS provides a secure global infrastructure, plus a range of features that you can use to help secure your systems and data in the cloud. To learn more, see AWS Cloud Security. AWS compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud. AWS engages with external certifying bodies and independent auditors to provide customers with extensive information regarding the policies, processes, and controls established and operated by AWS. To learn more, see AWS Compliance.

Oracle E-Business Suite on AWS
This section covers the major components of Oracle E-Business Suite and its architecture on AWS. It is important to have a good understanding of the Oracle E-Business Suite architecture and its major components to successfully deploy and configure it on AWS.

Oracle E-Business Suite components
Oracle E-Business Suite has a three-tier architecture consisting of client, application, and database (DB) tiers.

Oracle E-Business Suite three-tier architecture

The client tier contains the client user interface, which is provided through HTML or, for forms-based applications, Java applets in a web browser. The application tier consists of Oracle Fusion Middleware (Oracle HTTP Server and Oracle WebLogic Server) and the concurrent processing server. The Fusion Middleware server has HTTP, Java, and Forms services that process the business logic and talk to the database tier. The Oracle HTTP Server (OHS) accepts incoming HTTP requests from clients and routes the requests to the Oracle WebLogic Server (WLS), which hosts the business logic and other server-side components. The HTTP services, Forms services, and concurrent processing server can be installed on multiple application tier nodes and load balanced. The database tier consists of an Oracle database that stores the data for Oracle E-Business Suite. This tier has the Oracle database run items and the Oracle database files that physically store the tables, indexes, and other database objects in the
system. See the Oracle E-Business Suite Concepts guide for a deeper dive on the Oracle E-Business Suite architecture components.

Oracle E-Business Suite architecture on AWS
The following reference diagram illustrates how Oracle E-Business Suite can be deployed on AWS. The application and database tiers are deployed across multiple Availability Zones for high availability.

Sample Oracle E-Business Suite deployment on AWS

User requests from the client tier are routed using Amazon Route 53 DNS to the Oracle E-Business Suite application servers deployed on EC2 instances through Application Load Balancer. The OHS and the Oracle WLS are deployed on each application tier instance. The OHS accepts the requests from Application Load Balancer and routes them to the Oracle WLS. The Oracle WLS runs the appropriate business logic and communicates with the Oracle database. The various modules and functions within Oracle E-Business Suite share a common data model; there is only one Oracle database instance for multiple application tier nodes.

Load balancing and high availability
Application Load Balancer is used to distribute incoming traffic across multiple application tier instances deployed across multiple Availability Zones. You can add and remove application tier instances from your load balancer as your needs change without disrupting the overall flow of information. Application Load Balancer ensures that only healthy instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances. If an application tier instance fails, Application Load Balancer automatically reroutes the traffic to the remaining running application tier instances. In the unlikely event of an Availability Zone failure, user traffic is routed to the remaining application tier instances in the other Availability Zone. Other third-party load balancers, like the F5 BIG-IP, are available on AWS Marketplace and can be used as well. See My Oracle Support document 1375686.1 for more details on using load balancers with Oracle E-Business Suite (sign-in required). The database tier is deployed on Oracle running on two EC2 instances in different Availability Zones. Oracle Data Guard replication (maximum protection or maximum availability mode) is configured between the primary database in one Availability Zone and a standby database in another Availability Zone. In case of failure of the primary database, the standby database is promoted as the primary, and the application tier instances will connect to it. For more details on deploying Oracle Database on AWS, see the Oracle Database on AWS Quick Start.

Scalability
When using AWS, you can scale your application easily due to the elastic nature of the cloud. You can scale up the Oracle E-Business Suite application tier and database tier instances simply by changing the instance type to a larger instance type. For example, you can start with an r5.large instance with two vCPUs and 16 GiB of RAM and scale up all the way to an x1e.32xlarge instance with 128 vCPUs and 3,904 GiB of RAM. After selecting a new instance type, only a restart is required for the changes to take effect. Typically, the resizing operation is completed in a few minutes; the EBS volumes remain attached to the instances, and no data migration is required.
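As an illustration, the scale-up operation can also be scripted with the AWS CLI. The following is a minimal sketch, assuming a stop-and-start resize is acceptable for the tier in question; the instance ID and target instance type are placeholders, not values from this architecture.

# Stop the instance to be resized (EBS volumes remain attached)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Change the instance type, for example to a larger memory-optimized size
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type Value=r5.4xlarge

# Start the instance with the new size
aws ec2 start-instances --instance-ids i-0123456789abcdef0

For the database tier, the same pattern applies; the Oracle Data Guard standby described earlier can help reduce the impact of the restart window.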
You can scale out the application tier by adding and configuring more application tier instances when required. You can launch a new EC2 instance in a few minutes. However, additional work is required to ensure that the AutoConfig files are correct and the new application tier instance is correctly configured and registered with the database. Although it might be possible to automate scaling out the application tier using scripting, this requires an additional technical investment. A simpler alternative might be to use standby EC2 instances, as explained in the next section.

Standby EC2 instances
To meet extra capacity requirements, additional application tier instances of Oracle E-Business Suite can be preinstalled and configured on EC2 instances. These standby instances can be shut down until extra capacity is required. Charges are not incurred when EC2 instances are shut down; only EBS storage charges are incurred. At the time of this publication, EBS General Purpose (gp2) volumes are priced at $0.10 per GB per month in the US East (Ohio) Region. Therefore, for an EC2 instance with 120 GB of volume space, the storage charge is only $12 per month. These preinstalled standby instances provide you the flexibility to meet additional capacity needs as and when required. In this model, you need to ensure that any configuration changes, patching, or maintenance activities are also applied to the standby node to avoid inconsistencies.

Storage options and backup
AWS offers a complete range of cloud storage services to support both application and archival compliance requirements. You can choose from object, file, block, and archival services. The following table lists some of the storage options and how they can be used when deploying Oracle E-Business Suite on AWS.

Table 1 – Storage options and how they can be used
• Amazon EBS – gp2/gp3 volumes: SSD-based block storage with up to 16,000 input/output operations per second (IOPS) per volume. Use case: boot volumes, operating system and software binaries, Oracle database archive logs.
• Amazon EBS – io1/io2/io2 Block Express volumes: SSD-based block storage with up to 64,000 IOPS per volume; multiple volumes can be striped together for higher IOPS; by attaching io2 volumes to r5b instance types, you can achieve up to 256,000 IOPS per volume. Use case: storage for the database tier (ASM disks), Oracle data files, redo logs.
• Amazon EFS: highly durable, NFSv4.1-compatible file system. Use case: PCP out and log files, media staging.
• Amazon S3: object store with 99.999999999% durability. Use case: backups, archives, media staging.
• Amazon Glacier: extremely low-cost and highly durable storage for long-term backup and archival. Use case: long-term backup and archival.
• Amazon EC2 instance storage: ephemeral or temporary storage; data persists only for the lifetime of the instance. Use case: swap, temporary files, reports cache, web server cache.
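To make the table concrete, the following AWS CLI sketch provisions one General Purpose and one Provisioned IOPS volume and attaches a volume to an instance. The sizes, IOPS values, Availability Zone, and resource IDs are assumptions for illustration only; size them from your own workload data.

# General Purpose (gp3) volume, for example for binaries and application tier file systems
aws ec2 create-volume --volume-type gp3 --size 200 --iops 3000 --availability-zone us-east-1a

# Provisioned IOPS (io2) volume, for example for Oracle ASM disk groups holding data files and redo logs
aws ec2 create-volume --volume-type io2 --size 500 --iops 16000 --availability-zone us-east-1a

# Attach a created volume to the database tier instance
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf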
The application and database servers use EBS volumes for persistent block storage. Amazon EBS has two types of solid-state drive (SSD)-backed volumes: Provisioned IOPS SSD (io1, io2, io2 Block Express) for latency-sensitive database and application workloads, and General Purpose SSD (gp2, gp3), which balances price and performance for a wide variety of transactional workloads, including development and test environments and boot volumes. General Purpose SSD volumes provide a good balance between price and performance and can be used for boot volumes, the Oracle E-Business Suite application tier file system, and logs. They are designed to offer single-digit millisecond latencies and deliver a consistent baseline performance of 3 IOPS/GB for gp2, and 3,000 IOPS regardless of volume size for gp3, to a maximum of 16,000 IOPS per volume. Provisioned IOPS volumes are the highest-performance EBS storage option and should be used along with Oracle Automatic Storage Management (ASM) for storing the Oracle database data and log files. You can provision up to 64,000 IOPS per io1/io2 volume and 256,000 IOPS per io2 Block Express volume. These volumes are designed to achieve single-digit millisecond latencies and to deliver the provisioned IOPS 99.9% of the time for io1 and 99.999% of the time for io2 and io2 Block Express. You can use Oracle ASM to stripe the data across multiple EBS volumes for higher IOPS and to scale the database storage. To maximize the performance of EBS volumes, use EBS-optimized EC2 instances and instances based on the AWS Nitro System.

EC2 instances have temporary SSD-based block storage called instance storage. Instance storage persists only for the lifetime of the instance and should not be used to store valuable long-term data. Instance storage can be used as swap space and for storing temporary files like the report cache or web server cache. If you are using Oracle Linux as the operating system for the database server, you can use the instance storage for the Oracle Database Smart Flash Cache and improve database performance.

Parallel Concurrent Processing (PCP) allows you to distribute concurrent managers across multiple nodes so that you can use the available capacity and provide failover. You can use a shared file system such as Amazon EFS for storing the log and out files while implementing PCP in Oracle E-Business Suite. However, this configuration may not be ideal for environments with an extremely large number of log and out files. Oracle E-Business Suite Release 12.2 introduced a new environment variable, APPLLDM, to specify whether log and out files are stored in a single directory for all Oracle E-Business Suite products or in one subdirectory per product. APPLLDM can be set to 'single' or 'product'; 'product' avoids a high concentration of log and out files in a single directory and may avoid performance issues.

Amazon S3 provides low-cost, scalable, and highly durable storage and should be used for storing backups. You can use Oracle Recovery Manager (RMAN) to back up your database, then copy the data to Amazon S3. Alternatively, you can use the Oracle Secure Backup (OSB) Cloud Module to back up your database. The OSB Cloud Module is fully integrated with RMAN features and functionality, and the backups are sent directly to Amazon S3 for storage.
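For the RMAN-and-copy pattern just described, a minimal sketch of the copy step using the AWS CLI might look like the following. The local backup path and bucket name are assumptions; in practice you would run this as part of the RMAN backup job and add lifecycle rules on the bucket for archival to Amazon Glacier.

# Copy the latest RMAN backup pieces to Amazon S3 (paths and bucket are placeholders)
aws s3 sync /u01/backup/rman/ s3://example-ebs-db-backups/rman/ --storage-class STANDARD_IA

# Verify what was uploaded
aws s3 ls s3://example-ebs-db-backups/rman/ --recursive --human-readable --summarize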
Benefits of Oracle E-Business Suite on AWS
The following sections discuss some of the key benefits of running Oracle E-Business Suite on AWS.

Agility and speed
Traditional deployment involves a long procurement process in which each stage is time-intensive and requires large capital outlay and multiple approvals. With AWS, you can provision new infrastructure and Oracle E-Business Suite environments in minutes, compared to waiting weeks or months to procure and deploy traditional infrastructure.

Lower total cost of ownership
In an on-premises environment, you typically pay hardware support costs, virtualization licensing and support, data center costs, and so on. You can eliminate or reduce all of these costs by moving to AWS. You benefit from the economies of scale and efficiencies provided by AWS and pay only for the compute, storage, and other resources you use.

Cost savings for non-production environments
You can shut down your non-production environments when you are not using them and save costs. For example, if you are using a development environment for only 40 hours a week (eight hours a day, five days a week), you can shut down the environment when it's not in use. You pay only for 40 hours of Amazon EC2 compute charges instead of 168 hours (24 hours a day, seven days a week) for an on-premises environment running all the time; this can result in a saving of roughly 75% for EC2 compute charges.

Replace capital expenditure (CapEx) with operating expenditure (OpEx)
You can start an Oracle E-Business Suite implementation or project on AWS without any upfront cost or commitment for compute, storage, or network infrastructure.

Unlimited environments
In an on-premises environment, you usually have a limited set of environments to work with; provisioning additional environments takes a long time or might not be possible at all. You do not face these restrictions when using AWS; you can create virtually any number of new environments in minutes as required. You can have a different environment for each major project so that each team can work independently with the resources they need without interfering with other teams; the teams can then converge at a common integration environment when they are ready. You can shut down these environments when the project finishes and stop paying for them.

Have Moore's Law work for you instead of against you
Moore's Law refers to the observation that the number of transistors on a microchip doubles every two years. In an on-premises environment, you end up owning hardware that depreciates in value every year. You are locked into the price and capacity of the hardware after it is acquired, plus you have ongoing hardware support costs. With AWS, you can switch your underlying instances to the faster, more powerful next-generation AWS instance types as they become available.

Right-size anytime
Customers often oversize
environments for initial phases and are then unable to cope with growth in later phases. With AWS, you can scale the usage up or down at any time. You pay only for the computing capacity you use, for the duration you use it. Instance sizes can be changed in minutes through the AWS Management Console, the AWS Application Programming Interface (API), or the Command Line Interface (CLI). Assess the resource usage on the current system and launch with appropriately sized instances for the enterprise resource planning (ERP) environment to reduce cost overhead.

Low-cost disaster recovery
You can build extremely low-cost standby disaster recovery environments for your existing deployments and incur costs only for the duration of the outage. CloudEndure Disaster Recovery for Oracle brings significant savings on disaster recovery total cost of ownership (TCO) compared to traditional disaster recovery solutions.

Ability to test application performance
Although performance testing is recommended prior to any major change to an Oracle E-Business Suite environment, most customers only performance test their Oracle E-Business Suite application during the initial launch, on the yet-to-be-deployed production hardware. Later releases are usually never performance tested due to the expense and lack of environments required for performance testing. With AWS, you can minimize the risk of discovering performance issues later in production. An AWS Cloud environment can be created easily and quickly, just for the duration of the performance test, and only used when needed. Again, you are charged only for the hours the environment is used.

No end of life for hardware or platform
All hardware platforms have end-of-life dates, at which point the hardware is no longer supported and you are forced to buy new hardware again. In the AWS Cloud, you can simply upgrade the platform instances to new AWS instance types in a single click, at no cost for the upgrade.

Oracle E-Business Suite on AWS use cases
Oracle E-Business Suite customers are using AWS for a variety of use cases, including the following environments:
• Migration of existing Oracle E-Business Suite production environments
• Implementation of new Oracle E-Business Suite production environments
• Implementing disaster recovery environments
• Running Oracle E-Business Suite development, test, demonstration, proof of concept (POC), and training environments
• Temporary environments for migrations and testing upgrades
• Temporary environments for performance testing

Conclusion
AWS can be an extremely cost-effective, secure, scalable, high-performing, and flexible option for deploying Oracle E-Business Suite. This whitepaper outlines some of the benefits and use cases for deploying Oracle E-Business Suite on AWS. If you are looking for migration-specific guidance, see the Migrating Oracle E-Business Suite on AWS whitepaper. Subsequent whitepapers in this series will cover advanced topics and outline best practices for high availability, security, scalability, performance, disaster recovery, and management of Oracle E-Business Suite systems on AWS.

Contributors
Contributors to this document include:
• Ejaz Sayyed, Sr. Partner Solutions Architect, Amazon Web Services
• Praveen Katari, Partner Management Solutions Architect, Amazon Web Services
• Ashok Sundaram, Principal Solutions Architect, Amazon Web Services

Further reading
For additional information, see:
• AWS Whitepapers & Guides
• AWS Cloud Security
• AWS Compliance
• Oracle R12.2 Documentation
• Using Load Balancers with Oracle EBS (sign in to Oracle required)
• Oracle Database on AWS
• AWS EBS-Optimized instances
• Oracle APPLLDM document (sign in to Oracle required)

Document versions
• September 10, 2021: Updated logos, new EBS storage and EC2 instance types, performance metrics
• May 2017: First publication
General
Move_Amazon_RDS_MySQL_Databases_to_Amazon_VPC_using_Amazon_EC2_ClassicLink_and_Read_Replicas
Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas
July 2017

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents
Introduction; Solution Overview (ClassicLink and EC2-Classic; RDS Read Replicas; RDS Snapshots); Migration Topology; Migration Steps (Step 1: Enable ClassicLink for the Target VPC; Step 2: Set up a Proxy Server on an EC2-Classic Instance; Step 3: Use ClassicLink Between the Proxy Server and Target VPC; Step 4: Configure the DB Instance (EC2-Classic); Step 5: Create a User on the DB Instance (EC2-Classic); Step 6: Create a Temporary Read Replica (EC2-Classic); Step 7: Enable Backups on the Read Replica (EC2-Classic); Step 8: Stop Replication on Read Replica (EC2-Classic); Step 9: Create Snapshot from the Read Replica (EC2-Classic); Step 10: Share the Snapshot (Optional); Step 11: Restore the Snapshot in the Target VPC; Step 12: Enable Backups on VPC RDS DB Instance; Step 13: Set up Replication Between VPC and EC2-Classic DB Instances; Step 14: Switch to the VPC RDS DB Instance; Step 15: Take a Snapshot of the VPC RDS DB Instance; Step 16: Change the VPC DB Instance to be 'Privately' Accessible (Optional); Step 17: Move the VPC DB Instance into Private Subnets (Optional)); Alternative Approaches (AWS Database Migration Service (DMS); Changing the VPC Subnet for a DB Instance); Conclusion; Contributors; Further Reading; Appendix A: Set Up Proxy Server in Classic

Abstract
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. If your Amazon Web Services (AWS) account was created before 2013, chances are you might be running Amazon RDS MySQL in an Amazon Elastic Compute Cloud (EC2) Classic environment, and you are looking to migrate Amazon RDS into an Amazon Virtual Private Cloud (Amazon VPC) environment. This whitepaper outlines the requirements and detailed steps needed to migrate Amazon RDS MySQL databases from EC2-Classic to EC2-VPC with minimal downtime, using RDS MySQL Read Replicas and ClassicLink.

Introduction
There are two Amazon Elastic Compute Cloud (EC2) platforms that host Amazon Relational Database Service (RDS) database (DB) instances: EC2-VPC and EC2-Classic. On the EC2-Classic platform, your instances run in a single, flat network that you share with other customers. On the EC2-VPC platform, your
instances run in a virtual private cloud (VPC) that's logically isolated to your AWS account. This logical network isolation closely resembles a traditional network you might operate in your own data center, plus it has the benefits of the AWS scalable infrastructure. If you're running RDS DB instances in an EC2-Classic environment, you might be considering migrating your databases to Amazon VPC to take advantage of its features and capabilities. However, migrating databases across environments can involve complex backup and restore operations with longer downtimes that you might not be able to tolerate in your production environment. This whitepaper focuses on how to use RDS read replica and snapshot capabilities to migrate an RDS MySQL DB instance in EC2-Classic to a VPC over ClassicLink. By leveraging RDS MySQL replication with ClassicLink, you can migrate your databases easily and securely with minimal downtime. Alternative methods are also discussed.

Solution Overview
This solution uses EC2 ClassicLink to enable an RDS DB instance in EC2-Classic (that is, outside a VPC) to communicate with a VPC. First, a read replica of the DB instance in EC2-Classic (the source DB instance) is created. Then a snapshot of the read replica is taken and used to set up a read replica in the VPC. A ClassicLink proxy server enables communication between the source DB instance in EC2-Classic and the target read replica in the VPC. Once the target read replica in the VPC has caught up with the source DB instance in EC2-Classic, updates against the source are stopped and the target read replica is promoted. At this point, the connection details in any application that is reading or writing to the database are updated. The source database remains fully operational during the migration, minimizing downtime to applications. Each of these components is explained in further detail as follows.

ClassicLink and EC2-Classic
EC2 ClassicLink allows you to connect EC2-Classic instances to a VPC within the same AWS Region. This allows you to associate VPC security groups with the EC2-Classic instances, enabling communication between EC2-Classic instances and VPC instances using private IP addresses. The association between VPC security groups and the EC2-Classic instance removes the need to use public IP addresses or Elastic IP addresses to enable communication between these platforms. ClassicLink is available to all users with accounts that support the EC2-Classic platform and can be used with any EC2-Classic instance. Using ClassicLink and private IP address space for migration ensures all communication and data migration happens within the AWS network, without requiring a public IP address for your DB instance or an Internet Gateway (IGW) to be set up for the VPC.

RDS Read Replicas
You can create one or more read replicas of a given source RDS MySQL DB instance and serve high-volume application read traffic from multiple copies of your data. Amazon RDS uses the MySQL engine's native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance. Read replicas can also be promoted so that they become standalone DB instances.
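Read replica creation and promotion can also be driven from the AWS CLI rather than the console. The following is a minimal sketch; the instance identifiers are placeholders that roughly match the examples used later in this paper.

Prompt> aws rds create-db-instance-read-replica --db-instance-identifier classicrdsreadreplica1 --source-db-instance-identifier classicrdsinstance

# Promote a read replica to a standalone DB instance (the general RDS capability noted above)
Prompt> aws rds promote-read-replica --db-instance-identifier classicrdsreadreplica1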
RDS Snapshots
The ClassicLink solution relies on Amazon RDS snapshots to initially create the target MySQL DB instance in your VPC. Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. When you create a DB snapshot, you need to identify which DB instance you are going to back up and then give your DB snapshot a name so you can restore from it later. Creating this DB snapshot on a single Availability Zone (AZ) DB instance results in a brief I/O suspension that typically lasts no more than a few minutes. Multi-AZ DB instances are not affected by this I/O suspension, since the backup is taken on the standby instance.

Migration Topology
ClassicLink allows you to link your EC2-Classic DB instance to a VPC in your account within the same Region. After you've linked an EC2-Classic DB instance, it can communicate with instances in your VPC using their private IP addresses. However, instances in the VPC cannot directly access the AWS services provisioned by the EC2-Classic platform using ClassicLink. So, to migrate an RDS database from EC2-Classic to VPC, you must set up a proxy server. The proxy server uses ClassicLink to link the source DB instance in EC2-Classic to the VPC. Port forwarding on the proxy server allows communication between the source DB instance in EC2-Classic and the target DB instance in the VPC. This topology is illustrated in Figure 1.

Figure 1: Topology for migration in the same account

If you're moving your RDS database to a different account, you will need to set up a peering connection between the local VPC and the target VPC in the remote account. This topology is illustrated in Figure 2.

Figure 2: Topology for migration to a different account

Figure 3 illustrates how the snapshot of the DB instance is used to set up a read replica in the target VPC.

Figure 3: Creating a read replica snapshot and restoring in VPC

A ClassicLink proxy enables communication between the source RDS DB instance in EC2-Classic and the target VPC replica, as illustrated in Figure 4.

Figure 4: Setting up replication between the Classic and VPC read replica

Figure 5 illustrates how updates against the source DB instance are stopped and the VPC replica is promoted to master status.

Figure 5: Cutting over application to the VPC RDS DB instance

Migration Steps
This section lists the steps you need to perform to migrate your RDS DB instance from EC2-Classic to VPC using ClassicLink.

Step 1: Enable ClassicLink for the Target VPC
In the Amazon VPC console, from the VPC Dashboard, select the VPC for which you want to enable ClassicLink, select Actions in the drop-down list, and select Enable ClassicLink. Then choose Yes, Enable, as shown below:

Figure 6: Enabling ClassicLink
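If you prefer to script Step 1 (and, later, Step 3) instead of using the console, the EC2 API exposes the same actions. The following AWS CLI sketch is illustrative; the VPC ID, instance ID, and security group ID are placeholders.

# Step 1: enable ClassicLink on the target VPC
Prompt> aws ec2 enable-vpc-classic-link --vpc-id vpc-0123456789abcdef0

# Step 3: link the EC2-Classic proxy instance to the VPC with a VPC security group
Prompt> aws ec2 attach-classic-link-vpc --instance-id i-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0 --groups sg-0123456789abcdef0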
Step 2: Set up a Proxy Server on an EC2-Classic Instance
Install a proxy server on an EC2-Classic instance. The proxy server forwards traffic to and from the RDS instance in EC2-Classic. You can use an open-source package such as NGINX for port forwarding. For detailed information on setting up NGINX, see Appendix A. Set up appropriate security groups so the proxy server can communicate with the RDS instance in EC2-Classic. In the following example, the proxy server and the RDS instance in EC2-Classic are members of the same security group, which allows traffic within the security group.

Figure 7: Security group setup

Step 3: Use ClassicLink Between the Proxy Server and Target VPC
In the Amazon EC2 console, from the EC2 Instances Dashboard, select the EC2-Classic instance running the proxy server and choose ClassicLink on the Actions drop-down list to create a ClassicLink connection with the target VPC. Select the appropriate security group so that the proxy server can communicate with the RDS DB instance in your VPC. In the example in Figure 8, SG A1 is selected. Next, choose Link to VPC.

Figure 8: ClassicLink connection to VPC security group

Step 4: Configure the DB Instance (EC2-Classic)
In the Amazon RDS console, from the RDS Dashboard, under Parameter Groups, select the parameter group associated with the RDS DB instance and use Edit Parameters to ensure the innodb_flush_log_at_trx_commit parameter is set to 1 (the default). This ensures ACID compliance; for more information, see the MySQL documentation for innodb_flush_log_at_trx_commit. This step is necessary only if the value has been changed from the default of 1.

Figure 9: Parameter group values on a Classic DB instance

Step 5: Create a User on the DB Instance (EC2-Classic)
Connect to the RDS DB instance running in EC2-Classic via the mysql client to create a user and grant permissions to replicate data.

Prompt> mysql -h classicrdsinstance.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u hhar -p
MySQL [(none)]> create user replicationuser identified by 'classictoVPC123';
Query OK, 0 rows affected (0.01 sec)
MySQL [(none)]> grant replication slave on *.* to replicationuser;
Query OK, 0 rows affected (0.01 sec)

Step 6: Create a Temporary Read Replica (EC2-Classic)
Use a temporary read replica to create a snapshot and ensure that you have the correct information to set up replication on the new VPC DB instance. In the Amazon RDS console, from the RDS Dashboard, under Instances, select the EC2-Classic DB instance and select Create Read Replica DB Instance. Specify your replication instance information.

Figure 10: Classic read replica instance properties

You then need to specify the network and security properties for the replica.

Figure 11: Classic read replica network and security properties

Step 7: Enable Backups on the Read Replica (EC2-Classic)
From the RDS Dashboard, under Instances, select the read replica in EC2-Classic and use Modify DB Instances to set the Backup Retention Period to a nonzero number of days. Setting this parameter to a positive number enables automated backups.

Figure 12: Enabling backups
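Step 7 can also be performed with the AWS CLI by setting a non-zero backup retention period on the replica; the identifier and retention value below are placeholders.

Prompt> aws rds modify-db-instance --db-instance-identifier classicrdsreadreplica1 --backup-retention-period 7 --apply-immediately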
Step 8: Stop Replication on Read Replica (EC2-Classic)
When you are ready to switch over, connect to the RDS replica in EC2-Classic via the mysql client and issue the mysql.rds_stop_replication command.

Prompt> mysql -h classicrdsreadreplica1.chd3laahf8xl.us-east-1.rds.amazonaws.com -P 3306 -u hhar -p
MySQL [(none)]> call mysql.rds_stop_replication;
+---------------------------+
| Message                   |
+---------------------------+
| Slave is down or disabled |
+---------------------------+
1 row in set (1.02 sec)
Query OK, 0 rows affected (1.02 sec)
MySQL [(none)]>

Figure 13: Confirmation of replica status on the console

Using the following show slave status command, save the replication status data in a local file. You will need it later when setting up replication on the DB instance in the VPC.

Prompt> mysql -h classicrdsreadreplica1.chd3laahf8xl.us-east-1.rds.amazonaws.com -P 3306 -u hhar -p -e "show slave status \G" > readreplicastatus.txt

Step 9: Create Snapshot from the Read Replica (EC2-Classic)
From the RDS Dashboard, under Instances, select the read replica that you just stopped and use Take Snapshot to create a DB snapshot.

Figure 14: Taking a snapshot of the read replica

Step 10: Share the Snapshot (Optional)
If you are migrating across accounts, you need to share the snapshot. From the Amazon RDS console, under Snapshots, select the recently created read replica snapshot and use Share Snapshot to make the snapshot available across accounts. This step is not required if the target VPC is in the same account. Log in to the new account after this step is finished.

Figure 15: Sharing a snapshot between accounts

If you are migrating to a different account, you need to set up a peering connection between the local VPC and the target VPC in the remote account. You will have to allow access to the security group that you used when you enabled the ClassicLink between the proxy server and the VPC.

Figure 16: Creating a VPC peering connection

Figure 17: Enabling ClassicLink over a peering connection

Figure 18: ClassicLink settings for peering

Step 11: Restore the Snapshot in the Target VPC
From the Amazon RDS console, under Snapshots, select the Classic read replica snapshot and use Restore Snapshot to restore it. You should also select Multi-AZ Deployment at this time.

Figure 19: Restoring snapshot in target VPC

Note: We highly recommend that you enable the Multi-AZ Deployment option during initial creation of the new VPC DB instance. If you bypass this step and convert to Multi-AZ after switching your application over to the VPC DB instance, you can experience a significant performance impact, especially for write-intensive database workloads.

Under Networking & Security, set Publicly Accessible to Yes. Next, select the target VPC and appropriate subnet groups to ensure connectivity from the VPC RDS DB instance to the Classic proxy server.

Figure 20: Setting VPC and subnet group on VPC DB instance

Figure 21: Security group settings for cross-account migration
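Steps 9 through 11 can also be scripted. The following AWS CLI sketch mirrors the console actions above; the snapshot name, target account number, DB subnet group, and instance identifiers are placeholders.

# Step 9: create a snapshot from the stopped EC2-Classic read replica
Prompt> aws rds create-db-snapshot --db-instance-identifier classicrdsreadreplica1 --db-snapshot-identifier classic-replica-snapshot

# Step 10 (optional, cross-account): share the snapshot with the target account
Prompt> aws rds modify-db-snapshot-attribute --db-snapshot-identifier classic-replica-snapshot --attribute-name restore --values-to-add 210987654321

# Step 11: restore the snapshot into the target VPC as a Multi-AZ, publicly accessible instance
Prompt> aws rds restore-db-instance-from-db-snapshot --db-instance-identifier vpcrdsinstance --db-snapshot-identifier classic-replica-snapshot --db-subnet-group-name vpc-db-subnet-group --multi-az --publicly-accessible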
Step 12: Enable Backups on VPC RDS DB Instance
By default, backups are not enabled on read replicas. From the Amazon RDS console, under Instances, select the VPC RDS DB instance and use Modify DB Instances to enable backups.

Figure 22: Setting backup retention

Step 13: Set up Replication Between VPC and EC2-Classic DB Instances
Retrieve the log file name and log position number from the information saved in the previous step.

Prompt> cat readreplicastatus.txt | grep Master_Log_File
Master_Log_File: mysql-bin-changelog.001993
Prompt> cat readreplicastatus.txt | grep Exec_Master_Log_Pos
Exec_Master_Log_Pos: 120

Connect to the VPC RDS DB instance via the mysql client through the ClassicLink proxy, and set the EC2-Classic RDS DB instance as the replication master by issuing the mysql.rds_set_external_master and mysql.rds_start_replication commands. Use the private IP address of the EC2-Classic proxy server as well as the log file and position from the output above.

MySQL [(none)]> call mysql.rds_set_external_master('<private-ip-address-of-proxy>', 3306, 'replicationuser', 'classictoVPC123', 'mysql-bin-changelog.001993', 120, 0);
Query OK, 0 rows affected (0.12 sec)
MySQL [(none)]> call mysql.rds_start_replication;
+------------------------+
| Message                |
+------------------------+
| Slave running normally |
+------------------------+
1 row in set (1.03 sec)
Query OK, 0 rows affected (1.03 sec)

Verify the replication status on the VPC read replica using the show slave status command.

MySQL [(none)]> show slave status \G

Step 14: Switch to the VPC RDS DB Instance
After ensuring that the data in the VPC read replica has caught up to the EC2-Classic master, configure your application to stop writing data to the RDS DB instance in EC2-Classic. After the replication lag has caught up, connect to the VPC RDS DB instance via the mysql client and issue the mysql.rds_stop_replication command.

MySQL [(none)]> call mysql.rds_stop_replication;

At this point, the VPC DB instance will stop replicating data from the master. You can now promote the replica by connecting to the VPC RDS DB instance via the mysql client and issuing the mysql.rds_reset_external_master command.

MySQL [(none)]> call mysql.rds_reset_external_master;
+---------------------------+
| Message                   |
+---------------------------+
| Slave is down or disabled |
+---------------------------+
1 row in set (1.04 sec)
+----------------------+
| message              |
+----------------------+
| Slave has been reset |
+----------------------+
1 row in set (3.12 sec)
Query OK, 0 rows affected (3.12 sec)

You can now change the endpoint in your application to write to the VPC RDS DB instance.

Step 15: Take a Snapshot of the VPC RDS DB Instance
From the Amazon RDS console, under Instances, select the VPC RDS DB instance and use Take Snapshot to capture a user snapshot for recovery purposes.

Figure 23: Taking a snapshot of the DB instance in VPC

Step 16: Change the VPC DB Instance to be 'Privately' Accessible (Optional)
After the migration to the new VPC RDS DB instance is complete, you can make it privately (not publicly) accessible. From the Amazon RDS console, under Instances, select the DB instance and click Modify. Under Network & Security, set Publicly Accessible to No.

Figure 24: Setting instance to not be publicly accessible
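The same change can be made with the AWS CLI; the instance identifier is a placeholder.

Prompt> aws rds modify-db-instance --db-instance-identifier vpcrdsinstance --no-publicly-accessible --apply-immediately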
DB Instance into P rivate Subnets (Optional) You can edit the DB Subnet Group s membership for your VPC RDS DB instance to move the VPC RDS DB i nstance to a private subnet In the following example the subnets 1721620/24 and 1721630/24 are private subnets Figure 25: Configuring subnet groups To change the private IP address of the RDS DB instance in the VPC you have to perform a scale up or scale down operation For example you could choose a larger instance size After the IP address changes you can scale again to the original instance size ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 22 Figure 26: Forcing a scale optimization Note: Alternat ively you can open a n AWS support request (https://awsamazoncom/contact us/) and the RDS Operations team will move the migrated VPC RDS instance to the private subnet Alternative Approaches There are other ways to approach migrating your Amazon RDS MySQL databases from EC2 Classic to EC2 VPC We cover two alternatives here One approach is to use AWS Database Migrati on Service and another is to specify a new VPC subnet for a DB instance using the AWS Management Console AWS Database Migration Service (DMS) An alternative approach to migration is to use AWS Database Migration Service (DMS) AWS DMS can migrate your data to and from the most widely used commercial and open source databases The service supports homogenous migrations such as Amazon RDS to Amazon RDS as well as heterogeneous migrations between different database platforms such as Orac le to Amazon Aurora or Microsoft SQL Server to MySQL The source database remains fully operational during the migration minimizing downtime to applications that rely on the database ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 23 Although AWS DMS can provide comprehensive ongoing replication of data it replicates only a limited amount of data definition language (DDL) AWS DMS doesn't propagate items such as indexes users privileges stored procedures and other database changes not directly related to table data In addition AWS DMS does not auto matically leverage RDS snapshots for the initial instance creation which can increase migration time Changing the VPC Subnet for a DB Instance Amazon RDS provides a feature that allows you to easily move an RDS DB instance in EC2 Classic to a VPC You specify a new VPC subnet for an existing DB instance in the Amazon RDS console the Amazon RDS API or the AWS command line tools To specify a new subnet group in the Amazon RDS console under Network & Security Subnet Group expand the drop down list and select the subnet group that you want from the list You can choose to apply this change immediately or during the next scheduled maintenance window However there are a few limitations with this approach:  The DB instance isn’t available during the move The move could take between 5 to 10 minutes  Moving Multi AZ instances to a VPC is n’t currently supported  Moving an instance with read replicas to a VPC isn’t currently supported ArchivedAmazon Web Services – Move Amazon RDS MySQL Databases to Amazon VPC using Amazon EC2 ClassicLink and Read Replicas Page 24 Figure 27: Specifying a new subnet group (in a VPC) for a database instance If these limitations are acceptable for your DB instances we recommend that you test this feature by restoring a snapshot of your database in EC2 Classic and then moving it to your VPC 
If these limitations are not acceptable, then the ClassicLink approach presented in this whitepaper will enable you to minimize downtime during the migration to your VPC.

Conclusion
This paper highlights the key steps for migrating RDS MySQL instances from EC2-Classic to EC2-VPC environments using ClassicLink and RDS read replicas. This approach enables minimal downtime for production environments.

Contributors
The following individuals and organizations contributed to this document:
• Harshal Pimpalkhute, Sr. Product Manager, Amazon EC2 Networking
• Jaime Lichauco, Database Administrator, Amazon RDS
• Korey Knote, Database Administrator, Amazon RDS
• Brian Welcker, Product Manager, Amazon RDS
• Prahlad Rao, Solutions Architect, Amazon Web Services

Further Reading
For additional help, please consult the following sources:
• http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html
• http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html
• http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html
• http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Networking.html
• http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-classiclink.html

Appendix A: Set Up Proxy Server in Classic
Use an Amazon Machine Image (AMI) of your choice to launch an EC2-Classic instance. The following example is based on the AMI Ubuntu Server 14.04 LTS (HVM). Connect to the EC2-Classic instance and install NGINX:

Prompt> sudo apt-get update
Prompt> sudo wget http://nginx.org/download/nginx-1.9.12.tar.gz
Prompt> sudo tar -xvzf nginx-1.9.12.tar.gz
Prompt> cd nginx-1.9.12
Prompt> sudo apt-get install build-essential
Prompt> sudo apt-get install libpcre3 libpcre3-dev
Prompt> sudo apt-get install zlib1g-dev
Prompt> sudo ./configure --with-stream
Prompt> sudo make
Prompt> sudo make install

Edit the NGINX daemon file /etc/init/nginx.conf:

# /etc/init/nginx.conf - Upstart file
description "nginx http daemon"
author "email"
start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [!2345]
env DAEMON=/usr/local/nginx/sbin/nginx
env PID=/usr/local/nginx/logs/nginx.pid
expect fork
respawn
respawn limit 10 5
pre-start script
    $DAEMON -t
    if [ $? -ne 0 ]
    then
        exit $?
    fi
end script
exec $DAEMON

Edit the NGINX configuration file /usr/local/nginx/conf/nginx.conf to configure port forwarding to the RDS instance in EC2-Classic:

# /usr/local/nginx/conf/nginx.conf - NGINX configuration file
worker_processes 1;
events {
    worker_connections 1024;
}
stream {
    server {
        listen 3306;
        proxy_pass classicrdsinstance.123456789012.us-east-1.rds.amazonaws.com:3306;
    }
}

From the command line, start NGINX:

Prompt> sudo initctl reload-configuration
Prompt> sudo initctl list | grep nginx
Prompt> sudo initctl start nginx
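Before configuring replication through the proxy (Step 13), one way to confirm that the NGINX stream forwarding works is to connect through it from an instance in the target VPC; the private IP address and credentials below are placeholders.

Prompt> mysql -h <private-ip-address-of-proxy> -P 3306 -u hhar -p -e "select 1;"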
General
Amazon_Aurora_Migration_Handbook
This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 1 Amazon Aurora Migration Handbook July 2020 This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 2 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 3 Contents Introduction 5 Database Migration Considerations 6 Migration Phases 7 Features and Compatibility 7 Performance 8 Cost 9 Availability and Durability 9 Planning and Testing a Database Migration 11 Homogeneous Migrations 11 Summary of Available Migration Methods 12 Migrating Large Databases to Amazon Aurora 15 Partition and Shard Consolidation on Amazon Aurora 16 MySQL and MySQL compatible Migration Options at a Glance 17 Migrating from Amazon RDS for MySQL 18 Migrating from MySQL Compatible Databases 23 Heterogeneous Migrations 26 Schema Migration 27 Data Migration 28 Example Migration Scenarios 28 SelfManaged Homogeneous Migrations 28 Multi Threaded Migration Using mydumper and myloader 39 Heterogeneous Migrations 45 Testing and Cutover 46 Migration Testing 46 This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 4 Cutover 47 Troubleshooting 49 Troubleshooting MySQL Specific Issues 49 Conclusion 54 Contributors 55 Further Reading 56 Document Revisions 56 This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 5 Abstract This paper outlines the best practices for planning executing and troubleshooting database migrations from MySQL compatible and non MySQL compatible database products to Amazon Aurora It also teaches Amazon Aurora database administrators how to diagnose and troubleshoot common migration and replication erro rs Introduc tion For decades traditional relational databases have been the primary choice for data storage and persistence These database systems continue to rely on monolithic architectures and were not designed to take advantage of cloud infrastructure These monolithic architectures present many challenges particularly in areas such as cost flexibility and 
availability In order to address these challenges AWS redesigned relational database for the cloud infrastructure and introduced Amazon Aurora Amazon Aurora is a MySQL compatible relational database engine that combines the speed availability and security of high end commercial databases with the simplicity and cost effectiveness of open source databases Aurora provides up to five times better performance than MySQL and comparable performance of high end commercial databases Amazon Aurora is priced at one tenth the cost of commercial engines Amazon Aurora is available through the Amazon Relational Database Service (Amazon RDS) platform Like other Amazon RDS databases Aurora is a fully managed database service With the Amazon RDS platform most database management tasks such as hardware provisioning softwa re patching setup configuration monitoring and backup are completely automated Amazon Aurora is built for mission critical workloads and is highly available by default An Aurora database cluster spans multiple Availability Zones (AZs) in a region providing out ofthebox durability and fault tolerance to your data across physical data centers An Availability Zone is composed of one or more highly available data centers operated by Amazon AZs are isolated from each other and are connected through lo w latency links Each segment of your database volume is replicated six times across these AZs This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 6 Aurora cluster volumes automatically grow as the amount of data in your database increases with no performance or availability impact so there is no need for estimating and provisioning large amount of database storage ahead of time An Aurora cluster volume can grow to a maximum size of 64 terabytes (TB) You are only charged for the space that you use in an Aurora cluster volume Aurora's automated backup capability supports point intime recovery of your data enabling you to restore your database to any second during your retention period up to the last five minutes Automated backups are stored in Amazon Simpl e Storage Service (Amazon S3) which is designed for 99999999999% durability Amazon Aurora backups are automatic incremental and continuous and have no impact on database performance For applications that need read only replicas you can create up to 15 Aurora Replicas per Aurora database with very low replica lag These replicas share the same underlying storage as the source instance lowering costs and avoiding the need to perform writes at the replica nodes Amazon Aurora is highly secure and all ows you to encrypt your databases using keys that you create and control through AWS Key Management Service (AWS KMS) On a database instance running with Amazon Aurora encryption data stored at rest in the underlying storage is encrypted as are the auto mated backups snapshots and replicas in the same cluster Amazon Aurora uses SSL (AES 256) to secure data in transit For a complete list of Aurora features see Amazon Aurora Given the rich feature se t and cost effectiveness of Amazon Aurora it is increasingly viewed as the go to database for mission critical applications Database Migration Considerations A database represents a critical component in the architecture of most applications Migrating t he database to a new platform is a significant event in an application’s lifecycle and may 
have an impact on application functionality performance and reliability You should take a few important considerations into account before embarking on your first migration project to Amazon Aurora Migrations are among the most time consuming and critical tasks handled by database administrators Although the task has become easier with the advent of managed This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 7 migration services such as AWS Database Migration Service large scale database migrations still require adequate planning and execution to meet strict compatibility and performance requirements Migration Phases Because database migrations tend to be complex we adv ocate taking a phased iterative approach Figure 1 Migration phases This paper examines the following major contributors to the success of every database migration project: • Factors that justify the migration to Amazon Aurora such as compatibility performance cost and high availability and durability • Best practices for choosing the optimal migration method • Best practices for planning and executing a migration • Migration troubleshooting hints This section discusses imp ortant considerations that apply to most database migration projects For an extended discussion of related topics see the Amazon Web Services (AWS) whitepaper Migrating Your Databases to Amazon Aurora Features and Compatibility Although most applications can be architected to work with many relational database engines you should make sure that your application works with Amazon Aurora Amazon Aurora is designed to be wire compatible with MySQL 5 55657 and 80 Therefore most of the code applications driver s and tools that are used today with MySQL databases can be used with Aurora with little or no change However certain MySQL features like the MyISAM storage engine are not available with Amazon Aurora Also due to the managed nature of the Aurora ser vice SSH This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 8 access to database nodes is restricted which may affect your ability to install third party tools or plugins on the database host For more details see Aurora on Amazon RDS in the Amazon Relational Database Service (Amazon RDS) User Guide Performance Performance is often the key motivation behind database migrations However deploying your database on Amazon Aurora can be beneficial even if your applications don’t have performance issues For example Amazon Aurora scalability features can greatly reduce the amount of engineering effort that is required to prepare your database platform for future traffic growth You should include benchmarks and performance evaluations in every migration project Therefore many successful database migration projects start with performance evaluations of the new database platform Although the RDS Aurora Performance Assessment Benchmarking paper gives you a decent idea of overall database performance these benchmarks do not emulate the data access patterns of your applications For more useful results test the database performance for time sensitive workloads by running your queries (or subset of your queries) on the new platform directly Consider these strategies : • If your current 
database is MySQL migrate to Amazon Aurora with downtime and performance test your database with a test or staging version of your application or by replaying the production workload • If you are on a non MySQL compliant engine you can selectively copy the busiest tables to Amazon Aurora and test your queries for t hose tables This gives you a good starting point Of course testing after complete data migration will provide a full picture of real world performance of your application on the new platform Amazon Aurora delivers comparable performance with commercia l engines and significant improvement over MySQL performance It does this by tightly integrating the database engine with an SSD based virtualized storage layer designed for database workloads This reduces writes to the storage system minimizes lock con tention and This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 9 eliminates delays created by database process threads Our tests with SysBench on r38xlarge instances show that Amazon Aurora delivers over 585000 reads per second and 107000 writes per second five times higher than MySQL running the same benchmark on the same hardware One area where Amazon Aurora significantly improves upon traditional MySQL is highly concurrent workloads In order to maximize your workload’s throughput on Amazon Aurora we recommend architecting your applications to driv e a large number of concurrent queries Cost Amazon Aurora provides consistent high performance together with the security availability and reliability of a commercial database at one tenth the cost Owning and running databases come with associated cost s Before planning a database migration an analysis of the total cost of ownership (TCO ) of the new database platform is imperative Migration to a new database platform should ideally lower the total cost of ownership while providing your applications with similar or better features If you are running an open source database engine (MySQL Postgres) your costs are largely related to hardware server management and database management activities However if you are running a commercial database engine (Oracle SQL Server DB2 etc) a significant portion of your cost is database licensing Amazon Aurora can even be more cost efficient than open source databases because its high scalability helps you reduce the number of database instances that are required to handle the same workload For more details see the Amazon RDS for Aurora Pricing page Availability and Durability High availability and disaster recovery are important considerations for databases Your application may already have very strict recovery time objective (RTO) and recovery point objective (RPO) requirements Amazon Aurora can help you meet or exceed your availability goals by having the following components: This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 10 1 Read replicas : Increase read throughput to support high volume application requests by creating up to 15 database Aurora replicas Amazon Aurora Replicas share the same underlying storage as the source inst ance lowering costs and avoiding the need to perform writes at the replica nodes This frees up more processing power to serve 
read requests and reduces the replica lag time often down to single digit milliseconds Aurora provides a reader endpoint so th e application can connect without having to keep track of replicas as they are added and removed Aurora also supports auto scaling where it automatically adds and removes replicas in response to changes in performance metrics that you specify Aurora sup ports cross region read replicas Cross region replicas provide fast local reads to your users and each region can have an additional 15 Aurora replicas to further scale local reads 2 Global Database : You can choose between Global Database which provides the best replication performance and traditional binlog based replication You can also set up your own binlog replication with external MySQL databases Amazon Aurora Global Database is de signed for globally distributed applications allowing a single Amazon Aurora database to span multiple AWS regions It replicates your data with no impact on database performance enables fast local reads with low latency in each region and provides disa ster recovery from region wide outages 3 Multi AZ: Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single AWS Region regardless of whether the instances in the DB cluster span multiple Availability Zones For more i nformation on Aurora see Managing an Amazon Aurora DB Cluster When data is written to the primary DB instance Aurora synchronously replicates the data across Availability Zones to six storage nodes associated with your cluster volume Doing so provides data redundancy eliminates I/O freezes and minimizes latency spikes during system backups Running a DB instance with high availability can enhance availability during planned system maintenance and help protect your databases against failure and Availability Zone disruption For more information about durability and availability features in Amazon Aurora see Aurora on Amazon RDS in the Amazon RDS User Guide This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 11 Planning and Testing a Database Migration After you determine that Amazon Aurora is the right fit for your application the next step is to decide on a migration approach and create a database migration plan Here are the suggested high level steps: 1 Review the available migration techniques described in this document and choose one that satisfies your requirements 2 Prepare a migration plan in the form of a step bystep checklist A checklist ensures that all migration steps are executed in the correct order and that the migration process flow can be controlled (eg suspended or resumed) without the risk of important steps be ing missed 3 Prepare a shadow checklist with rollback procedures Ideally you should be able to roll the migration back to a known consistent state from any point in the migration checklist 4 Use the checklist to perform a test migration and take note of the time required to complete each step If any missing steps are identified add them to the checklist If any issues are identified during the test migration address them and rerun the test migration 5 Test all rollback procedures If any rollback proced ure has not been tested successfully assume that it will not work 6 After you complete the test migration and become fully comfortable with the migration plan execute the migration Homogeneous 
Migrations

Amazon Aurora was designed as a drop-in replacement for MySQL 5.6. It offers a wide range of options for homogeneous migrations (e.g., migrations from MySQL and MySQL-compatible databases).

Summary of Available Migration Methods

This section lists common migration sources and the migration methods available to them, in order of preference. Detailed descriptions, step-by-step instructions, and tips for advanced migration scenarios are available in subsequent sections. A commonly adopted method is to build an Aurora Read Replica that is asynchronously replicated from the source master, which can be an Amazon RDS or self-managed MySQL database.

Figure 2: Common migration sources and migration methods for Amazon Aurora

Amazon RDS Snapshot Migration

Compatible sources:
• Amazon RDS for MySQL 5.6
• Amazon RDS for MySQL 5.1 and 5.5 (after upgrading to RDS for MySQL 5.6)

Feature highlights:
• Managed, point-and-click service available through the AWS Management Console
• Best migration speed and ease of use of all migration methods
• Can be used with binary log replication for near-zero migration downtime

For details, see Migrating Data from a MySQL DB Instance to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.

Percona XtraBackup

Compatible sources and limitations:
• On-premises or self-managed MySQL 5.6 databases, including databases running on Amazon EC2, can be migrated
• You can't restore into an existing RDS instance using this method
• The total size is limited to 6 TB
• User accounts, functions, and stored procedures are not imported automatically

Feature highlights:
• Managed backup ingestion from Percona XtraBackup files stored in an Amazon Simple Storage Service (Amazon S3) bucket
• High performance
• Can be used with binary log replication for near-zero migration downtime

For details, see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide.

Self-Managed Export/Import

Compatible sources:
• MySQL and MySQL-compatible databases such as MySQL, MariaDB, or Percona Server, including managed servers such as Amazon RDS for MySQL or MariaDB
• Non-MySQL-compatible databases

DMS Migration

Compatible sources:
• MySQL-compatible and non-MySQL-compatible databases

Feature highlights:
• Supports heterogeneous and homogeneous migrations
• Managed, point-and-click data migration service available through the AWS Management Console
• Schemas must be migrated separately
• Supports CDC replication for near-zero migration downtime

For details, see What Is AWS Database Migration Service?
in the AWS DMS User Guide For a heterogeneous migration where you are migrating from a database engine other than MySQL to a MySQL datab ase AWS DMS is almost always the best migration tool to use But for homogeneous migration where you are migrating from a MySQL database to a MySQL database native tools can be more effective Using Any MySQL Compatible Database as a Source for AWS DMS: Before you begin to work with a MySQL database as a source for AWS DMS make sure that you the following prerequisites These prerequisites apply to either self managed or Amazon managed sources You must have an account for AWS DMS that has the Replicati on Admin Role The role needs the following privileges: • Replication Client: This privilege is required for change data capture (CDC) tasks only In other words full loadonly tasks don’t require this privilege • Replication Slave: This privilege is required for change data capture (CDC) tasks only In other words full loadonly tasks don’t require this privilege • Super: This privilege is required only in MySQL versions before 566 DMS highlights for non MySQL compatible sources: • Requires manual schema conversion from source database format into MySQL compatible format • Data migration can be performed manually using a universal data format such as comma separated values (CSV) This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 15 • Change data capture (CDC) replication might be possible with third party tool s for near zero migration downtime Migrating Large Databases to Amazon Aurora Migration of large datasets presents unique challenges in every database migration project Many successful large database migration projects use a combination of the following strategies: • Migration with continuous replication: Large databases typically have extended downtime requirements while moving data from source to target To reduce the downtime you can first load baseline data from source to target and then enable replica tion (using MySQL native tools AWS DMS or third party tools) for changes to catch up • Copy static tables first: If your database relies on large static tables with reference data you may migrate these large tables to the target database before migratin g your active dataset You can leverage AWS DMS to copy tables selectively or export and import these tables manually • Multiphase migration: Migration of large database with thousands of tables can be broken down into multiple phases For example you may move a set of tables with no cross joins queries every weekend until the source database is fully migrated to the target database Note that in order to achieve this you need to make changes in your application to connect to two databases simultaneously while your dataset is on two distinct nodes Although this is not a common migration pattern this is an option nonetheless • Database clean up: Many large databases contain data and tables that remain unused In many cases developers and DBAs keep backup copies of tables in the same database or they just simply forget to drop unused tables Whatever the reason a database migration project p rovides an opportunity to clean up the existing database before the migration If some tables are not being used you might either drop them or archive them to another database You might also delete old data from large tables or archive that data to flat files 
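As a concrete illustration of the replication privileges listed earlier in this section, the following sketch creates a dedicated AWS DMS user on a MySQL-compatible source. The user name and password are placeholders, and a SELECT grant is included because the full-load phase also needs read access to the source tables.

mysql --host=<source_server_address> --user=<admin_user> -p <<'SQL'
CREATE USER 'dms_user'@'%' IDENTIFIED BY '<choose_a_strong_password>';
GRANT SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dms_user'@'%';
SQL

On MySQL versions earlier than 5.6.6, the SUPER privilege mentioned above would also have to be granted to this account.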
This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 16 Partition and Shard Consolidation on Amazon Aurora If you are running multiple shards or functional partitions of your database to achieve high performance you have an opportunity to consolidate these partitions or shards on a single Aurora databa se A single Amazon Aurora instance can scale up to 64 TB supports thousands of tables and supports a significantly higher number of reads and writes than a standard MySQL database Consolidating these partitions on a single Aurora instance not only redu ces the total cost of ownership and simplify database management but it also significantly improves performance of cross partition queries • Functional partitions : Functional partitioning means dedicating different nodes to different tasks For example i n an e commerce application you might have one database node serving product catalog data and another database node capturing and processing orders As a result these partitions usually have distinct nonoverlapping schemas o Consolidation strateg y: Migrate each functional partition as a distinct schema to your target Aurora instance If your source database is MySQL compliant use native MySQL tools to migrate the schema and then use AWS DMS to migrate the data either one time or continuously using replication If your source database is non MySQL complaint use AWS Schema Conversion Tool to migrate the schemas to Aurora and use AWS DMS for one time load or continuous replication • Data shards : If you have the same schema with distinct sets of data acros s multiple nodes you are leveraging database sharding For example a high traffic blogging service may shard user activity and data across multiple database shards while keeping the same table schema This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 17 o Consolidation strategy : Since all shards share the sa me database schema you only need to create the target schema once If you are using a MySQL compliant database use native tools to migrate the database schema to Aurora If you are using a non MySQL database use AWS Schema Conversion Tool to migrate the database schema to Aurora Once the database schema has been migrated it is best to stop writes to the database shards and use native tools or an AWS DMS one time data load to migrate an individual shard to Aurora If writes to the application cannot be stopped for an extended period you might still use AWS DMS with replication but only after proper planning and testing MySQL and MySQL compatible Migration Options at a Glance Source Database Type Migration with Downtime Near zero Downtime Migration Amazon RDS MySQL Option 1: RDS snapshot migration Option 2: Manual migration using native tools* Option 3: Schema migration using native tools and data load using AWS DMS Option 1: Migration using native tools + binlog replication Option 2: RDS snapshot migration + binlog replication Option 3: Schema migration using native tools + AWS DMS for data movement MySQL Amazon EC2 or onpremises Option 1: Schema migration with native tools + AWS DMS for data load Option 1: Schema migration using native tools + A WS DMS to move data Oracle/SQL server Option 1: AWS Schema 
Conversion Tool + AWS DMS (recommended) Option 2: Manual or third party tool for schema conversion + manual or thirdparty data load in target Option 1: AWS Schema Conversion Tool + AWS DMS (recommended) Option 2: Manual or third party tool for schema conversion This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 18 Migrating from Amazon RDS for MySQL If you are migrating from an RDS MySQL 56 database (DB) instance the recommended approach is to use the snapshot migration feature Snapshot m igration is a fully managed point andclick feature that is available through the AWS Management Console You can use it to migrate an RDS MySQL 56 DB instance snapshot into a new Aurora DB cluster It is the fastest and easiest to use of all the migrati on methods described in this document For more information about the snapshot migration feature see Migrating Data to an Amazon Aurora DB Cluster in the Amazon RDS User Guide This section provides ideas for projects that use the snapshot migration feature The liststyle layout in our example instructions can help you prepare your own migration checklist Estimating Space Requirements for Snapshot Migration When you migrate a snapshot of a MySQL DB instance to an Aurora DB cluster Aurora uses an Am azon Elastic Block Store (Amazon EBS) volume to format the data from the snapshot before migrating it There are some cases where additional space is needed to format the data for migration The two features that can potentially cause space issues during m igration are MyISAM tables and using the ROW_FORMAT=COMPRESSED option If you are not using either of these features in your source database then you can skip this section because you should not have space issues During migration MyISAM tables are conve rted to InnoDB and any compressed tables are uncompressed Consequently there must be adequate room for the additional copies of any such tables The size of the migration volume is based on the allocated size of the source MySQL database that the snapsho t was made from Therefore if you have MyISAM or compressed tables that make up a small percentage of the overall database size and there is available space in the original database then migration should succeed without encountering any space issues How ever if the original database would not have enough room to store a copy of converted MyISAM tables as well as another (uncompressed) copy of compressed tables then the migration volume will not be big enough In this situation you would need to modify the source Amazon RDS MySQL This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 19 database to increase the database size allocation to make room for the additional copies of these tables take a new snapshot of the database and then migrate the new snapshot When migrating data into your DB cluster observe the following guidelines and limitations: • Although Amazon Aurora supports up to 64 TB of storage the process of migrating a snapshot into an Aurora DB cluster is limited by the size of the Amazon EBS volume of the snapshot and therefore is limited to a m aximum size of 6 TB Non MyISAM tables in the source database can be up to 6 TB in size However due to additional space requirements 
during conversion make sure that none of the MyISAM and compressed tables being migrated from your MySQL DB instance exc eed 3 TB in size For more information see Migrating Data from an Amazon RDS MySQL DB Instance to an Amazon Aurora MySQL DB Cluster You might want to modify your d atabase schema (convert MyISAM tables to InnoDB and remove ROW_FORMAT=COMPRESSED ) prior to migrating it into Amazon Aurora This can be helpful in the following cases: • You want to speed up the migration process • You are unsure of how much space you need t o provision • You have attempted to migrate your data and the migration has failed due to a lack of provisioned space Make sure that you are not making these changes in your production Amazon RDS MySQL database but rather on a database instance that was restored from your production snapshot For more details on doing this see Reducing the Amount of Space Required to Migrate Data into Amazon Aurora in the Amazon RDS User Guide The naming conventions used in this section are as follows: • Source RDS DB instance refers to the RDS MySQL 56 DB instance that you are migrating from • Target Aurora DB cluster refers to the Aurora DB cluster that you are migrating to This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 20 Migrating with Downtime When migration downtime is acceptable you can use the following high level procedure to migrate an RDS MySQL 56 DB instance to Amazon Aurora: 1 Stop all write activity against the source RDS DB instance Database downtime begins here 2 Take a snapshot of the source RDS DB instance 3 Wait until the snapshot shows as Available in the AWS Management Console 4 Use the AWS Management Console to migrate the snapshot to a new Aurora DB cluster For instructions see Migra ting Data to an Amazon Aurora DB Cluster in the Amazon RDS User Guide 5 Wait until the snapshot migration finishes and the target Aurora DB cluster enters the Available state The time to migrate a snapshot primarily depends on the size of the database You can determine it ahead of the production migration by running a test migration 6 Configure applications to connect to the newly created target Aurora DB cluster instead of the source RDS DB instance 7 Resume write activity against the target Aurora DB cluster Database downtime ends here Migrating with Near Zero Downtime If prolonged migration downtime is not acceptable you can perform a near zero downtime migration through a combination of snapshot migration and binary log replication Perform the high level procedure as follows: 1 On the source RDS DB instance ensure that a utomated backups are enabled 2 Create a Read Replica of the source RDS DB instance 3 After you create the Read Replica manually stop replication and obtain binary log coordinates 4 Take a snapshot of the Read Replica This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 21 5 Use the AWS Management Console to migrat e the Read Replica snapshot to a new Aurora DB cluster 6 Wait until snapshot migration finishes and the target Aurora DB cluster enters the Available state 7 On the target Aurora DB cluster configure binary log replication from the source RDS DB instance using the binary log coordinates that you 
obtained in step 3 8 Wait for the replication to catch up that is for the replication lag to reach zero 9 Begin cut over by stopping all write activity against the source RDS DB instance Application downt ime begins here 10 Verify that there is no outstanding replication lag and then configure applications to connect to the newly created target Aurora DB cluster instead of the source RDS DB instance 11 Complete cut over by resuming write activity Application downtime ends here 12 Terminate replication between the source RDS DB instance and the target Aurora DB cluster For a detailed description of this procedure see Replication Between Aurora and MySQL or Between Aurora and Another Aurora DB Cluster in the Amazon RDS Us er Guide If you don’t want to set up replication manually you can also create an Aurora Read Replica from a source RDS MySQL 56 DB instance by using the RDS Management Console The RDS automation does the following: 1 Creates a snapshot of the source RDS DB instance 2 Migrates the snapshot to a new Aurora DB cluster 3 Establishes binary log replication between the source RDS DB instance and the target Aurora DB cluster After replication is established you can complete the cut over steps as described previously This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 22 Migrating from Amazon RDS for MySQL Engine Versions Other than 56 Direct snapshot migration is only supported for RDS MySQL 56 DB instance snapshots You can migrate RDS MySQL DB instances that are running other engine versions by u sing the following procedures RDS for MySQL 51 and 55 Follow these steps to migrate RDS MySQL 51 or 55 DB instances to Amazon Aurora: 1 Upgrade the RDS MySQL 51 or 55 DB instance to MySQL 56 • You can upgrade RDS MySQL 55 DB instances directly to MySQL 56 • You must upgrade RDS MySQL 51 DB instances to MySQL 55 first and then to MySQL 56 2 After you upgrade the instance to MySQL 56 test your applications against the upgraded database and address any compatibility or performance co ncerns 3 After your application passes the compatibility and performance tests against MySQL 56 migrate the RDS MySQL 56 DB instance to Amazon Aurora Depending on your requirements choose the Migrating with Downtime or Migrating with Near Zero Downtime procedures described earlier For more information about upgrading RDS MySQL engine versions see Upgrading the MySQL DB Engine in the Amazon RDS User Guide RDS for MySQL 57 For migrations from RDS MySQL 57 DB instances the snapshot migration approach is not supported because the database engine version ca n’t be downgraded to MySQL 56 In this case we recommend a manual dump andimport procedure for migrating MySQL compatible databases described later in this whitepaper Such a procedure may be slower than snapshot migration but you can still perform it with near zero downtime using binary log replication This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 23 Migrating from MySQL Compatible Databases Moving to Amazon Aurora is still a relatively simple process if you are migrating from an RDS MariaDB instance an RDS MySQL 57 DB instance or a se lf managed MySQL compatible database such as MySQL MariaDB or Percona Server running 
on Amazon Elastic Compute Cloud (Amazon EC2) or on premises There are many techniques you can use to migrate your MySQL compatible database workload to Amazon Aurora This section describes various migration options to help you choose the most optimal solution for your use case Percona XtraBackup Amazon Aurora supports migration from Percona XtraBackup files that are stored in an Amazon S3 bucket Migrating from binar y backup files can be significantly faster than migrating from logical schema and data dumps using tools like mysqldump Logical imports work by executing SQL commands to re create the schema and data from your source database which involves considerable processing overhead By comparison you can use a more efficient binary ingestion method to ingest Percona XtraBackup files This migration method is compatible with source servers using MySQL versions and 56 Migrating from Percona XtraBackup files invol ves three steps: 1 Use the innobackupex tool to create a backup of the source database 2 Upload backup files to an Amazon S3 bucket 3 Restore backup files through the AWS Management Console For details and step bystep instructions see Migrating data from MySQL by using an Amazon S3 Bucket in the Amazon RDS User Guide SelfManaged Export/Import You can use a variety of export/import tools to migrate your data and schema to Amazon Aurora The tools can be described as “MySQL native” because they are either part of a MySQL project or were designed specifically for MySQL compatible databases This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 24 Examples of native migration tools include the following: 1 MySQL utilities such as mysqldump mysqlimport and mysql command line client 2 Third party utilities such as mydumper and myloader For details see this mydumper project page 3 Builtin MySQL commands such as SELECT INTO OUTFILE and LOAD DATA INFILE Native tools are a great option for power users or database administrators who want to maintain full control over the migration process Self managed migrations involve more steps and are typically slower than RDS snapshot or Percona XtraBackup migrations but they offer the best compatibility and flexibility For an in depth discussion of the best practices for self managed migrations see the AWS whitepaper Best Practices for Migrating MySQ L Databases to Amazon Aurora You can execute a self managed migration with downtime (without replication) or with nearzero downt ime (with binary log replication) SelfManaged Migration with Downtime The high level procedure for migrating to Amazon Aurora from a MySQL compatible database is as follows: 1 Stop all write activity against the source database Application downtime begin s here 2 Perform a schema and data dump from the source database 3 Import the dump into the target Aurora DB cluster 4 Configure applications to connect to the newly created target Aurora DB cluster instead of the source database 5 Resume write activity Appli cation downtime ends here For an in depth discussion of performance best practices for self managed migrations see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration 
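To make steps 2 and 3 of the procedure above more tangible, the following sketch uses the mydumper and myloader utilities mentioned earlier in this section for a multi-threaded dump and import. Exact flags vary between mydumper versions, and the host names, credentials, and paths are placeholders.

mydumper --host=<source_server_address> --user=<source_user> \
         --password=<source_user_password> --database=<schema> \
         --outputdir=/tmp/dump --threads=4 --compress

myloader --host=<target_cluster_endpoint> --user=<target_user> \
         --password=<target_user_password> --directory=/tmp/dump \
         --threads=4 --overwrite-tables

A later section of this handbook covers multi-threaded migration with these tools in more detail.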
Handbook 25 SelfManaged Migration with Near Zero Downtime The following is the high level procedure for near zero downtime migration into Amazon Aurora from a MySQL compatible database: 1 On the source database enable binary logging and ensure that binary log files are retained for at least the amount of time that is required t o complete the remaining migration steps 2 Perform a schema and data export from the source database Make sure that the export metadata contains binary log coordinates that are required to establish replication at a later time 3 Import the dump into the tar get Aurora DB cluster 4 On the target Aurora DB cluster configure binary log replication from the source database using the binary log coordinates that you obtained in step 2 5 Wait for the replication to catch up that is for the replication lag to reach zero 6 Stop all write activity against the source database instance Application downtime begins here 7 Double check that there is no outstanding replication lag Then configure applications to connect to the newly created target Aurora DB cluster inst ead of the source database 8 Resume write activity Application downtime ends here 9 Terminate replication between the source database and the target Aurora DB cluster For an in depth discussion of performance best practices of self managed migrations see the AWS whitepaper Best Practices for Mig rating MySQL Databases to Amazon Aurora AWS Database Migration Service AWS Database Migration Service is a managed database migra tion service that is available through the AWS Management Console It can perform a range of tasks from simple migrations with downtime to near zero downtime migrations using CDC replication This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 26 AWS Database Migration Service may be the preferred option if y our source database can’t be migrated using methods described previously such as the RDS MySQL 56 DB snapshot migration Percona XtraBackup migration or native export/import tools AWS Database Migration Service might also be advantageous if your migrat ion project requires advanced data transformations such as the following : • Remapping schema or table names • Advanced data filtering • Migrating and replicating multiple database servers into a single Aurora DB cluster Compared to the migration methods describe d previously AWS DMS carries certain limitations: • It does not migrate secondary schema objects such as indexes foreign key definitions triggers or stored procedures Such objects must be migrated or created manually prior to data migration • The DMS CDC replication uses plain SQL statements from binlog to apply data changes in the target database Therefore it might be slower and more resource intensive than the native master/slave binary log replication in MySQL For step bystep instructions on how to migrate your database using AWS DMS see the AWS whitepaper Migrating Your Databases to Amazon Aurora Heterogeneous Migrations If you a re migrating a non MySQL compatible database to Amazon Aurora several options can help you complete the project quickly and easily A heterogeneous migration project can be split into two phases: 1 Schema migration to review and convert the source schema objects (eg tables procedures and triggers) into a MySQL compatible representation 2 Data migration to populate the newly created schema 
with data contained in the source database Optionally you can use a CDC replication for near zero downtime migratio n This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 27 Schema Migration You must convert database objects such as tables views functions and stored procedures to a MySQL 56 compatible format before you can use them with Amazon Aurora This section describes two main options for converting schema objects Whichever migration method you choose always make sure that the converted objects are not only compatible with Aurora but also follow MySQL’s best practices for schema design AWS Schema Conversion Tool The AWS Schema Conversion Tool (AWS SCT) can great ly reduce the engineering effort associated with migrations from Oracle Microsoft SQL Server Sybase DB2 Azure SQL Database Terradata Greenplum Vertica Cassandra and PostgreSQL etc AWS SCT can automatically convert the source database schema and a majority of the custom code including views stored procedures and functions to a format compatible with Amazon Aurora Any code that can’t be automatically converted is clearly marked so that it can be processed manually For more information see the AWS Schema Conversion Tool User Guide For step by step instructions on how to convert a non MySQL compatible schema using the AWS Schema Conversion Tool see t he AWS whitepaper Migrating Your Databases to Amazon Aurora Manual Schema Migration If your source database is not in the scope of SCT comp atible databases you can either manually rewrite your database object definitions or use available third party tools to migrate schema to a format compatible with Amazon Aurora Many applications use data access layers that abstract schema design from business application code In such cases you can consider redesigning your schema objects specifically for Amazon Aurora and adapting the data access layer to the new schema This might require a greater upfront engineering effort but it allows the new s chema to incorporate all the best practices for performance and scalability This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 28 Data Migration After the database objects are successfully converted and migrated to Amazon Aurora it’s time to migrate the data itself The task of moving data from a non MySQL compatible database to Amazon Aurora is best done using AWS DMS AWS DMS supports initial data migration as well as CDC replication After the migration task starts AWS DMS manages all the complexities of the process including data type transformations compression and parallel data transfer The CDC functionality automatically replicates any changes that are made to the source database during the migration process For more information see the AWS Database Migration Service User Guide For step bystep instructions on how to migrate data from a non MySQL compatible database into an Amazon Aurora cluster using AWS DMS see the AWS whitepaper Migrating Your Databases to Amazon Aurora Example Migration Scenarios There are several approaches for performing both self managed homogeneo us migration and heterogeneous migrations SelfManaged Homogeneous Migrations This section provides examples of migration scenarios from self 
managed MySQL compatible databases to Amazon Aurora For an in depth discussion of homogeneous migration best pra ctices see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora Note: If you are migrating from an Amazon RDS MySQL DB instance you can use the RDS snapshot migration feature instead of doing a self managed migration See the Migrating from Amazon RDS for MySQL section for more details This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 29 Migrating Using Percona XtraBackup One option for migrating data from MySQL to Amazon Aurora is to use the Percona XtraBackup utility For more information about usin g Percona Xtrabackup utility see Migrating Data from an External MySQL Database in the Amazon RDS User Guide Approach This scenario uses the Percona XtraBackup utility to take a binary backup of the source MySQL database The backup files are then uploaded to an Amazon S3 bucket and restored into a new Amazon Aurora DB cluster When to Use You can adopt this approach for small to large scale migrations when the following conditions are met: • The source database is a MySQL 55 or 56 database • You have administrative system level access to the source database • You are migrating database servers in a 1 to1 fashion: one source MySQL server becomes one new Aurora DB cluster When to Consider Other Options This approach is not currently supported in the following scenarios • Migrating into existing Aurora DB clusters • Migrating multiple source MySQL servers into a single Aurora DB cluster Examples For a step bystep example see Migrating Data from an External MySQL Database in the Amazon RDS User Guide OneStep Migration Using mysqldump Another migration option uses the mysqldump utility to migrate data from MySQL to Amazon Aurora This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 30 Approach This scenario uses the mysqldump utility to export schema and data definitions from the source server and import them into the target Auro ra DB cluster in a single step without creating any intermediate dump files When to Use You can adopt this approach for many small scale migrations when the following conditions are met: • The data set is very small (up to 1 2 GB) • The network connection between source and target databases is fast and stable • Migration performance is not critically important and the cost of re trying the migration is very low • There is no need to do any intermediate schema or data transformations When to Cons ider Other Options This approach might not be an optimal choice if any of the following conditions are true • You are migrating from an RDS MySQL DB instance or a self managed MySQL 55 or 56 database In that case you might get better results with snapsho t migration or Percona XtraBackup respectively For more • details see the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections • It is impossible to establish a network connection from a single client instance to source and target databases due to network architecture or security considerations • The network connection between source and target databases is unstable or very slow • The data set is larger than 10 GB • Migration performance is 
critically important
• An intermediate dump file is required in order to perform schema or data manipulations before you can import the schema/data

Notes

For the sake of simplicity, this scenario assumes the following:

1. Migration commands are executed from a client instance running a Linux operating system.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on-premises) that is configured to allow connections from the client instance.
3. The target Aurora DB cluster already exists and is configured to allow connections from the client instance. If you don't yet have an Aurora DB cluster, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Export from the source database is performed using a privileged, super user MySQL account. For simplicity, this scenario assumes that the user holds all permissions available in MySQL.
5. Import into Amazon Aurora is performed using the Aurora master user account, that is, the account whose name and password were specified during the cluster launch process.

Examples

The following command, when filled with the source and target server and user information, migrates data and all objects in the named schema(s) between the source and target servers:

mysqldump --host=<source_server_address> \
  --user=<source_user> \
  --password=<source_user_password> \
  --databases <schema(s)> \
  --single-transaction \
  --compress \
| mysql --host=<target_cluster_endpoint> \
  --user=<target_user> \
  --password=<target_user_password>

Descriptions of the options and option values for the mysqldump command are as follows:

• <source_server_address>: DNS name or IP address of the source server
• <source_user>: MySQL user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <schema(s)>: One or more schema names
• <target_cluster_endpoint>: Cluster DNS endpoint of the target Aurora cluster
• <target_user>: Aurora master user name
• <target_user_password>: Aurora master user password
• --single-transaction: Enforces a consistent dump from the source database. Can be skipped if the source database is not receiving any write traffic.
• --compress: Enables network data compression.

See the mysqldump documentation for more details. Example:

mysqldump --host=source-mysql.example.com \
  --user=mysql_admin_user \
  --password=mysql_user_password \
  --databases schema1 \
  --single-transaction \
  --compress \
| mysql --host=auroracluster.xxxxx.amazonaws.com \
  --user=aurora_master_user \
  --password=aurora_user_password

Note: This migration approach requires application downtime while the dump and import are in progress. You can avoid application downtime by extending the scenario with MySQL binary log replication. See the Self-Managed Migration with Near-Zero Downtime section for more details.

Flat-File Migration Using Files in CSV Format

This scenario demonstrates a schema and
data migration using flat file dumps that is dumps that do not encapsulate data in SQL statements Many database administrators prefer to use flat files over SQL format files for the following reasons: • Lack of SQL encap sulation results in smaller dump files and reduces processing overhead during import • Flatfile dumps are easier to process using OS level tools; they are also easier to manage (eg split or combine) • Flatfile formats are compatible with a wide range of database engines both SQL and NoSQL Approach The scenario uses a hybrid migration approach: • Use the mysqldump utility to create a schema only dump in SQL format The dump describes the structure of schema objects (eg tables views and functions) but does not contain data • Use SELECT INTO OUTFILE SQL commands to create dataonly dumps in CSV format The dumps are created in a one filepertable fashion and contain table data only (no schema definitions) The import phase can be executed in two ways: • Traditional approach: Transfer all dump files to an Amazon EC2 instance located in the same AWS Region and Availability Zone as the target Aurora DB cluster After transferring the dump files you can import them into Amazon Aurora using the mysql command line client and LOAD DATA LOCAL INFILE SQL commands for SQL format schema dumps and the flat file data dumps respectively This is the approach that is demonstrated later in this section This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 34 • Alternative approach: Transfer the SQL format schema dumps t o an Amazon EC2 client instance and import them using the mysql command line client You can transfer the flat file data dumps to an Amazon S3 bucket and then import them into Amazon Aurora using LOAD DATA FROM S3 SQL commands For more information including an example of loading data from Amazon S3 see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide When to Use You can adopt this approach for most migration projects where performance and flexibility are important: • You can dump small data sets and import them one table at a time You can also run multiple SELECT INTO OUTFILE and LOAD DATA INFILE operations in parallel for best performance • Data that is stored in flat file dumps is not encapsulated in database specific SQL statements Therefore it can be handled and processed easily by the systems participating in the data exchange When to Consider Other Options You might choose not to use this approach if any of the following conditions are true: • You are migrating from an RDS MySQL DB instance or a self managed MySQL 56 database In that case you might get better results with snapshot migration or Percona XtraBackup respectively See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details • The data set is very small and does not require a high performance migration approach • You want the migration process to be as simple as possible and you don’t require any of the performance and flexibility benefits listed earlier Notes To simplify the demons tration this scenario assumes the following: This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 35 1 Migration commands are executed 
When to Use

You can adopt this approach for most migration projects where performance and flexibility are important:

• You can dump small data sets and import them one table at a time. You can also run multiple SELECT ... INTO OUTFILE and LOAD DATA INFILE operations in parallel for best performance.
• Data that is stored in flat-file dumps is not encapsulated in database-specific SQL statements. Therefore, it can be handled and processed easily by the systems participating in the data exchange.

When to Consider Other Options

You might choose not to use this approach if any of the following conditions are true:

• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details.
• The data set is very small and does not require a high-performance migration approach.
• You want the migration process to be as simple as possible and you don't require any of the performance and flexibility benefits listed earlier.

Notes

To simplify the demonstration, this scenario assumes the following:

1. Migration commands are executed from client instances running a Linux operating system:
   o Client instance A is located in the source server's network.
   o Client instance B is located in the same Amazon VPC, Availability Zone, and Subnet as the target Aurora DB cluster.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on premises) configured to allow connections from client instance A.
3. The target Aurora DB cluster already exists and is configured to allow connections from client instance B. If you don't have an Aurora DB cluster yet, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Communication is allowed between both client instances.
5. Export from the source database is performed using a privileged, super-user MySQL account. For simplicity, this scenario assumes that the user holds all permissions available in MySQL.
6. Import into Amazon Aurora is performed using the master user account, that is, the account whose name and password were specified during the cluster launch process.

Note that this migration approach requires application downtime while the dump and import are in progress. You can avoid application downtime by extending the scenario with MySQL binary log replication. See the Self-Managed Migration with Near-Zero Downtime section for more details.

Examples

In this scenario, you migrate a MySQL schema named myschema. The first step of the migration is to create a schema-only dump of all objects.

mysqldump --host=<source_server_address> \
  --user=<source_user> \
  --password=<source_user_password> \
  --databases <schema(s)> \
  --single-transaction \
  --no-data > myschema_dump.sql

Descriptions of the options and option values for the mysqldump command are as follows:

• <source_server_address>: DNS name or IP address of the source server
• <source_user>: MySQL user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <schema(s)>: One or more schema names
• --single-transaction: Enforces a consistent dump from the source database. Can be skipped if the source database is not receiving any write traffic.
• --no-data: Creates a schema-only dump without row data.

For more details, see mysqldump in the MySQL 5.6 Reference Manual.

Example:

admin@clientA:~$ mysqldump --host=11.22.33.44 --user=root \
  --password=pAssw0rd --databases myschema \
  --single-transaction --no-data > myschema_dump_schema_only.sql

After you complete the schema-only dump, you can obtain data dumps for each table. After logging in to the source MySQL server, use the SELECT ... INTO OUTFILE statement to dump each table's data into a separate CSV file. A scripted version of this step is sketched after the example below.

admin@clientA:~$ mysql --host=11.22.33.44 --user=root --password=pAssw0rd

mysql> show tables from myschema;
+--------------------+
| Tables_in_myschema |
+--------------------+
| t1                 |
| t2                 |
| t3                 |
| t4                 |
+--------------------+
4 rows in set (0.00 sec)

mysql> SELECT * INTO OUTFILE '/home/admin/dump/myschema_dump_t1.csv'
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    -> LINES TERMINATED BY '\n'
    -> FROM myschema.t1;
Query OK, 4194304 rows affected (2.35 sec)

(repeat for all remaining tables)
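Repeating the SELECT ... INTO OUTFILE statement by hand can be tedious for schemas with many tables. The following bash sketch automates it; it is not part of the original procedure and assumes the same host, credentials, schema, and dump directory as the example above, that the MySQL account has the FILE privilege, and that the server's secure_file_priv setting permits writing to /home/admin/dump.

#!/bin/bash
# Dump every table in myschema to a separate CSV file (one SELECT ... INTO OUTFILE per table).
SRC="--host=11.22.33.44 --user=root --password=pAssw0rd"

for TABLE in $(mysql $SRC --batch --skip-column-names -e "SHOW TABLES FROM myschema"); do
  mysql $SRC -e "SELECT * INTO OUTFILE '/home/admin/dump/myschema_dump_${TABLE}.csv'
                 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
                 LINES TERMINATED BY '\n'
                 FROM myschema.${TABLE};"
done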
For more information about SELECT ... INTO statement syntax, see SELECT ... INTO Syntax in the MySQL 5.6 Reference Manual.

After you complete all dump operations, the /home/admin/dump directory contains five files: one schema-only dump and four data dumps, one per table.

admin@clientA:~/dump$ ls -sh1
total 685M
4.0K myschema_dump_schema_only.sql
172M myschema_dump_t1.csv
172M myschema_dump_t2.csv
172M myschema_dump_t3.csv
172M myschema_dump_t4.csv

Next, you compress and transfer the files to client instance B, located in the same AWS Region and Availability Zone as the target Aurora DB cluster. You can use any file transfer method available to you (e.g., FTP or Amazon S3). This example uses SCP with SSH private key authentication.

admin@clientA:~/dump$ gzip myschema_dump_*.csv
admin@clientA:~/dump$ scp -i ssh-key.pem myschema_dump_* \
  <clientB_ssh_user>@<clientB_address>:/home/ec2-user/

After transferring all the files, you can decompress them and import the schema and data. Import the schema dump first, because all relevant tables must exist before any data can be inserted into them.

admin@clientB:~/dump$ gunzip myschema_dump_*.csv.gz
admin@clientB:~$ mysql --host=<cluster_endpoint> --user=master \
  --password=pAssw0rd < myschema_dump_schema_only.sql

With the schema objects created, the next step is to connect to the Aurora DB cluster endpoint and import the data files. Note the following:

• The mysql client invocation includes a --local-infile parameter, which is required to enable support for LOAD DATA LOCAL INFILE commands.
• Before importing data from dump files, use a SET command to disable foreign key constraint checks for the duration of the database session. Disabling foreign key checks not only improves import performance, but it also lets you import data files in arbitrary order.

admin@clientB:~$ mysql --local-infile --host=<cluster_endpoint> \
  --user=master --password=pAssw0rd

mysql> SET foreign_key_checks = 0;
Query OK, 0 rows affected (0.00 sec)

mysql> LOAD DATA LOCAL INFILE '/home/ec2-user/myschema_dump_t1.csv'
    -> INTO TABLE myschema.t1
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    -> LINES TERMINATED BY '\n';
Query OK, 4194304 rows affected (1 min 26.6 sec)
Records: 4194304  Deleted: 0  Skipped: 0  Warnings: 0

(repeat for all remaining CSV files; see the parallel import sketch below)

mysql> SET foreign_key_checks = 1;
Query OK, 0 rows affected (0.00 sec)

That's it: you have imported the schema and data dumps into the Aurora DB cluster. You can find more tips and best practices for self-managed migrations in the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.
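The When to Use discussion earlier notes that multiple LOAD DATA INFILE operations can run in parallel. The following bash sketch is one way to do that; it is not part of the original procedure, reuses the placeholder endpoint and credentials from the examples above (replace the placeholders before running), derives each table name from its file name, and should be throttled to a level of parallelism the Aurora writer instance can absorb.

#!/bin/bash
# Load each CSV file into its table using one background mysql session per file.
TARGET="--local-infile --host=<cluster_endpoint> --user=master --password=pAssw0rd"

for FILE in /home/ec2-user/myschema_dump_t*.csv; do
  TABLE=$(basename "$FILE" .csv | sed 's/^myschema_dump_//')
  mysql $TARGET -e "SET foreign_key_checks = 0;
                    LOAD DATA LOCAL INFILE '${FILE}' INTO TABLE myschema.${TABLE}
                    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
                    LINES TERMINATED BY '\n';" &
done
wait   # wait for all parallel imports to finish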
Multi-Threaded Migration Using mydumper and myloader

mydumper and myloader are popular open-source MySQL export/import tools designed to address performance issues associated with the legacy mysqldump program. They operate on SQL-format dumps and offer advanced features such as the following:

• Dumping and loading data using multiple parallel threads
• Creating dump files in a file-per-table fashion
• Creating chunked dumps in a multiple-files-per-table fashion
• Dumping data and metadata into separate files for easier parsing and management
• Configurable transaction size during import
• Ability to schedule dumps at regular intervals

For more details, see the MySQL Data Dumper project page.

Approach

The scenario uses the mydumper and myloader tools to perform a multi-threaded schema and data migration without the need to manually invoke any SQL commands or design custom migration scripts. The migration is performed in two steps:

1. Use the mydumper tool to create a schema and data dump, using multiple parallel threads.
2. Use the myloader tool to process the dump files and import them into an Aurora DB cluster, also in multi-threaded fashion.

Note that mydumper and myloader might not be readily available in the package repository of your Linux/Unix distribution. For your convenience, the scenario also shows how to build the tools from source code.

When to Use

You can adopt this approach in most migration projects:

• The utilities are easy to use and enable database users to perform multi-threaded dumps and imports without the need to develop custom migration scripts.
• Both tools are highly flexible and have reasonable configuration defaults. You can adjust the default configuration to satisfy the requirements of both small- and large-scale migrations.

When to Consider Other Options

You might decide not to use this approach if any of the following conditions are true:

• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.5 or 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details.
• You can't use third-party software because of operating system limitations.
• Your data transformation processes require intermediate dump files in a flat-file format and not an SQL format.

Notes

To simplify the demonstration, this scenario assumes the following:

1. You execute the migration commands from client instances running a Linux operating system:
   a. Client instance A is located in the source server's network.
   b. Client instance B is located in the same Amazon VPC, Availability Zone, and Subnet as the target Aurora cluster.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on premises) configured to allow connections from client instance A.
3. The target Aurora DB cluster already exists and is configured to allow connections from client instance B. If you don't have an Aurora DB cluster yet, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Communication is allowed between both client instances.
5. You perform the export from the source database using a privileged, super-user MySQL account. For simplicity, the example assumes that the user holds all permissions available in MySQL.
6. You perform the import into Amazon Aurora using the master user account, that is, the account whose name and password were specified during the cluster launch process.
7. The Amazon Linux 2016.03.3 operating system is used to demonstrate the configuration and compilation steps for mydumper and myloader.

Note: This migration approach requires application downtime while the dump and import are in progress. You can avoid application downtime by extending the scenario with MySQL binary log replication, as sketched below. See the Self-Managed Migration with Near-Zero Downtime section for more details.
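One possible way to extend this scenario toward near-zero downtime is outlined below. This is a sketch only, not part of the original procedure: it assumes that binary logging is enabled on the source, that a replication user with the REPLICATION SLAVE privilege exists there, that the source binary log coordinates are known (mydumper records them in the metadata file it writes with each dump), and that the mysql.rds_set_external_master, mysql.rds_start_replication, and mysql.rds_stop_replication stored procedures are available on the target Aurora MySQL cluster. The host, user, password, log file name, and position shown are placeholders.

-- Run on the Aurora cluster endpoint after the initial load completes.
CALL mysql.rds_set_external_master (
  'source-mysql.example.com',   -- source host
  3306,                         -- source port
  'repl_user',                  -- replication user on the source
  'repl_password',              -- replication user password
  'mysql-bin.000123',           -- binary log file from the mydumper metadata file
  4567,                         -- binary log position from the mydumper metadata file
  0                             -- 0 = no SSL
);
CALL mysql.rds_start_replication;

-- Monitor lag until the target catches up, then stop replication at cutover:
--   SHOW SLAVE STATUS\G
--   CALL mysql.rds_stop_replication;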
Examples (Preparing Tools)

The first step is to obtain and build the mydumper and myloader tools. See the MySQL Data Dumper project page for up-to-date download links, and ensure that the tools are prepared on both client instances.

The utilities depend on several packages that you should install first.

[ec2-user@clientA ~]$ sudo yum install glib2-devel mysql56 \
  mysql56-devel zlib-devel pcre-devel openssl-devel g++ gcc-c++ cmake

The next steps involve creating a directory to hold the program sources and then fetching and unpacking the source archive.

[ec2-user@clientA ~]$ mkdir mydumper
[ec2-user@clientA ~]$ cd mydumper/
[ec2-user@clientA mydumper]$ wget https://launchpad.net/mydumper/0.9/0.9.1/+download/mydumper-0.9.1.tar.gz
2016-06-29 21:39:03 (153 KB/s) - 'mydumper-0.9.1.tar.gz' saved [44463/44463]
[ec2-user@clientA mydumper]$ tar zxf mydumper-0.9.1.tar.gz
[ec2-user@clientA mydumper]$ cd mydumper-0.9.1

Next, you build the binary executables.

[ec2-user@clientA mydumper-0.9.1]$ cmake .
(...)
[ec2-user@clientA mydumper-0.9.1]$ make
Scanning dependencies of target mydumper
[ 25%] Building C object CMakeFiles/mydumper.dir/mydumper.c.o
[ 50%] Building C object CMakeFiles/mydumper.dir/server_detect.c.o
[ 75%] Building C object CMakeFiles/mydumper.dir/g_unix_signal.c.o
Linking C executable mydumper
[ 75%] Built target mydumper
Scanning dependencies of target myloader
[100%] Building C object CMakeFiles/myloader.dir/myloader.c.o
Linking C executable myloader
[100%] Built target myloader

Optionally, you can move the binaries to a location defined in the operating system $PATH so that they can be executed more conveniently.

[ec2-user@clientA mydumper-0.9.1]$ sudo mv mydumper /usr/local/bin/mydumper
[ec2-user@clientA mydumper-0.9.1]$ sudo mv myloader /usr/local/bin/myloader

As a final step, confirm that both utilities are available in the system.

[ec2-user@clientA ~]$ mydumper -V
mydumper 0.9.1, built against MySQL 5.6.31
[ec2-user@clientA ~]$ myloader -V
myloader 0.9.1, built against MySQL 5.6.31

Examples (Migration)

After completing the preparation steps, you can perform the migration. The mydumper command uses the following basic syntax.

mydumper -h <source_server_address> -u <source_user> \
  -p <source_user_password> -B <source_schema> \
  -t <thread_count> -o <output_directory>

Descriptions of the parameter values are as follows:

• <source_server_address>: DNS name or IP address of the source server
• <source_user>: MySQL user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <source_schema>: Name of the schema to dump
• <thread_count>: Number of parallel threads used to dump the data
• <output_directory>: Name of the directory where dump files should be placed

Note: mydumper is a highly customizable data dumping tool. For a complete list of supported parameters and their default values, use the built-in help: mydumper --help. A sketch of a few commonly adjusted options follows.
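For example, the feature list above mentions chunked dumps and a configurable transaction size during import. The following invocations are a sketch only, using option names as documented for mydumper/myloader 0.9.1; verify them against mydumper --help and myloader --help on your build before relying on them. Here, --rows splits each table into chunks of roughly the given row count, --compress gzips the dump files, and --queries-per-transaction controls how many statements myloader groups into each transaction.

# Chunked, compressed dump: split tables into ~500,000-row chunks and compress each file.
mydumper -h <source_server_address> -u <source_user> -p <source_user_password> \
  -B <source_schema> -t 4 --rows 500000 --compress -o <output_directory>

# Import with a smaller transaction size than the default.
myloader -h <cluster_dns_endpoint> -u master -p <target_user_password> \
  -B <source_schema> -t 4 --queries-per-transaction 500 -d <output_directory>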
The example dump is executed as follows.

[ec2-user@clientA ~]$ mydumper -h 11.22.33.44 -u root \
  -p pAssw0rd -B myschema -t 4 -o myschema_dump/

The operation results in the following files being created in the dump directory.

[ec2-user@clientA ~]$ ls -sh1 myschema_dump/
total 733M
4.0K metadata
4.0K myschema-schema-create.sql
4.0K myschema.t1-schema.sql
184M myschema.t1.sql
4.0K myschema.t2-schema.sql
184M myschema.t2.sql
4.0K myschema.t3-schema.sql
184M myschema.t3.sql
4.0K myschema.t4-schema.sql
184M myschema.t4.sql

The directory contains a collection of metadata files in addition to schema and data dumps. You don't have to manipulate these files directly; it's enough that the directory structure is understood by the myloader tool. Compress the entire directory and transfer it to client instance B.

[ec2-user@clientA ~]$ tar czf myschema_dump.tar.gz myschema_dump
[ec2-user@clientA ~]$ scp -i ssh-key.pem myschema_dump.tar.gz \
  <clientB_ssh_user>@<clientB_address>:/home/ec2-user/

When the transfer is complete, connect to client instance B and verify that the myloader utility is available.

[ec2-user@clientB ~]$ myloader -V
myloader 0.9.1, built against MySQL 5.6.31

Now you can unpack the dump and import it. The syntax used for the myloader command is very similar to what you already used for mydumper. The only difference is the -d (source directory) parameter, which replaces the -o (target directory) parameter.

[ec2-user@clientB ~]$ tar zxf myschema_dump.tar.gz
[ec2-user@clientB ~]$ myloader -h <cluster_dns_endpoint> \
  -u master -p pAssw0rd -B myschema -t 4 -d myschema_dump/

Useful Tips

• The concurrency level (thread count) does not have to be the same for export and import operations. A good rule of thumb is to use one thread per server CPU core for dumps and one thread per two CPU cores for imports; see the sketch below.
• The schema and data dumps produced by mydumper use an SQL format and are compatible with MySQL 5.6. Although you will typically use the mydumper and myloader tools together for best results, technically you can import the dump files using any other MySQL-compatible client tool.
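As a rough illustration of that rule of thumb (an assumption for illustration only; tune the values to your actual workload and instance sizes), the thread counts can be derived from the CPU count reported on each client instance. Replace the placeholders before running, and note that in the scenario the dump and the import run on different instances.

# Derive thread counts from the number of CPU cores on the client instance.
DUMP_THREADS=$(nproc)                        # one dump thread per CPU core
LOAD_THREADS=$(( ($(nproc) + 1) / 2 ))       # one import thread per two CPU cores

mydumper -h 11.22.33.44 -u root -p pAssw0rd -B myschema \
  -t "$DUMP_THREADS" -o myschema_dump/

myloader -h <cluster_dns_endpoint> -u master -p pAssw0rd -B myschema \
  -t "$LOAD_THREADS" -d myschema_dump/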
You can find more tips and best practices for self-managed migrations in the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

Heterogeneous Migrations

For detailed step-by-step instructions on how to migrate schema and data from a non-MySQL-compatible database into an Aurora DB cluster using AWS SCT and AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora. Before running the migration, we suggest that you review Proof of Concept with Aurora and use a data volume that is representative of your production environment as a blueprint.

Testing and Cutover

Once the schema and data have been successfully migrated from the source database to Amazon Aurora, you are ready to perform end-to-end testing of your migration process. The testing approach should be refined after each test migration, and the final migration plan should include a test plan that ensures adequate testing of the migrated database.

Migration Testing

• Basic acceptance tests: These pre-cutover tests should be automatically executed upon completion of the data migration process. Their primary purpose is to verify whether the data migration was successful. Following are some common outputs from these tests:
  o Total number of items processed
  o Total number of items imported
  o Total number of items skipped
  o Total number of warnings
  o Total number of errors
  If any of these totals reported by the tests deviate from the expected values, the migration was not successful, and the issues need to be resolved before moving to the next step in the process or the next round of testing.
• Functional tests: These post-cutover tests exercise the functionality of the application(s) using Aurora for data storage. They include a combination of automated and manual tests. The primary purpose of the functional tests is to identify problems in the application caused by the migration of the data to Aurora.
• Non-functional tests: These post-cutover tests assess the non-functional characteristics of the application, such as performance under varying levels of load.
• User acceptance tests: These post-cutover tests should be executed by the end users of the application once the final data migration and cutover is complete. The purpose of these tests is for the end users to decide if the application is sufficiently usable to meet its primary function in the organization.
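One simple way to implement a basic acceptance check is to compare per-table checksums between the source and the target after the load completes. This is a sketch only, not part of the original test plan: CHECKSUM TABLE scans each table, so run it during a quiet window, and the comparison is only meaningful while both sides are static.

-- Run the same statement on the source server and on the Aurora cluster endpoint,
-- then compare the checksum values table by table.
CHECKSUM TABLE myschema.t1, myschema.t2, myschema.t3, myschema.t4;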
Cutover

Once you have completed the final migration and testing, it is time to point your application to the Amazon Aurora database. This phase of migration is known as cutover. If the planning and testing phases have been executed properly, cutover should not lead to unexpected issues.

Pre-cutover Actions

• Choose a cutover window: Identify a block of time when you can accomplish cutover to the new database with minimum disruption to the business. Normally you would select a low-activity period for the database (typically nights and/or weekends).
• Make sure changes are caught up: If a near-zero downtime migration approach was used to replicate database changes from the source to the target database, make sure that all database changes are caught up and your target database is not significantly lagging behind the source database.
• Prepare scripts to make the application configuration changes: In order to accomplish the cutover, you need to modify database connection details in your application configuration files. Large and complex applications may require updates to connection details in multiple places. Make sure you have the necessary scripts ready to update the connection configuration quickly and reliably.
• Stop the application: Stop the application processes on the source database and put the source database in read-only mode so that no further writes can be made to the source database. If the source database changes aren't fully caught up with the target database, wait for some time while these changes are fully propagated to the target database.
• Execute pre-cutover tests: Run automated pre-cutover tests to make sure that the data migration was successful.

Cutover

• Execute cutover: If pre-cutover checks were completed successfully, you can now point your application to Amazon Aurora. Execute the scripts created in the pre-cutover phase to change the application configuration to point to the new Aurora database.
• Start your application: At this point, you may start your application. If you have the ability to stop users from accessing the application while the application is running, exercise that option until you have executed your post-cutover checks.

Post-cutover Checks

• Execute post-cutover tests: Execute predefined automated or manual test cases to make sure your application works as expected with the new database. It's a good strategy to start testing the read-only functionality of the database first, before executing tests that write to the database.
• Enable user access and closely monitor: If your test cases were executed successfully, you may give users access to the application to complete the migration process. Both the application and the database should be closely monitored at this time.

Troubleshooting

The following sections provide examples of common issues and error messages to help you troubleshoot heterogeneous DMS migrations.

Troubleshooting MySQL-Specific Issues

The following issues are specific to using AWS DMS with MySQL databases.

Topics

• CDC Task Failing for Amazon RDS DB Instance Endpoint Because Binary Logging Is Disabled
• Connections to a Target MySQL Instance Are Disconnected During a Task
• Adding Autocommit to a MySQL-compatible Endpoint
• Disable Foreign Keys on a Target MySQL-compatible Endpoint
• Characters Replaced with Question Mark
• "Bad event" Log Entries
• Change Data Capture with MySQL 5.5
• Increasing Binary Log Retention for Amazon RDS DB Instances
• Log Message: Some changes from the source database had no impact when applied to the target database
• Error: Identifier too long
• Error: Unsupported Character Set Causes Field Data Conversion to Fail
• Error: Codepage 1252 to UTF8 [120112] A field data conversion failed

CDC Task Failing for Amazon RDS DB Instance Endpoint Because Binary Logging Is Disabled

This issue occurs with Amazon RDS DB instances when automated backups are disabled. Enable automatic backups by setting the backup retention period to a non-zero value.

Connections to a Target MySQL Instance Are Disconnected During a Task

If you have a task with LOBs that is getting disconnected from a MySQL target, with the following type of errors in the task log, you might need to adjust some of your task settings.

[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: 08S01 NativeError: 2013
Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.7.16-log]Lost connection
to MySQL server during query [122502] ODBC general error

To solve the issue where a task is being disconnected from a MySQL target, do the following:

• Check that the database variable max_allowed_packet is set large enough to hold your largest LOB.
• Check that the following variables are set to a large timeout value. We suggest you use a value of at least 5 minutes for each of these variables (see the sketch below):
  o net_read_timeout
  o net_write_timeout
  o wait_timeout
  o interactive_timeout
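The following is a minimal sketch for reviewing those settings on the target endpoint; it is not part of the original guidance. On Amazon Aurora and Amazon RDS, these values are changed through the DB cluster or DB parameter group rather than with SET GLOBAL, so treat the statement below as a read-only check.

-- 300 seconds = 5 minutes; compare the reported values against this threshold.
SHOW VARIABLES WHERE Variable_name IN
  ('max_allowed_packet', 'net_read_timeout', 'net_write_timeout',
   'wait_timeout', 'interactive_timeout');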
Adding Autocommit to a MySQL-compatible Endpoint

To add autocommit to a target MySQL-compatible endpoint, use the following procedure:

1. Sign in to the AWS Management Console and select DMS.
2. Select Endpoints.
3. Select the MySQL-compatible target endpoint that you want to add autocommit to.
4. Select Modify.
5. Select Advanced, and then add the following code to the Extra connection attributes text box:

   Initstmt=SET AUTOCOMMIT=1

6. Choose Modify.

Disable Foreign Keys on a Target MySQL-compatible Endpoint

You can disable foreign key checks on MySQL by adding the following to the Extra connection attributes in the Advanced section of the target MySQL, Amazon Aurora with MySQL compatibility, or MariaDB endpoint.

To disable foreign keys on a target MySQL-compatible endpoint, use the following procedure:

1. Sign in to the AWS Management Console and select DMS.
2. Select Endpoints.
3. Select the MySQL, Aurora MySQL, or MariaDB target endpoint on which you want to disable foreign keys.
4. Select Modify.
5. Select Advanced, and then add the following code to the Extra connection attributes text box:

   Initstmt=SET FOREIGN_KEY_CHECKS=0

6. Choose Modify.

Characters Replaced with Question Mark

The most common situation that causes this issue is when the source endpoint characters have been encoded by a character set that AWS DMS doesn't support. For example, AWS DMS engine versions prior to version 3.1.1 don't support the UTF8MB4 character set.

"Bad event" Log Entries

"Bad event" entries in the migration logs usually indicate that an unsupported DDL operation was attempted on the source database endpoint. Unsupported DDL operations cause an event that the replication instance cannot skip, so a bad event is logged. To fix this issue, restart the task from the beginning, which will reload the tables and will start capturing changes at a point after the unsupported DDL operation was issued.

Change Data Capture with MySQL 5.5

AWS DMS change data capture (CDC) for Amazon RDS MySQL-compatible databases requires full-image, row-based binary logging, which is not supported in MySQL version 5.5 or lower. To use AWS DMS CDC, you must upgrade your Amazon RDS DB instance to MySQL version 5.6.

Increasing Binary Log Retention for Amazon RDS DB Instances

AWS DMS requires the retention of binary log files for change data capture. To increase log retention on an Amazon RDS DB instance, use the following procedure. The following example increases the binary log retention to 24 hours.

call mysql.rds_set_configuration('binlog retention hours', 24);
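To confirm that the setting took effect, and that the binary logging prerequisites for CDC mentioned above are in place, you can run checks similar to the following sketch. This is not part of the original text; the stored procedure is available on Amazon RDS and Aurora MySQL-compatible instances.

-- Shows the current value of 'binlog retention hours'.
CALL mysql.rds_show_configuration;

-- CDC requires full-image, row-based binary logging.
SHOW VARIABLES LIKE 'binlog_format';      -- expected: ROW
SHOW VARIABLES LIKE 'binlog_row_image';   -- expected: FULL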
Log Message: Some changes from the source database had no impact when applied to the target database

When AWS DMS updates a MySQL database column's value to its existing value, a message of zero rows affected is returned from MySQL. This behavior is unlike other database engines, such as Oracle and SQL Server, which perform an update of one row even when the replacing value is the same as the current one.

Error: Identifier too long

The following error occurs when an identifier is too long:

[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 1059
Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.6.10]Identifier name '<name>'
is too long [122502] ODBC general error (ar_odbc_stmt.c:4054)

When AWS DMS is set to create the tables and primary keys in the target database, it currently does not use the same names for the primary keys that were used in the source database. Instead, AWS DMS creates the primary key name based on the table's name. When the table name is long, the auto-generated identifier can be longer than the allowed limits for MySQL. To solve this issue, currently, pre-create the tables and primary keys in the target database, and use a task with the task setting Target table preparation mode set to Do nothing or Truncate to populate the target tables.

Error: Unsupported Character Set Causes Field Data Conversion to Fail

The following error occurs when an unsupported character set causes a field data conversion to fail:

[SOURCE_CAPTURE ]E: Column '<column name>' uses an unsupported character set
[120112] A field data conversion failed (mysql_endpoint_capture.c:2154)

This error often occurs because of tables or databases using UTF8MB4 encoding. AWS DMS engine versions prior to 3.1.1 don't support the UTF8MB4 character set. In addition, check your database's parameters related to connections. The following command can be used to see these parameters:

SHOW VARIABLES LIKE '%char%';

Error: Codepage 1252 to UTF8 [120112] A field data conversion failed

The following error can occur during a migration if you have non-codepage-1252 characters in the source MySQL database.

[SOURCE_CAPTURE ]E: Error converting column 'column_xyz' in table
'table_xyz with codepage 1252 to UTF8 [120112] A field data conversion
failed (mysql_endpoint_capture.c:2248)

As a workaround, you can use the CharsetMapping extra connection attribute with your source MySQL endpoint to specify character set mapping. You might need to restart the AWS DMS migration task from the beginning if you add this extra connection attribute.

For example, the following extra connection attributes could be used for a MySQL source endpoint where the source character set is utf8 or latin1. 65001 is the UTF8 code page identifier.

CharsetMapping=utf8,65001
CharsetMapping=latin1,65001
Conclusion

Amazon Aurora is a high-performance, highly available, and enterprise-grade database built for the cloud. Leveraging Amazon Aurora can result in better performance and greater availability than other open-source databases, and lower costs than most commercial-grade databases. This paper proposes strategies for identifying the best method to migrate databases to Amazon Aurora, and details the procedures for planning and executing those migrations. In particular, AWS Database Migration Service (AWS DMS), as well as the AWS Schema Conversion Tool, are the recommended tools for heterogeneous migration scenarios. These powerful tools can greatly reduce the cost and complexity of database migrations.

Multiple factors contribute to a successful database migration:

• The choice of the database product
• A migration approach (e.g., methods, tools) that meets performance and uptime requirements
• Well-defined migration procedures that enable database administrators to prepare, test, and complete all migration steps with confidence
• The ability to identify, diagnose, and deal with issues with little or no interruption to the migration process

We hope that the guidance provided in this document will help you introduce meaningful improvements in all of these areas, and that it will ultimately contribute to creating a better overall experience for your database migrations into Amazon Aurora.

Contributors

Contributors to this document include:

• Bala Mugunthan, Sr. Partner Solution Architect, Amazon Web Services
• Ashar Abbas, Database Specialty Architect
• Sijie Han, SA Manager, Amazon Web Services
• Szymon Komendera, Database Engineer, Amazon Web Services

Further Reading

For additional information, see:

• Aurora on Amazon RDS User Guide
• Migrating Your Databases to Amazon Aurora (AWS whitepaper)
• Best Practices for Migrating MySQL Databases to Amazon Aurora (AWS whitepaper)

Document Revisions

Date: July 2020
Description: Added information about migrating large databases to Amazon Aurora; functional partition and data shard consolidation strategies are discussed in the homogeneous migration sections. Multi-threaded migration using the mydumper and myloader open-source tools is introduced. Basic acceptance, functional, non-functional, and user acceptance testing is explained in the testing phase, and pre-cutover and post-cutover scenarios are further explained.

Date: September 2019
Description: First publication
Strategies_for_Migrating_Oracle_Databases_to_AWS
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoawshtmlStrategies for Migrating Oracle Databases to AWS First Published December 2014 Updated January 27 202 2 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws html iii Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws html iv Contents Introduction 7 Data migration strategies 7 Onestep migration 8 Twostep migration 8 Minimal downtime migration 9 Nearly continuous data replication 9 Tools used for Oracle Database migration 9 Creating a database on Amazon RDS Amazon EC2 or VMware Cloud on AWS 10 Amazon RDS 11 Amazon EC2 11 Data migration methods 12 Migrating data for small Oracle databases 13 Oracle SQL Developer database copy 14 Oracle materialized views 15 Oracle S QL*Loader 17 Oracle Export and Import utilities 21 Migrating data for large Oracle databases 22 Data migration using Oracle Data Pump 23 Data migration using Oracle external tables 34 Data migration using Oracle RMAN 35 Data replication using AWS Database Migration Service 37 Data replication using Oracle GoldenGate 38 Setting up Oracle GoldenGate Hub on Amazon EC2 41 Setting up the source database for use with Oracle GoldenGate 43 Setting up the destination database for use with Oracle GoldenGate 43 Working with the Extract and Replicat utilities of Oracle GoldenGate 44 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws html v Running the Extract process of Oracle GoldenGate 44 Transferring files to AWS 47 AWS DataSync 47 AWS Storage Gateway 47 Amazon RDS integration with S3 48 Tsunami UDP 48 AWS Snow Family 48 Conclusion 49 Contributors 49 Further reading 49 Document versions 50 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws html vi Abstract Amazon Web Services (AWS) provides a comprehensive set of services and tools for deploying enterprise grade solutions in a rapid reliable and cost effective manner Oracle Database is a widely used relational database management system that is deployed in enterprises of all sizes It manage s various forms of data in many phases of business transactions This whitepaper de scribe s the preferred methods 
for migrating an Oracle Database to AWS and helps you choose the method that is best for your business This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 7 Introduction This whitepaper presents best practices and methods fo r migrating Oracle Database from servers that are on premises or in your data center to AWS Data unlike application binaries cannot be recreated or reinstalled so you should carefully plan your data migr ation and base it on proven best practices AWS offers its customers the flexibility of running Oracle Database on Amazon Relational Database Service (Amazon RDS) the managed database service in the cloud as we ll as Amazon Elastic Compute Cloud (Amazon EC2): • Amazon RDS makes it simple to set up operate and scale a relational database in the cloud It provides cost efficient resizable capacity for an open standard relational database and manages common database administration tasks • Amazon EC2 provides scalable computing ca pacity in the cloud Using Amazon EC2 removes the need to invest in hardware up front so you can develop and deploy applications faster You can use Amazon EC2 to launch as many or as few virtual servers as you need configure security and networking and manage storage Running the database on Amazon EC2 is very similar to running the database on your own servers Depending on whether you choose to run your Oracle Database on Amazon EC2 or Amazon RDS the process for data migration can differ For example users don’t have OSlevel access in Amazon RDS instances It ’s important to understand the different possible strategies so you can choose the one that best fits your need s Data migration strategies The migration strategy you choose depends on several factors: • The size of the database • Network connectivity between the source server and AWS • The version and edition of your Oracle Database software • The database options tools and utilities that are available • The amount of time that is available for migration This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 8 • Whether the migration and switchover to AWS will be done in one step or a sequence of steps over time The following sections describe some common migration strategies Onestep migration Onestep migration is a good option for small databases tha t can be shut down for 24 to 72 hours During the shut down period all the data from the source database is extracted and the extracted data is migrated to the destination database in AWS The destination database in AWS is tested and validated for data consistency with the source Once all validations have passed the database is switched over to AWS Twostep migration Twostep migration is a commonly used method because it requires only minimal downtime and can be used for databases of any size: 1 The da ta is extracted from the source database at a point in time (preferably during nonpeak usage) and migrated while the database is still up and running Because there is no downtime at this point the migration window can be sufficiently large After you co mplete the data migration you can validate the data in the destination database for 
consistency with the source and test the destination database on AWS for performance connectivity to the applications and any other criteria as needed 2 Data changed in the source database after the initial data migration is propagated to the destination before switchover This step synchronizes the source and destination databases This should be scheduled for a time when the database can be shut down (usually over a few hours late at night on a weekend) During this process there won’t be any more changes to the source database because it will be unavailable to the applications Normally the amount of data that is changed after the first step is small compar ed to the total size of the database so this step will be quick and requires only minimal downtime After all the changed data is migrated you can validate the data in the destination database perform necessary tests and if all tests are passed switc h over to the database in AWS This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 9 Minimal downtime migration Some business situations require database migration with little to no downtime This requires detailed planning and the necessary data replication tools for proper completion These migration method ologies typically involve two components: an initial bulk extract/load followed by the application of any changes that occurred during the time the bulk step took to run After the changes have applied you should validate the migrated data and conduct an y necessary testing The replication process synchronizes the destination database with the source database and continues to replicate all data changes at the source to the destination Synchronous replication can have an effect on the performance of the source database so if a few minutes of downtime for the database is acceptable then you should set up asynchronous replication instead You can switch over to the database in AWS at any time because the source and destination databases will always be in sync There are a number of tools available to help with minimal downtime migration The AWS Database Migration Service (AWS DMS) supports a range of database engines including Oracle running on premise s in EC 2 or on RDS Oracle GoldenGate is another option for real time data replication There are also third party tools available to do the replication Nearly c ontinuous data replication You can us e nearly continuous data replication if the destination database in AWS is used as a clone for reporting and business intelligence (BI) or for disaster recovery (DR) purposes In this case the process is exactly the same as minimal downtime migration ex cept that there is no switchover and the replication never stops Tools used for Oracle Database migration A number of tools and technologies are available for data migration You can use some of these tools interchangeably or you can use other third party tools or open source tools available in the market This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 10 • AWS DMS helps you move databases to and from AWS easily and securely It supports most commercial and open source databases and 
facilitates both homogeneous and heterogeneous migrations AWS DMS offers change data capture technology to keep databases in sync and minimize downtime during a migration It is a manag ed service with no client install required • Oracle Recovery Manager (RMAN) is a tool available from Oracle for performing and managing Oracle Database backups and rest orations RMAN allows full hot or cold backups plus incremental backups RMAN maintains a catalogue of the backups making the restoration process simple and dependable RMAN can also duplicate or clone a database from a backup or from an active database • Oracle Data Pump Export is a versatile utility for exporting and importing data and metadata from or to Oracle databases You can perform Data Pump export/ import on an entire database selective schemas table spaces or database objects Data Pump export/ import also has powerful data filtering capabilities for selective export or import of data • Oracle GoldenGate is a tool for replicating data between a source and one or more destination databases You can use it to build high availability architectures You can also use it to perform real time data integration transactional change data capture and replication in heterogeneous IT environments • Oracle SQL Developer is a no cost GUI tool available from Oracle for data manipulation development an d management This Java based tool is available for Microsoft Windows Linux or iOS X • Oracle SQL*Loader is a bulk data load utility available from Oracle for loading data from external files into a database SQL*Loader is included as part of the full database client installation Creating a database on Amazon RDS Amazon EC2 or VMware Cloud on AWS To migrate your data to AWS you need a source database (either onpremises or in a data center) and a destination database in AWS Based on your business needs you can choose between using Amazon RDS for Oracle or installing and managing the This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 11 database on your own in Amazon EC2 instance To help you choose the servic e that ’s best for your business see the following sections Amazon RDS Many customers prefer Amazon RDS for Oracle because it frees them to focus on application development Amazon RDS automates time consuming database administration tasks including prov isioning backups software patching monitoring and hardware scaling Amazon RDS simplifies the task of running a database by eliminating the need to plan and provision the infrastructure as well as install configure and maintain the database software Amazon RDS for Oracle makes it easy to use replication to enhance availability and reliability for production workloads By using the Multi Availability Zone (AZ) deployment option you can run mission critical workloads with high availability and built in automated failover from your primary database to a synchronously replicated secondary database As with all AWS services no upfront investments are required and you pay only for the resources you use For more information see Amazon RDS for Oracle To use Amazon RDS log in to your AWS account and start an Amazon RDS Oracle instance from the AWS Management Console A good strategy is to treat this as an interim migration database from which the final database will be created Do not enable the Multi AZ feature 
until the data migration is completely done because replication for Multi AZ will hinder data migration performance Be sure to give the instance enough space to store the import data files Typically this requires you to provision twice as much capacity as the size of the database Amazon EC2 Alternatively you can run an Oracle database directly on Amazon EC2 which gives you full control over se tup of the entire infrastructure and database environment This option provides a familiar approach but also requires you to set up configure manage and tune all the components such as Amazon EC2 instances networking storage volumes scalability and security as needed (based on AWS architecture best practices) For more information see the Advanced Architectures for Oracle Database on Amazon EC 2 whitepaper for guidance about the appropriate architecture to choose and for installation and configuration instructions This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 12 VMware Cloud on AWS VMware Cloud on AWS is the preferred service for AWS for all vSphere based workloads VMware Cloud on AWS brings the VMware software designed data center (SDDC ) software to the AWS Cloud with op timized access to native AWS services If your Oracle workload runs on VMware on premises you can easily migrate the Oracle workloads to the AWS C loud using VMware Cloud on AWS VMware Cloud on AWS has the capability to run Oracle Real Application Clusters (RAC) workloads It allows multi cast protocols and provides shared storage capability across VMs running in VMware Cloud on AWS SDDC VMware provides native migration capabiliti es such as VMware VMotion and VMware HCX to move virtual machines ( VMs) from on premises to the VMware Cloud on AWS Depending on Orac le workload performance patterns service level agreement ( SLA) and the bandwidth availability you can choose to migrate the VM either live or using cold migration methods Data migration methods The remainder of this whitepaper provides details about ea ch method for migrating data from Oracle Database to AWS Before you get to the details you can scan the following table for a quick summary of each method Each method depends upon business recovery point objective (RPO) recovery time objective (RTO) a nd overall availability SLA Migration administrators must evaluate and map these business agreements with the appropriate methods Choose the method depending upon your application SLA RTO RPO tool and license availability Table 1 – Migration methods and tools Data migration method Database size Works for: Recommended for: AWS Database Migration Service Any size Amazon RDS Amazon EC2 Minimal downtime migration Database size limited by internet bandwidth Oracle SQL Developer Database c opy Up to 200 MB Amazon RDS Amazon EC2 Small databases with any number of objects This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 13 Data migration method Database size Works for: Recommended for: Oracle Materialized Views Up to 500 MB Amazon RDS Amazon EC2 Small databases with limited number of objects Oracle SQL*Loader Up to 10 GB Amazon RDS Amazon EC2 Small to medium size 
databases with limited number of objects Oracle Export and Import Oracle Utilities Up to 10 GB Amazon RDS Amazon EC2 Small to medium size databases with large number of objects Oracle Data Pump Up to 5 TB Amazon RDS Amazon EC2 VMware Cloud on AWS Preferred method for any database from 10 GB to 5 TB External tables Up to 1 TB Amazon RDS Amazon EC2 VMware Cloud on AWS Scenarios where this is the standard method in use Oracle RMAN Any size Amazon EC2 VMware Cloud on AWS Databases over 5 TB or if database backup is already in Amazon Simple Storage Service (Amazon S3) Oracle GoldenGate Any size Amazon RDS Amazon EC2 VMware Cloud on AWS Minimal downtime migration Migrating data for small Oracle databases You should base your strategy for data migration on the database size reliability and bandwidth of your network connection to AWS and the amount of time available for migration Many Oracle databases tend to be medium to large in size ranging anywhere from 10 GB to 5 TB with some as large as 20 TB or more However you also might need to migrate smaller databases This is especially true for phased migrations This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Orac le Databases to AWS 14 where the databases are broken up by schema making each migration effort small in size If the source database is under 10 GB and if you have a reli able high speed internet connection you can use one of the following methods for your data migration All the methods discussed in this section work with Amazon RDS Oracle or Oracle Database running on Amazon EC2 Note : The 10 GB size is just a guideline; you can use the same methods for larger databases as well The migration time varies based on the data size and the network throughput However if your database size exceeds 50 GB you should use one of the methods listed in the Migrating data for large Oracle databases section in this whitepaper Oracle SQL Developer database copy If the total size of the data you are migrating is under 200 MB the simplest solution is to use the Oracle SQL Developer Database Copy function Oracle SQL Developer is a no cost GUI tool available from Oracle for data manipulation development and management This easy touse Java based tool is available for Microsoft Windows Linux or Mac OS X With this method data transfer from a source database to a destination database is done directly without any intermediary steps Because SQL Developer can handle a large number of ob jects it can comfortably migrate small databases even if the database contains numerous objects You will need a reliable network connection between the source database and the destination database to use this method Keep in mind that this method does not encrypt data during transfer To migrate a database using the Oracle SQL Developer Database Copy function perform the following steps: 1 Install Oracle SQL Developer 2 Connect to your source and destination databases 3 From the Tools menu of Oracle SQL Developer choose the Database Copy command to copy your data to your Amazon RDS or Amazon EC2 instance 4 Follow the steps in the Database Copy Wizard You can choose the objects you want to migrate and use filters to limit the data This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies 
migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 15 The following screenshot shows the Database Copy Wizard The Database Copy Wizard in the Oracle SQL Developer guides you through your data transfer Oracle materialized views You can use Oracle Database materialized views to migrate data to Oracle databases on AWS for either Amazon RDS or Amazon EC2 This method is well suited for databases under 500 MB Because materialized views are available only in Oracle Database Enterprise Edition this method works only if Oracle Database Enterprise Edition is used for both the source database and the destination database With materialized view replication you can do a onetime migration of data to AWS while keeping th e destination tables continuously in sync with the source The result is a minimal downtime cut over Replication occurs over a database link between the source and destination databases For the initial load you must do a full refresh so that all the dat a in the source tables gets moved to the destination tables This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 16 Important : Because the data is transferred over a database link the source and destination databases must be able to connect to each other over SQL*Net If your network security design doesn’t a llow such a connection then you cannot use this meth od Unlike the preceding method (the Oracle SQL Developer Database Copy function) in which you copy an entire database for this method you must create a materialized view for each table that you want to migrate This gives you the flexibility of selectively moving tables to the database in AWS However it also makes the process more cumbersome if you need to migrate a large number of tables For this reason this method is better suited for migra ting a limited number of tables For best results with this method complete the following steps Assume the source database user ID is SourceUser with password PASS : 1 Create a new user in the Amazon RDS or Amazon EC2 database with sufficient privileges Create user MV_DBLink_AWSUser identified by password 2 Create a database link to the source database CREATE DATABASE LINK SourceDB_lnk CONNECT TO SourceUser IDENTIFIED BY PASS USING '(description=(address=(protocol=tcp) (host= crmdbacmecorpcom) (port=1521 )) (connect_data=(sid=ORCLCRM)))’ 3 Test the database link to make sure you can access the tables in the source database from the database in AWS through the database link Select * from tab@ SourceDB_lnk 4 Log in to the source database and create a materializ ed view log for each table that you want to migrate CREATE MATERIALIZED VIEW LOG ON customers This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 17 5 In the destination database in AWS create materialized views for each table for which you set up a materialized view log in the source database CREATE MATERIALIZED VIEW customer BUILD IMMEDIATE REFRESH FAST AS SELECT * FROM customer@ SourceDB_lnk Oracle SQL*Loader Oracle SQL*Loader is well suited for small to moderate databases under 10 GB that contain a limited number of objects 
Oracle SQL*Loader

Oracle SQL*Loader is well suited for small to moderate databases, under 10 GB, that contain a limited number of objects. Because the process involved in exporting from a source database and loading to a destination database is specific to a schema, you should use this process for one schema at a time. If the database contains multiple schemas, you need to repeat the process for each schema. This method can be a good choice even if the total database size is large, because you can do the import in multiple phases (one schema at a time).

You can use this method for Oracle Database on either Amazon RDS or Amazon EC2, and you can choose between the following two options:

Option 1
1. Extract data from the source database, such as into flat files with column and row delimiters.
2. Create tables in the destination database exactly like the source (use a generated script).
3. Using SQL*Loader, connect to the destination database from the source machine and import the data.

Option 2
1. Extract data from the source database, such as into flat files with column and row delimiters.
2. Compress and encrypt the files.
3. Launch an Amazon EC2 instance and install the full Oracle client on it (for SQL*Loader). For the database on Amazon EC2, this can be the same instance where the destination database is located. For Amazon RDS, this is a temporary instance.
4. Transport the files to the Amazon EC2 instance.
5. Decompress and unencrypt the files in the Amazon EC2 instance.
6. Create tables in the destination database exactly like the source (use a generated script).
7. Using SQL*Loader, connect to the destination database from the temporary Amazon EC2 instance and import the data.

Use the first option if your database size is small, if you have direct SQL*Net access to the destination database in AWS, and if data security is not a concern. Otherwise, use the second option, because you can apply encryption and compression during the transportation phase. Compression substantially reduces the size of the files, making data transportation much faster.

You can use either SQL*Plus or SQL Developer to perform data extraction, which is the first step in both options. For SQL*Plus, use a query in a SQL script file and send the output directly to a text file, as shown in the following example:

set pagesize 0
set head off
set feed off
set line 200
SELECT col1|| '|' ||col2|| '|' ||col3|| '|' ||col4|| '|' ||col5 from SCHEMA.TABLE;
exit;

To create encrypted and compressed output in the second option (see step 2 of the preceding Option 2 procedure), you can directly pipe the output to a zip utility.
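The following is a minimal sketch of such a pipeline. It assumes the extraction query shown above is saved as extract_table.sql; the script name, connection string, and passphrase handling are placeholders that you should adapt to your own standards:

sqlplus -s SourceUser/PASS@SOURCEDB @extract_table.sql | gzip -c > table_data.dat.gz
# Optionally encrypt the compressed file before transporting it to AWS
openssl enc -aes-256-cbc -salt -in table_data.dat.gz -out table_data.dat.gz.enc

On the Amazon EC2 instance, reverse the steps (decrypt, then decompress) before running SQL*Loader.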
You can also extract data by using Oracle SQL Developer:

1. In the Connections pane, select the tables you want to extract data from.
2. From the Tools menu, choose the Database Export command.
3. On the Source/Destination page of the Export Wizard, select the Export DDL option to generate the script for creating the table, which will simplify the entire process.
4. In the Format dropdown on the same page, choose loader.
5. In the Save As box on the same page, choose Separate Files.

Continue to follow the Export Wizard steps to complete the export. The Export Wizard helps you create the data file, control file, and table creation script in one step for multiple tables in a schema, making it easier than using Oracle SQL*Plus to do the same tasks.

If you use Option 1 as specified, you can run Oracle SQL*Loader from the source environment, using the extracted data and control files, to import data into the destination database. To do this, use the following command:

sqlldr userid=userID/password@$service control=control.ctl log=load.log bad=load.bad discard=load.dsc data=load.dat direct=y skip_index_maintenance=true errors=0

If you use Option 2, then you need an Amazon EC2 instance with the full Oracle client installed. Additionally, you need to upload the data files to that Amazon EC2 instance. For the database on Amazon EC2, this could be the same Amazon EC2 instance where the destination database is located. For Amazon RDS, this will be a temporary Amazon EC2 instance. Before you do the upload, we recommend that you compress and encrypt your files. To do this, you can use a combination of TAR and ZIP/GZIP in Linux, or a third-party utility such as WinZip or 7-Zip. After the Amazon EC2 instance is up and running and the files are compressed and encrypted, upload the files to the Amazon EC2 instance using Secure File Transfer Protocol (SFTP).

From the Amazon EC2 instance, connect to the destination database using Oracle SQL*Plus to ensure you can establish the connection. Run the sqlldr command shown in the preceding example for each control file that you have from the extract. You can also create a shell/bat script that runs sqlldr for all control files, one after the other.

Note: Enabling skip_index_maintenance=true significantly increases data-load performance. However, table indexes are not updated, so you will need to rebuild all indexes after the data load is complete.

Oracle Export and Import utilities

Despite being replaced by Oracle Data Pump, the original Oracle Export and Import utilities are useful for migrations of databases smaller than 10 GB where the data lacks binary float and double data types. The import process creates the schema objects, so you do not need to run a script to create them beforehand. This makes the process well suited for databases with a large number of small tables. You can use this method for Amazon RDS for Oracle and Oracle Database on Amazon EC2.

The first step is to export the tables from the source database by using the following command. Substitute the user name and password as appropriate:

exp userID/password@$service FILE=exp_file.dmp LOG=exp_file.log

The export process creates a binary dump file that contains both the schema and data for the specified tables. You can import the schema and data into a destination database. Choose one of the following two options for the next steps:

This version has been archived For the latest version of this document visit:
https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Orac le Databases to AWS 22 Option 1 1 Export data from the source database into a binary dump file using exp 2 Import the data into the destination database by running imp directly from the source server Option 2 1 Export data from the source database into a binary dump file using exp 2 Compress and encrypt the files 3 Launch an Amazon EC2 instance and install the full Oracle client on it (for the emp/imp utility) For the database on Amazon EC2 this could be the same instance where the destination database is located For Amazon RDS this will be a temporary instance 4 Transport the files to the Amazon EC2 instance 5 Decompress and unencrypt the files in the Amazon EC2 instance 6 Import the data into the destination database by running imp If your database size is larger than a gigabyte use Option 2 because it includes compression and encryption This method will also have better import performance For both Option 1 and Option 2 use the following command to import into the destination d atabase: imp userID/password@$service FROMUSER=cust_schema TOUSER=cust_schema FILE=exp_filedmp LOG=imp_filelog There are many optional arguments that can be passed to the exp and imp commands based on your needs For details see the Oracle documentation Migrating data for large Oracle databases For larger databases use one of the methods described in this section rather than one of the methods described in Migrating Data for small Oracle Databases For the purpose of this whitepaper define a large database as any database 10 GB or more This section describes three methods for migrating large databases: This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 23 • Data m igration using Oracle Data Pump – Oracle Data Pump is an excellent tool for migrating large amounts of data and it can be used with databases on either Amazon RDS or Amazon EC2 • Data m igration using Oracle external tables – The process involved in data migration using Oracle external tables is very similar to that of Oracle Data Pump Use this method if you already have processes built around it; otherwise it is better to use the Oracle Data Pump method • Data m igration using Oracle RMAN – Migration using RMAN can be useful if you are already backing up the database to AWS or using the AWS Import/Export service to bring the data to AWS Oracle RMAN can be used only for databases on Amazon EC2 not Amazon RDS Data migration using Oracle Da ta Pump When the size of the data to be migrated exceeds 10 GB Oracle Data Pump is probably the best tool to use for migrating data to AWS This method allows flexible data extraction options a high degree of parallelism and scalable operations which enables highspeed movement of data and metadata from one database to another Oracle Data Pump is introduced with Oracle 10 g as a replacement for the original Import/Export tools It is available only on Oracle Database 10 g Release 1 or later You can use the Oracle Data Pump method for both Amazon RDS for Oracle and Oracle Database running on Amazon EC2 The process involved is similar for both although Amazon RDS for Oracle requires a few additional steps Unlike the original Import/Export utilities the 
Oracle Data Pump import requires the data files to be available in the database server instance to import them into the database You cannot access the file system in the Amazon RDS instance directly so you need to use one or more Amazon EC2 instances (bridge instances) to transfer files from the source to the Amazon RDS instance and then import that into the Amazon RDS database You need these temporary Amazon EC2 bridge instances only for the duration of the import; you can end the instance s soon after the import is done Use Amazon Linux based instances for this purpose You do not need an Oracle Database installation for an Amazon EC2 bridge instance; you only need to install the Oracle Instance Client This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 24 Note : To use this method your Amazo n RDS database must be version 11203 or later The f ollowing is the overall process for data migration using Oracle Data Pump for Oracle Database on Oracle for Amazon EC2 and Amazon RDS Migrating data to a database in Amazon EC2 1 Use Oracle Data Pump to export data from the source database as multiple compressed and encrypted files 2 Use Tsunami UDP to move the files to an Amazon EC2 instance running the destination Oracle database in AWS 3 Import that data into the destination database using the Oracle Data Pump import feature Migrating data to a database in Amazon RDS 1 Use Oracle Data Pump to export data from the source database as multiple files 2 Use Tsunami UDP to move the files to Amazon EC2 bridge instances in AWS 3 Using the provided Perl script that makes use of the UTL_FILE package move the data files to the Amazon RDS instance 4 Import the data into the Amazon RDS database using a PL/SQL script that utilizes the DBMS_DATAPUMP package (an example is provided at the end of this section) Using Oracle Data Pump to export data on the source instance When you export data from a large database you should run multiple threads in parallel and specify a size for each file This speeds up the export and also makes files available quickly for the next step of the process There is no need to wait for the entire database to be exported before moving to the next step As each file completes it can be moved to the next step You can enable compre ssion by using the parameter COMPRESSION=ALL which substantially reduces the size of the extract files You can encrypt files by providing a password or by using an Oracle wallet and specifying the parameter ENCRYPTION= all To learn more about the compr ession and encryption options see the Oracle Data Pump documentation This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 25 The following example shows the export of a 500 GB database running eight threads in parallel with each output file up to a maximum of 20 GB This creates 22 files totaling 175 GB The total file size is significantly smaller than the actual source database size because of the compression option of Oracle Data Pump: expdp demoreinv/demo f ull=y dumpfile=data_pump_exp1:reinvexp1%Udmp data_pump_exp2:reinvexp2%Udmp data_pump_exp3:reinvexp3%Udmp filesize=20G parallel=8 
logfile=data_pump_exp1:reinvexpdp.log compression=all ENCRYPTION=ALL ENCRYPTION_PASSWORD=encryption_password job_name=reInvExp

Spreading the output files across different disks enhances input/output (I/O) performance. In this example, three different disks are used to avoid I/O contention, with the parallel threads writing dump files to each of the three disks.

The most time-consuming part of this entire process is the file transportation to AWS, so optimizing the file transport significantly reduces the time required for the data migration. The following steps show how to optimize the file transport:

1. Compress the dump files during the export.
2. Serialize the file transport in parallel. Serialization here means sending the files one after the other; you don't need to wait for the export to finish before uploading the files to AWS. Uploading many of these files in parallel (if enough bandwidth is available) further improves the performance. We recommend that you upload as many files in parallel as there are disks being used, and use the same number of Amazon EC2 bridge instances to receive those files in AWS.
3. Use Tsunami UDP or a commercial wide area network (WAN) accelerator to upload the data files to the Amazon EC2 instances.

Using Tsunami to upload files to Amazon EC2

The following example shows how to install Tsunami on both the source database server and the Amazon EC2 instance:

yum -y install make
yum -y install automake
yum -y install gcc
yum -y install autoconf
yum -y install cvs
wget http://sourceforge.net/projects/tsunami-udp/files/latest/download?_test=goal
tar -xzf tsunami*.gz
cd tsunami-udp*
./recompile.sh
make install

After you've installed Tsunami, open port 46224 to enable Tsunami communication. On the source database server, start a Tsunami server as shown in the following example. If you do parallel uploads, then you need to start multiple Tsunami servers:

cd /mnt/expdisk1
tsunamid *

On the destination Amazon EC2 instances, start a Tsunami client as shown in the following example. If you do multiple parallel file uploads, then you need to start a Tsunami client on each Amazon EC2 bridge instance. If you do not use parallel file uploads, and if the migration is to an Oracle database on Amazon EC2 (not Amazon RDS), then you can avoid the Amazon EC2 bridge instance. Instead, you can upload the files directly to the Amazon EC2 instance where the database is running. If the destination database is Amazon RDS for Oracle, then the bridge instances are necessary, because a Tsunami server cannot be run on the Amazon RDS server:

cd /mnt/data_files
tsunami
tsunami> connect sourcedbserver
tsunami> get *
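Before starting the import, it's good practice to confirm that the dump files arrived intact. The following is a minimal sketch using standard checksum tools; the file names follow the earlier expdp example and are placeholders for your own dump files:

# On the source database server, record checksums as the dump files are produced
sha256sum reinvexp*.dmp > dumpfiles.sha256

# On the destination (EC2 database instance or bridge instance), verify after the transfer
sha256sum -c dumpfiles.sha256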
From this point forward, the process differs for a database on Amazon EC2 versus a database on Amazon RDS. The following sections show the processes for each service.

Next steps for a database on an Amazon EC2 instance

If you used one or more Amazon EC2 bridge instances in the preceding steps, then bring all the dump files from the Amazon EC2 bridge instances into the Amazon EC2 database instance. The easiest way to do this is to detach the Amazon Elastic Block Store (Amazon EBS) volumes that contain the files from the Amazon EC2 bridge instances and attach them to the Amazon EC2 database instance. Once all the dump files are available in the Amazon EC2 database instance, use the Oracle Data Pump import feature to get the data into the destination Oracle database on Amazon EC2, as shown in the following example:

impdp demoreinv/demo full=y DIRECTORY=DPUMP_DIR dumpfile=reinvexp1%U.dmp,reinvexp2%U.dmp,reinvexp3%U.dmp parallel=8 logfile=DPimp.log ENCRYPTION_PASSWORD=encryption_password job_name=DPImp

This imports all data into the database. Check the log file to make sure everything went well, and validate the data to confirm that all the data was migrated as expected.

Next steps for a database on Amazon RDS

Because Amazon RDS is a managed service, the Amazon RDS instance does not provide access to the file system. However, an Oracle RDS instance has an externally accessible Oracle directory object named DATA_PUMP_DIR. You can copy Oracle Data Pump dump files to this directory by using the Oracle UTL_FILE package. Amazon RDS also supports Amazon S3 integration, so you can transfer files between an S3 bucket and the Amazon RDS instance through the S3 integration of RDS. The S3 integration option is recommended when you want to transfer moderately large files to the RDS instance. Alternatively, you can use a Perl script to move the files from the bridge instances to the DATA_PUMP_DIR of the Amazon RDS instance.
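If you use the S3 integration, the dump files can be pulled from an S3 bucket straight into DATA_PUMP_DIR without a bridge instance. The following is a minimal sketch; it assumes the S3 integration option and its IAM role are already configured on the instance, and the bucket name and prefix are placeholders:

-- Download the dump files from Amazon S3 into the DATA_PUMP_DIR directory
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
         p_bucket_name    => 'my-migration-bucket',
         p_s3_prefix      => 'datapump/',
         p_directory_name => 'DATA_PUMP_DIR') AS task_id
  FROM dual;

-- The returned task ID identifies a log file on the instance that you can check to monitor progress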
Preparing a bridge instance

To prepare a bridge instance, make sure that the Perl DBI and Oracle DBD modules are installed so that Perl can connect to the database. You can use the following commands to verify whether the modules are installed:

$ perl -e 'use DBI; print $DBI::VERSION."\n";'
$ perl -e 'use DBD::Oracle; print $DBD::Oracle::VERSION."\n";'

If the modules are not already installed, use the following process to install them before proceeding further:

1. Download Oracle Database Instant Client from the Oracle website and unzip it into ORACLE_HOME.
2. Set up the environment variables as shown in the following example:

$ export ORACLE_BASE=$HOME/oracle
$ export ORACLE_HOME=$ORACLE_BASE/instantclient_11_2
$ export PATH=$ORACLE_HOME:$PATH
$ export TNS_ADMIN=$HOME/etc
$ export LD_LIBRARY_PATH=$ORACLE_HOME:$LD_LIBRARY_PATH

3. Download and unzip DBD::Oracle as shown in the following example:

$ wget http://www.cpan.org/authors/id/P/PY/PYTHIAN/DBD-Oracle-1.74.tar.gz
$ tar xzf DBD-Oracle-1.74.tar.gz
$ cd DBD-Oracle-1.74

4. Install DBD::Oracle as shown in the following example:

$ mkdir $ORACLE_HOME/log
$ perl Makefile.PL
$ make
$ make install

Transferring files to an Amazon RDS instance

To transfer files to an Amazon RDS instance, you need an Amazon RDS instance with at least twice as much storage as the actual database, because it needs space for both the database and the Oracle Data Pump dump files. After the import is successfully completed, you can delete the dump files so that the space can be reused. It might be a better approach to use an Amazon RDS instance solely for data migration: once the data is fully imported, take a snapshot of the RDS DB instance, create a new Amazon RDS instance from the snapshot, and then decommission the data migration instance. Use a single Availability Zone instance for data migration.

The following example shows a basic Perl script to transfer files to an Amazon RDS instance. Make changes as necessary. Because this script runs in a single thread, it uses only a small portion of the network bandwidth. You can run multiple instances of the script in parallel for a quicker file transfer to the Amazon RDS instance, but make sure to load only one file per process so that there won't be any overwriting and data corruption (a brief sketch of such a parallel run follows the script). If you have used multiple bridge instances, you can run this script from all of the bridge instances in parallel, thereby expediting file transfer into the Amazon RDS instance:

# RDS instance info
my $RDS_PORT = 4080;
my $RDS_HOST = "myrdshost.xxx.us-east-1.rds.amazonaws.com";
my $RDS_LOGIN = "orauser/orapwd";
my $RDS_SID = "myoradb";

my $dirname = "DATA_PUMP_DIR";
my $fname = $ARGV[0];

my $data = "dummy";
my $chunk = 8192;

my $sql_open = "BEGIN perl_global.fh := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;";
my $sql_write = "BEGIN utl_file.put_raw(perl_global.fh, :data, true); END;";
my $sql_close = "BEGIN utl_file.fclose(perl_global.fh); END;";
my $sql_global = "create or replace package perl_global as fh utl_file.file_type; end;";

my $conn = DBI->connect('dbi:Oracle:host='.$RDS_HOST.';sid='.$RDS_SID.';port='.$RDS_PORT, $RDS_LOGIN, '') || die ( $DBI::errstr . "\n");

my $updated = $conn->do($sql_global);

my $stmt = $conn->prepare($sql_open);
$stmt->bind_param_inout(":dirname", \$dirname, 12);
$stmt->bind_param_inout(":fname", \$fname, 12);
$stmt->bind_param_inout(":chunk", \$chunk, 4);
$stmt->execute() || die ( $DBI::errstr . "\n");

open (INF, $fname) || die "\nCan't open $fname for reading: $!\n";
binmode(INF);

$stmt = $conn->prepare($sql_write);
my %attrib = ('ora_type' => 24);
my $val = 1;
while ($val > 0) {
    $val = read(INF, $data, $chunk);
    $stmt->bind_param(":data", $data, \%attrib);
    $stmt->execute() || die ( $DBI::errstr . "\n");
};
die "Problem copying: $!\n" if $!;

close INF || die "Can't close $fname: $!\n";

$stmt = $conn->prepare($sql_close);
$stmt->execute() || die ( $DBI::errstr . "\n");
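As noted above, each invocation of the script loads a single file, so you can parallelize simply by starting one process per dump file. A minimal sketch, assuming the script above is saved as copy_to_rds.pl (the script and file names are placeholders):

# Start one transfer process per dump file, then wait for all of them to finish
for f in reinvexp*.dmp; do
  perl copy_to_rds.pl "$f" &
done
wait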
You can check the list of files in the DATA_PUMP_DIR directory by using the following query:

SELECT * FROM table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'));

Once all files are successfully transferred to the Amazon RDS instance, connect to the Amazon RDS database as a database administrator (DBA) user and submit a job by using a PL/SQL script that uses DBMS_DATAPUMP to import the files into the database, as shown in the following PL/SQL script. Make any changes as necessary:

DECLARE
  h1 NUMBER;
BEGIN
  h1 := dbms_datapump.open(operation => 'IMPORT', job_mode => 'FULL', job_name => 'REINVIMP', version => 'COMPATIBLE');
  dbms_datapump.set_parallel(handle => h1, degree => 8);
  dbms_datapump.add_file(handle => h1, filename => 'IMPORT.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
  dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
  dbms_datapump.add_file(handle => h1, filename => 'reinvexp1%U.dmp', directory => 'DATA_PUMP_DIR', filetype => 1);
  dbms_datapump.add_file(handle => h1, filename => 'reinvexp2%U.dmp', directory => 'DATA_PUMP_DIR', filetype => 1);
  dbms_datapump.add_file(handle => h1, filename => 'reinvexp3%U.dmp', directory => 'DATA_PUMP_DIR', filetype => 1);
  dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
  dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
  dbms_datapump.set_parameter(handle => h1, name => 'REUSE_DATAFILES', value => 0);
  dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
  dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
  dbms_datapump.detach(handle => h1);
END;
/

Once the job is complete, check the Amazon RDS database to make sure all the data has been successfully imported. At this point, you can delete all the dump files using UTL_FILE.FREMOVE to reclaim disk space.

Data migration using Oracle external tables

Oracle external tables are a feature of Oracle Database that allows you to query data in a flat file as if the file were an Oracle table. The process for using Oracle external tables for data migration to AWS is almost exactly the same as the one used for Oracle Data Pump. The Oracle Data Pump based method is better for large database migrations; the external tables method is useful if your current process already uses it and you don't want to switch to the Oracle Data Pump based method. Following are the main steps:

1. Move the external table files to the RDS DATA_PUMP_DIR.
2. Create external tables using the files loaded.
3. Import data from the external tables to the database tables.

Depending on the size of the data file, you can choose to either write the file directly to RDS DATA_PUMP_DIR from an on-premises server, or use an Amazon EC2 bridge instance as in the case of the Data Pump based method. If the file size is large and you choose to use a bridge instance, use compression and encryption on the files as well as Tsunami UDP or a WAN accelerator
exactly as described for the Data Pump based migration To learn more about Oracle external tables see External Tables Concepts in the Oracle documentation This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 35 Data migration using Oracle RMAN If you are planning to migrat e the entire database and your destination database is self managed on Amazon EC2 you can use Oracle RMAN to migrate data Data migration by using Oracle Data Pump is faster and more flexible than data migration using Oracle RMAN; however Oracle RMAN is a better option for the following cases: • You already have an RMAN backup available in Amazon S3 that can be used If you are looking for options to migrate RMAN backups to S3 consider AWS Storage Gateway or AWS DataSync services • The database is very large (greater than 5 TB) and you are planning to use AWS Import/Export • You need to m ake numerous incremental data changes before switching over to the database on AWS Note : This method is for Amazon EC2 and VMware Cloud on AWS You cannot use this method if your destination database is Amazon RDS To migrate data using Oracle RMAN: 1 Create a full backup of the source database using RMAN 2 Encrypt and compress the files 3 Transport files to AWS using the most optimal method 4 Restore the RMAN backup to the destination database 5 Capture incremental backups from the source and apply them to the destination database until switchover can be performed This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 36 Creating a full backup of the source database Using RMAN Create a backup of the source database using RMAN: $ rman target=/ RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON; RMAN> BACKUP DATABASE PLUS ARCHIVELOG If you have a license for the compression and encryption option then you already have the RMAN backups created as encrypted and compressed files Otherwise after the backup files are created encrypt and compress them using tools such as ZIP 7 Zip or GZIP All subsequent actions occur on the server running the destination database Transporting files to AWS Depending on the size of the database and the time available for migration you can choose the most optimal method for file transportation to AWS For small files consider AWS DataSync For moderate to large databases between 100 GB to 5 TB Tsunami UDP is an option as described in Using Tsunami to upload files to EC2 You can achieve the same results using commercial third party WAN acceleration tools For very large databases over 5 TB consider using AWS Storage Gateway or AWS Snow Family devices for offline file transfer Migrating data to Oracle Database on AWS There are two ways to migrate data to a destination database You can create a new database and restore from the RMAN backup or you can create a duplicate database from the RMAN bac kup Creating a duplicate database is easier to perform To create a duplicate database move the transported files to a location accessible to the Oracle Database instance on Amazon EC2 Start the target instance in NOMOUNT mode Now use RMAN to connect to the destination database For this example we are not connecting to the source 
database or the RMAN catalog so use the following command : This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 37 $ rman AUXILIARY / DUPLICATE TARGET DATABASE TO DBONEC2 SPFILE NOFILENAMECHECK; The duration of this process varies based on the size of the database and the type of Amazon EC2 instance For better performance use Amazon Elastic Block Store (Amazon EBS) General Purpose ( SSD) volumes for the RMAN backup files For more information about SSD volume types see Introducing the Amazon EBS General Purpose (SSD) volume type Once the process is finished RMAN produces a completion message and you now have your duplicate instance After verification you can delete the Amazon EBS volumes containing the RMAN backup files We recommend that you take a snapshot of the volumes for later use before deleting them if needed Data replication using AWS Database Migration Service AWS Database Migration Service (AWS DMS) can support a number of migration and replication strategies including a bulk upload at a point in time a minimal downtime migration levera ging Change Data Capture (CDC) or migration of only a subset of the data AWS DMS supports sources and targets in EC2 RDS and on premise s Because no client install is required the following steps are the same for any combination of the above AWS DMS also offers the ability to migrate data between databases as easily as from Oracle to Oracle The following steps show how to migrate data between Oracle databases using AWS DMS and with minimal downtime: 1 Ensure supplemental logging is enabled on the sour ce database 2 Create the target database and ensure database backups and MultiAZ are turned off if the target is on RDS 3 Perform a no data export of the schema using Oracle SQL Developer or the tool of your choice then apply the schema to the target database This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Orac le Databases to AWS 38 4 Disable triggers foreign keys and secondary indexes (optional) on the target 5 Create a DMS replication instance 6 Specify the source and target endpoints 7 Create a “Migrate existing data and replicate ongoing changes” task mapping your source tables to your target tables (The default task includes all tables ) 8 Start the task 9 After the full load portion of the tasks is complete and the transactions reach a steady state enable triggers foreign keys and secondary indexes 10 Turn on backups and MultiAZ 11 Turn off any applications using the original source database 12 Let the final transactions flow through 13 Point any applications at the new database in AWS and start An alternative method is to use Oracle Data Pump for the initial load and DMS to replicate changes from the Oracle System Change Number ( SCN ) point where data dump stopped More details on using AWS DMS can be found in the documentation To improve the performance of DMS replication the schemas and tables can be grouped into multiple DMS tasks DMS tasks support wildcard entries for the names of the schemas and tables Data replication using Oracle GoldenGate Oracle GoldenGate is a tool for real time change data capture and replication Oracle GoldenGate creates 
trail files that contain the most recently changed data from the source database then pushes these files to the destination database You can use Oracle GoldenGate to perform minimal downtime data migration Oracle GoldenGate is a licensed software from Oracle You can also use it for nearly continuous da ta replication You can use Oracle GoldenGate with both Amazon RDS for Oracle and Oracle Database running on Amazon EC2 The following steps show how to migrate data using Oracle GoldenGate: 1 The Oracle GoldenGate Extract process extracts all the existing data for the first load Extract Pump and Replicat process refers to the GoldenGate Integrated capture mode This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 39 2 The Oracle GoldenGate Pump process transports the extracted data to the Replicat process running in Amazon EC2 3 The Replicat process appl ies the data to the destination database 4 After the first load the process runs continually to capture changed data and applies it to the destination database GoldenGate Replicat is a key part of the entire system You can run it from a server in the sou rce environment but AWS recommend s that you run the Replicat process in an Amazon EC2 instance within AWS for better performance This Amazon EC2 instance is referred to as a GoldenGate Hub You can have multiple GoldenGate Hubs especially if you are mig rating data from one source to multiple destinations Oracle GoldenGate replication data flow process This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 40 Reference architecture for EC2: Oracle GoldenGate replication from onpremis es to Oracle Database on Amazon EC2 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 41 Reference architecture for RDS: Oracle GoldenGate replication from onpremises to RDS Oracle Database on AWS Setting up Oracle GoldenGate Hub on Amazon EC2 To create an Oracle GoldenGate Hub on Amazon EC2 create an Amazon EC2 instance with a full client installation of Oracle DBMS 12c version 12203 and Oracle GoldenGate 12314 Additionally apply Oracle patch 13328193 For more information about instal ling GoldenGate see the Oracle GoldenGate documentation This GoldenGate Hub stores and processes all the data from your source database so make sure that there is enough storage available in this instance to store the trail files It is a good practice to choose the largest instance type that your GoldenGate license allows Use appropriate Amazon EBS storage volume types depending on the database change rate and replication performance The following process sets up a GoldenGate Hub on an Amazon EC2 instance This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 42 1 Add the following entry to the tnsnameora file 
to create an alias. For more information about the tnsnames.ora file, see the Oracle GoldenGate documentation.

$ cat /example/config/tnsnames.ora
TEST=
  (DESCRIPTION=
    (ENABLE=BROKEN)
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=TCP)(HOST=ec2-dns)(PORT=8200))
    )
    (CONNECT_DATA=
      (SID=ORCL)
    )
  )

2. Next, create subdirectories in the GoldenGate directory by using the Amazon EC2 command line shell and ggsci, the GoldenGate command interpreter. The subdirectories are created under the gg directory and include directories for parameter, report, and checkpoint files:

prompt$ cd /gg
prompt$ ./ggsci
GGSCI> CREATE SUBDIRS

3. Create a GLOBALS parameter file using the Amazon EC2 command line shell. Parameters that affect all GoldenGate processes are defined in the GLOBALS parameter file. The following example creates the necessary file:

prompt$ cd $GGHOME
prompt$ vi GLOBALS
CheckpointTable oggadm1.oggchkpt

4. Configure the manager. Add the following lines to the GLOBALS file, and then start the manager by using ggsci:

PORT 8199
PurgeOldExtracts ./dirdat/*, UseCheckpoints, MINKEEPDAYS

When you have completed this process, the GoldenGate Hub is ready for use. Next, you set up the source and destination databases.

Setting up the source database for use with Oracle GoldenGate

To replicate data to the destination database in AWS, you need to set up a source database for GoldenGate. Use the following procedure to set up the source database. This process is the same for both Amazon RDS and Oracle Database on Amazon EC2 (an illustrative set of statements follows the list):

1. Set the compatible parameter to the same as your destination database (for Amazon RDS as the destination).
2. Enable supplemental logging and force logging.
3. Verify the database is in archivelog mode.
4. Set the ENABLE_GOLDENGATE_REPLICATION parameter to TRUE.
5. Set the retention period for archived redo logs for the GoldenGate source database.
6. Create a GoldenGate user account on the source database.
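The following sketch shows what steps 2 through 6 can look like on a self-managed source database. The user name and password are examples only, and the grants are intentionally minimal; on Amazon RDS you would use the equivalent rdsadmin procedures instead of ALTER DATABASE statements:

ALTER DATABASE FORCE LOGGING;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SELECT log_mode FROM v$database;                      -- should return ARCHIVELOG
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION = TRUE SCOPE = BOTH;

-- Example GoldenGate user; grant only what your GoldenGate version requires
CREATE USER oggadm1 IDENTIFIED BY XXXXXX;
GRANT CONNECT, RESOURCE TO oggadm1;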
instance After the initial load is completed the Replicat process reads the data from these files and replicates the data to the destination database nearly continuously Running the Extract process of Oracle GoldenGate The Extract process of Oracle GoldenGate retrieves converts and outputs data from the source database to trail files Extract queues transaction details to memory or to temporary disk storage When the transaction is committed to the source database Extract flushes all of the transaction details to a trail file for routing to the GoldenGate Hub on premises or on the Amazon EC2 instance and then to the destination database This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 45 The following process enables and starts the Extract process 1 First configure the Extract parameter file on the GoldenGate Hub The following example shows an Extract parameter file: EXTRACT EABC SETENV (ORACLE_SID=ORCL) SETENV (NLSLANG=AL32UT F8) USERID oggadm1@TEST PASSWORD XXXXXX EXTTRAIL /path/to/goldengate/dirdat/ab IGNOREREPLICATES GETAPPLOPS TRANLOGOPTIONS EXCLUDEUSER OGGADM1 TABLE EXAMPLETABLE; 2 On the GoldenGate Hub launch the GoldenGate command line interface (ggsci ) Log in to the source database The following example shows the format for logging in: dblogin userid <user>@<db tnsname> 3 Next add a checkpoint table for the database: add checkpointtable Add transdata to turn on supplemental logging for the database table: add trandata <user><table> • Alternatively you can add transdata to turn on supplemental logging for all tables in the database: add trandata <user>* 4 Using the ggsci command line use the following commands to enable the Extract process: add extract <extract name> tranlog INTEGRATED tranlog begin now This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Orac le Databases to AWS 46 add exttrail <pathtotrailfromthe paramfile> extract <extractname fromparamfile> MEGABYTES Xm 5 Register the Extract process with the database so that the archive logs are not deleted This lets you recover old uncommitted transactions if necessary To register the Extract process with the database use the following command: register EXTRACT <extract process name> DATABASE 6 To start the Extract process use the following command: start <extract process name> Running the Replicat process of Oracle GoldenGate The Replicat process of Oracle GoldenGate is used to push transaction information in the trail files to the destination database The following process enables and starts the Replicat pro cess 1 First configure the Replicat parameter file on the GoldenGate Hub (on premises or on an Amazon EC2 instance) The following listing shows an example Replicat parameter file: REPLICAT RABC SETENV (ORACLE_SID=ORCL) SETENV (NLSLANG=AL32UTF8) USERID oggadm1@TARGET password XXXXXX ASSUMETARGETDEFS MAP EXAMPLETABLE TARGET EXAMPLETABLE; 2 Launch the Oracle GoldenGate command line interface ( ggsci ) Log in to the destination database The following example shows the format for logging in: dblogin userid <user>@<db tnsname> This version has been archived For the latest version of this document visit: 
https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 47 3 Using the ggsci command line add a checkpoint table Note that user indicates the Oracle GoldenGate user account not the owner of the destination table schema The following example creates a checkpoint table named gg_checkpoint : add checkpointtable <user>gg_checkpoint 4 To enable the Replicat process use the following command: add replicat <replicat name> EXTTRAIL <extract trail file> CHECKPOINTTABLE <user>gg_checkpoint 5 To start the Replicat process use the following command: start <replicat name> Transferring files to AWS Migrating databases to AWS require s the transfer of files to AWS There are multiple methods of transferring files to AWS This section describe s the methods you can adopt during the migrat ion process AWS DataSync AWS DataSync is an online data transfer service that can accelerate moving data between an onpremises storage system and AWS storage services such as S3 EFS or FSx for Windows File Server AWS DataSync agent connects to the on premises storage and copies data and metadata securely to AWS AWS DataSync is the recommended option when you have large volume of small files 100 MB or less AWS Storage Gateway AWS Storage Gateway is a service connecting an on premises software applianc e with cloud based storage to provide seamless and secure integration between an organization’s on premises IT environment and the AWS storage infrastructure The service allows you to securely store data in the AWS Cloud for scalable and cost effective st orage AWS Storage Gateway supports open standard storage protocols that work with your existing applications It provides low latency performance by maintaining This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 48 frequently accessed data on premises while securely storing all of your data encrypted in Amazon S3 or Amazon S3 Glacier AWS Storage Gateway works with moderate or large file sizes AWS Storage Gateway S3 File Gateway interface provides a Network File System/Server Messag e Block (NFS/SMB ) file share in your on premises environment They run a local VM in your on premises data center Files can be copied at the on premises location to this local file share These files are copied to the designated S3 bucket in AWS If your workload uses Windows OS you can use Amazon FSx File Gateway to copy files fr om on premises via SMB clients to the Amazon FSx for Windows File Server Amazon RDS integration with S3 You can use S3 integration to transfer files between an Amazon S3 bucket and an Amazon RDS instance The Amazon RDS instance accesses S3 bucket via a defined IAM role so you can have granular bucket or object level policies for the Amazon RDS instance S3 integration is useful when you have to use Oracle utilities like utl_file or datapump Amazon RDS Oracle rdsadmin package supports both upload and download from S3 buckets Tsunami UDP Tsunami UDP is an open source file transfer protocol that uses TCP control and UDP data for transfer over long dista nce networks at a very fast rate When you use UDP for transfer you gain more throughput than is possible with TCP over the same networks You can download Tsunami UDP from the Tsunami UDP Prot 
ocol page at SourceForgenet1 For moderate to large databases between 100 GB to 5 TB Tsunami UDP is an option as described in Using Tsunami to Upload Files to EC2 You can achieve the same results using commercial third party WAN acceleration tools For very large databases over 5 TB using AWS Snow Family devices might be a better option For smaller databases you can also use the Amazon S3 multipart upload capability to keep it simple and efficient AWS Snow Family AWS Snow Family offers a number of physical devices and capacity points transport up to exabytes of data into and out of AWS Snow Family devices are owned and managed by AWS and integrate with AWS security monitoring storage management and This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 49 computing capabilities For example AWS Snowball Edge has 80 TB of us able capacity and can be mounted as an NFS mount point in the onpremises location For smaller capacity AWS Snowcone offers 8 TB of storage and has the capability to run the AWS DataSync agent Conclusion This whitepaper described the preferred methods for migrating Oracle Database to AWS for both Amazon EC2 and Amazon RDS Depending on your business needs and your migration strategy you will probably use a combination of methods to migrate your database For best performance during migration it is critical to choose the appropriate level of resources on AWS especially for Amazon EC2 instances and Amazon EBS General Purpose (SSD) volume types Contributors Contributors to this document include : • Jayaraman Vellore Sampathkumar AWS Solution Architect – Database Amazon Web Services • Praveen Katari AWS Partner Solution Architect Amazon Web Services Further reading For additional information on data migration with AWS services consult the following resources: Oracle Database on AWS: • Advanced Architectures for Oracle Database on Amazon EC2 • Choosing the Operating System for Oracle Workloads on Amazon EC2 • Determining the IOPS Needs for Oracle Database on AWS • Best Practic es for Running Oracle Database on AWS • AWS Case Study: Amazoncom Oracle DB Backup to Amazon S3 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ strategiesmigratingoracledbtoaws/strategies migratingoracledbtoaws htmlAmazon Web Services Strategies for Migrating Oracle Databases to AWS 50 Oracle on AWS • Oracle and Amazon Web Services • Amazon RDS for Oracle AWS Database Migration Service ( AWS DMS) • AWS Database Mig ration Service Oracle licensing on AWS • Licensing Oracle Software in the Cloud Computing Environment AWS service details • Cloud Products • AWS Documentation Index • AWS Whitepapers & Guides AWS pricing information • AWS Pricing • AWS Pricing Calculator VMware Cloud on AWS • VMware Cloud on AWS Document version s Date Description January 27 2022 Update to text on page 30 for clarity October 8 2021 General updates and inclusion of AWS Snowcone and AWS DataSync services for migration August 2018 General updates December 2014 First publication
General
Use_AWS_WAF_to_Mitigate_OWASPs_Top_10_Web_Application_Vulnerabilities
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Use AWS WAF to Mitigate OWASP ’s Top 10 Web Application Vulnerabilities July 2017 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers © 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own inde pendent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations con tractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreem ent between AWS and its customers This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 Web Application Vulnerability Mitigation 2 A1 – Injection 3 A2 – Broken Authentication and Session Management 5 A3 – Cross Site Scripting (XSS) 7 A4 – Broken Access Control 9 A5 – Security Misconfiguration 12 A6 – Sensitive Data Exposure 15 A7 – Insufficient Attack Protection 16 A8 – Cross Site Request Forgery (CSRF) 19 A9 – Using Components with Known Vulnerabilities 21 A10 – Underprotected APIs 23 Old Top 2013 A10 – Unvalidated Redirects and Forwards 24 Companion CloudFormation Template 26 Conclusion 29 Contributors 30 Further Reading 30 Document Rev isions 31 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract AWS WAF is a web application firewall that helps you protect your websites and web applications against various attack vectors at the HTTP protocol level This paper outlines how you can use the service to mitigate the application vulnerabilities that are defined in the Open Web Application S ecurity Project (OWASP) Top 10 list of most common categories of application security flaws It’s targeted at anyone who ’s tasked with protecting websites or applications and maintain ing their security posture and availability This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 1 Introduction The Open Web Application Security Project (OWASP) is an online community that creates freely available articles methodologies documentation tools and technologies in the field of web application secu rity1 They publish a ranking of the 10 most critical web application security flaws which are known as the OWASP Top 10 2 While the current version was published in 2013 a new 2017 Release Candidate version is currently available for public review The OWASP Top 10 represents a broad consensus of the most critical web application security flaws It’s a widely accepted metho dology for evaluat ing web application security and build mitigation strategies 
for websites and web based applications It outlines the top 10 areas where web applications are susceptible to attacks and where com mon vulnerabilities are found in such workl oads For any project aimed at enhancing the security profile of websites and web based applications it’s a great idea to understand the OWASP Top 10 and how it relate s to your own workloads This will help you implement effective mitigation strategies AWS WAF is a web application firewall (WAF) you can use to help protect your web applications from common web exploits that can affect application availability compromise security or consume excessive resources3 With AWS WAF you can allow or block requests to your web applications by defining customizable web security rules Also y ou can use AWS WAF to create rules to block common attack patterns as well as specific attack patterns targeted at your application AWS WAF works with Amazon CloudFront 4 our global content delivery network (CDN) service and the Application Load Balancer option for Elastic Load Balancing 5 By u sing these together you can analyze incoming HTTP requests apply a set of rules and take actions based on the matching of those rules AWS WAF can help you mitigate the OWASP Top 10 and othe r web application security vulnerabilities because attempts to exploit them often have common This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 2 detectable patterns in the HTTP requests You can write rules to match the patterns and block those requests from reaching your workloads However it ’s importan t to understand that using any web application firewall does n’t fix the underlying flaws in your web application It just provides an additional layer of defense which reduc es the risk of them being exploited This is especially useful in a modern develop ment environment where software evolves quickly Web Application Vulnerability Mitigation In April 2017 OWASP released the new iteration of the Top 10 for public comment The categories listed in the new proposed Top 10 are many of the same application fl aw categories from the 2013 Top 10 and past versions: A1 Injection A2 Broken Authentication and Session Management A3 Cross Site Scripting (XSS) A4 Broken Access Control (NEW) A5 Security Misconfiguration A6 Sensitive Data Exposure A7 Insufficient Attack Protection (NEW) A8 Cross Site Request Forgery (CSRF) A9 Using Components with Known Vulnerabilities A10 Underprotected APIs (NEW) The new A4 category consolidates the categories Insecure Direct Object References and Missing Function Level Access Controls from the 2013 Top 10 The previous A10 category Unvalidated Redirects and Forwards has been replaced with a new category that focus es on Application Programming Interface (API) security In this paper we discuss both old and new categories You can deploy AWS WAF to effectively mitigate a representative set of attack vectors in many of the categories above It can also be effective in other categories However the effectiveness depends on the specific workload that’s protected and the ability to derive recognizable HTTP request patterns Given that the attacks and exploits evolve constantly it ’s highly unlikely that any one web application firewall can mitigate all possible scenarios of an attack that target s flaws in any of these categori es This paper has been archived For the 
This paper describes recommendations for each category that you can implement easily to get started in mitigating application vulnerabilities. At the end of the paper, you can download an example AWS CloudFormation template that implements some of the generic mitigations discussed here. However, be aware that the applicability of these rules to your particular web application can vary.

A1 – Injection
Injection flaws occur when an application sends untrusted data to an interpreter.6 Often, the interpreter has its own domain-specific language. By using that language and inserting unsanitized data into requests to the interpreter, an attacker can alter the intent of the requests and cause unexpected actions.

Perhaps the most well-known and widespread injection flaws are SQL injection flaws. These occur when input isn't properly sanitized and escaped, and the values are inserted into SQL statements directly. If the values themselves contain SQL syntax statements, the database query engine executes them as such. This triggers actions that weren't originally intended, with potentially dangerous consequences.

(Credit: XKCD, "Exploits of a Mom," published by permission.)

Using AWS WAF to Mitigate
SQL injection attacks are relatively easy to detect in common scenarios. They're usually detected by identifying enough SQL reserved words in the HTTP request components to signal a potentially valid SQL query. However, more complex and dangerous variants can spread the malicious query (and associated keywords) over multiple input parameters or request components, based on internal knowledge of how the application composes them in the backend. These can be more difficult to mitigate using a WAF alone; you might need to address them at the application level.

AWS WAF has built-in capabilities to match and mitigate SQL injection attacks. You can use a SQL injection match condition to deploy rules to mitigate such attacks.7 The following table provides some common condition configurations:

HTTP Request Component to Match | Relevant Input Transformations to Apply | Justification
QUERY_STRING | URL_DECODE, HTML_ENTITY_DECODE | The most common component to match. Query string parameters are frequently used in database lookups.
URI | URL_DECODE, HTML_ENTITY_DECODE | If your application uses friendly, dirified, or clean URLs, then parameters might appear as part of the URL path segment, not the query string (they are later rewritten server side). For example: https://example.com/products/<product_id>/reviews/
BODY | URL_DECODE, HTML_ENTITY_DECODE | A common component to match if your application accepts form input. AWS WAF only evaluates the first 8 KB of the body content.
HEADER: Cookie | URL_DECODE, HTML_ENTITY_DECODE | A less common component to match. But if your application uses cookie-based parameters in database lookups, consider matching on this component as well.
HEADER: Authorization | URL_DECODE, HTML_ENTITY_DECODE | A less common component to match. But if your application uses the value of this header for database validation, consider matching on this component as well.
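To make the table above concrete, the following is a minimal sketch of how such a condition could be created programmatically, shown here with Python and boto3 against the classic (global, CloudFront-scoped) AWS WAF API. The condition and rule names are illustrative assumptions, not values defined in this paper, and classic AWS WAF accepts one text transformation per filter, so a separate tuple is added per transformation.

import boto3

waf = boto3.client("waf")  # use the 'waf-regional' client for Application Load Balancers

def new_token():
    # Every mutating AWS WAF call requires a fresh change token.
    return waf.get_change_token()["ChangeToken"]

# 1. Create the SQL injection match condition (name is a placeholder).
sqli_set = waf.create_sql_injection_match_set(
    Name="sqli-common-components", ChangeToken=new_token()
)["SqlInjectionMatchSet"]

# 2. Add one filter per component/transformation pair from the table above.
fields = [
    {"Type": "QUERY_STRING"},
    {"Type": "URI"},
    {"Type": "BODY"},
    {"Type": "HEADER", "Data": "cookie"},
    {"Type": "HEADER", "Data": "authorization"},
]
waf.update_sql_injection_match_set(
    SqlInjectionMatchSetId=sqli_set["SqlInjectionMatchSetId"],
    ChangeToken=new_token(),
    Updates=[
        {"Action": "INSERT",
         "SqlInjectionMatchTuple": {"FieldToMatch": field, "TextTransformation": transform}}
        for field in fields
        for transform in ("URL_DECODE", "HTML_ENTITY_DECODE")
    ],
)

# 3. Wrap the condition in a rule that a web ACL can activate with a BLOCK action.
rule = waf.create_rule(Name="sqli-block-rule", MetricName="SqliBlockRule",
                       ChangeToken=new_token())["Rule"]
waf.update_rule(
    RuleId=rule["RuleId"], ChangeToken=new_token(),
    Updates=[{"Action": "INSERT",
              "Predicate": {"Negated": False, "Type": "SqlInjectionMatch",
                            "DataId": sqli_set["SqlInjectionMatchSetId"]}}],
)

The exception rule described below under Other Considerations could be expressed by adding a second, negated ByteMatch predicate for the excluded URIs to the same rule.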
Additionally, consider any other components of custom request headers that your application uses as parameters for database lookups. You might want to match these components in your SQL injection match condition.

Other Considerations
Predictably, this detection pattern is less effective if your workload legitimately allows users to compose and submit SQL queries in their requests. For those cases, consider narrowly scoping an exception rule that bypasses the SQL injection rule for specific URL patterns that are known to accept such input. You can do that by using a SQL injection match condition as described in the preceding table, while listing the URLs that are excluded from checking by using a string match condition:8

Rule action: BLOCK when the request matches the SQL injection match condition and the request does not match the string match condition for excluded Uniform Resource Identifiers (URIs).

You can also mitigate other types of injection vulnerabilities against other domain-specific languages, to varying degrees, using string match conditions: by matching against known keywords, assuming they're not also legitimate input values.

A2 – Broken Authentication and Session Management
Flaws in the implementation of authentication and session management mechanisms for web applications can lead to exposure of unwanted data, stolen credentials or sessions, and impersonation of legitimate users.9 These flaws are difficult to mitigate using a WAF. Broadly, attackers rely on vulnerabilities in the way client-server communication is implemented, or they target how session or authorization tokens are generated, stored, transferred, reused, timed out, or invalidated by your application in order to obtain these credentials. After they obtain credentials, attackers impersonate legitimate users and make requests to your web applications using those tokens. For example, if an attacker obtains the JWT token that authorizes communication between your web client and the RESTful API, they can impersonate that user until the token expires by launching HTTP requests with the illicitly obtained authorization token.10

Using AWS WAF to Mitigate
Because illicit requests with stolen authorization credentials, sessions, or tokens are hard to distinguish from legitimate ones, AWS WAF takes on a reactive role. After your own application security controls are able to detect that a token was stolen, you can add that token to a blacklist AWS WAF rule. This rule blocks further requests with those signatures, either permanently or until they expire. You can also automate this reaction to reduce mitigation time. AWS WAF offers an API to interact with the service.11 For this kind of solution, you would use infrastructure-specific or application-specific monitoring and logging tools to look for patterns of compromise. Automation of AWS WAF rules is discussed in greater detail under A7 – Insufficient Attack Protection.

To build a blacklist, use a string match condition. The following table provides some example patterns:
HTTP Request Component to Match | Relevant Input Transformations to Apply | Relevant Positional Constraints | Values to Match Against
HEADER: Cookie | URL_DECODE, HTML_ENTITY_DECODE | CONTAINS | Session ID or access tokens
HEADER: Authorization | URL_DECODE, HTML_ENTITY_DECODE | CONTAINS | JWT token or other bearer authorization tokens

Note: Avoid exposing session tokens in the URI or QUERY_STRING components, because they're visible in the browser address bar or server logs and are easy to capture.

You can use various mechanisms to help detect leaked or misused session tokens or authorization tokens. One mechanism is to keep track of the client devices and locations from which a user commonly accesses your application. This gives you the ability to quickly detect when requests are made with the same tokens from an entirely different location or client device, and to blacklist those tokens for safety.

AWS WAF also supports rate-based rules. Rate-based rules trigger and block when the rate of requests from an IP address exceeds a customer-defined threshold (requests per 5-minute interval). You can combine these rules with other predicates (conditions) that are available in AWS WAF. You can enforce rate-based limits to protect your applications' authentication or authorization URLs and endpoints against brute-force attempts to guess credentials. You can also use a string match condition to match the authentication URI paths of the application:

HTTP Request Component to Match | Relevant Input Transformations to Apply | Relevant Positional Constraints | Values to Match Against
URI | URL_DECODE, HTML_ENTITY_DECODE | STARTS_WITH | /login (or relevant application-specific URLs)

This condition is then used inside a rate-based rule with the desired threshold for requests originating from a given IP address:

Rule action: BLOCK; rate limit: 2000; rate key: IP

Only requests that match the string match condition are counted. When that count exceeds 2,000 requests per 5-minute interval, the originating IP address is blocked. The minimum rate limit you can set is 2,000 requests per 5-minute interval.
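The rule just described can be sketched with the AWS WAF API as follows; this is a minimal, illustrative example in Python/boto3, and the /login path, names, and metric names are assumptions rather than values prescribed by this paper.

import boto3

waf = boto3.client("waf")  # 'waf-regional' for Application Load Balancers

def new_token():
    return waf.get_change_token()["ChangeToken"]

# String match condition for the authentication path.
login_paths = waf.create_byte_match_set(Name="login-uri", ChangeToken=new_token())["ByteMatchSet"]
waf.update_byte_match_set(
    ByteMatchSetId=login_paths["ByteMatchSetId"], ChangeToken=new_token(),
    Updates=[{"Action": "INSERT",
              "ByteMatchTuple": {"FieldToMatch": {"Type": "URI"},
                                 "TargetString": b"/login",
                                 "TextTransformation": "URL_DECODE",
                                 "PositionalConstraint": "STARTS_WITH"}}],
)

# Rate-based rule: 2,000 requests per 5-minute interval per source IP (the minimum limit).
rate_rule = waf.create_rate_based_rule(
    Name="login-brute-force", MetricName="LoginBruteForce",
    RateKey="IP", RateLimit=2000, ChangeToken=new_token(),
)["Rule"]

# Only requests matching the login-path condition count toward the limit.
waf.update_rate_based_rule(
    RuleId=rate_rule["RuleId"], ChangeToken=new_token(), RateLimit=2000,
    Updates=[{"Action": "INSERT",
              "Predicate": {"Negated": False, "Type": "ByteMatch",
                            "DataId": login_paths["ByteMatchSetId"]}}],
)

Activating this rule in a web ACL with a BLOCK action yields the behavior described above: only matching requests are counted, and the originating IP address is blocked once the count exceeds the threshold.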
A3 – Cross-Site Scripting (XSS)
Cross-site scripting (XSS) flaws occur when web applications include user-provided data in webpages that are sent to the browser without proper sanitization.12 If the data isn't properly validated or escaped, an attacker can use those vectors to embed scripts, inline frames, or other objects into the rendered page (reflection). These, in turn, can be used for a variety of malicious purposes, including stealing user credentials by using key loggers or installing system malware. The impact of the attack is magnified if that user data persists server side in a data store and is then delivered to a large set of other users.

Consider the example of a popular blog that accepts user comments. If user comments aren't correctly sanitized, a malicious user can embed a malicious script in the comments, such as:

<script src="https://malicious-site.com/exploit.js" type="text/javascript" />

The code then gets executed anytime a legitimate user loads that blog article.

Using AWS WAF to Mitigate
XSS attacks are relatively easy to mitigate in common scenarios because they require specific key HTML tag names in the HTTP request. AWS WAF has built-in capabilities to match and mitigate XSS attacks. You can use a cross-site scripting match condition to deploy rules to mitigate these attacks.13 The following table provides some common condition configurations:

HTTP Request Component to Match | Relevant Input Transformations to Apply | Justification
BODY | URL_DECODE, HTML_ENTITY_DECODE | A very common component to match if your application accepts form input. AWS WAF only evaluates the first 8 KB of the body content.
QUERY_STRING | URL_DECODE, HTML_ENTITY_DECODE | Recommended if query string parameters are reflected back into the webpage. An example is the current page number in a paginated list.
HEADER: Cookie | URL_DECODE, HTML_ENTITY_DECODE | Recommended if your application uses cookie-based parameters that are reflected back on the webpage. For example, the name of the user who is currently logged in is stored in a cookie and embedded in the page header.
URI | URL_DECODE, HTML_ENTITY_DECODE | Less common. But if your application uses friendly, dirified URLs, then parameters might appear as part of the URL path segment, not the query string (they are later rewritten server side). There are similar concerns as with query strings.

Other Considerations
This detection pattern is less effective if your workload legitimately allows users to compose and submit rich HTML, such as the editor of a content management system (CMS).14 For those cases, consider narrowly scoping an exception rule that bypasses the XSS rule for specific URL patterns that are known to accept such input, as long as there are alternate mechanisms to protect those excluded URLs.

Additionally, some image or custom data formats and match condition configurations can trigger elevated levels of false positives. Patterns that might indicate XSS attacks in HTML content can be legitimate in certain image or other data formats. For example, the SVG graphics format15 also allows a <script> tag. You should narrowly tailor XSS rules to the type of request content that's expected if HTML requests include other data formats.
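Analogous to the SQL injection sketch earlier, the XSS match condition from the table above could be created as follows. This is a hedged, minimal sketch in Python/boto3; the set name is a placeholder, and the condition would still need to be attached to a rule and web ACL.

import boto3

waf = boto3.client("waf")

def new_token():
    return waf.get_change_token()["ChangeToken"]

xss_set = waf.create_xss_match_set(Name="xss-common-components", ChangeToken=new_token())["XssMatchSet"]

# One tuple per component/transformation pair; BODY and QUERY_STRING are the most common targets.
fields = [{"Type": "BODY"}, {"Type": "QUERY_STRING"},
          {"Type": "HEADER", "Data": "cookie"}, {"Type": "URI"}]
waf.update_xss_match_set(
    XssMatchSetId=xss_set["XssMatchSetId"], ChangeToken=new_token(),
    Updates=[{"Action": "INSERT",
              "XssMatchTuple": {"FieldToMatch": field, "TextTransformation": transform}}
             for field in fields for transform in ("URL_DECODE", "HTML_ENTITY_DECODE")],
)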
A4 – Broken Access Control
This category of application flaws, new in the proposed 2017 Top 10, covers the lack of, or improper enforcement of, restrictions on what authenticated users are allowed to do. It consolidates the following categories from the 2013 Top 10: A4 – Insecure Direct Object References and A7 – Missing Function Level Access Controls. Application flaws in this category allow internal web application objects to be manipulated without the requestor's access permissions being properly validated.16 Depending on the specific workload, this can lead to exposure of unauthorized data, manipulation of internal web application state, path traversal, and file inclusion.

Your applications should properly check and restrict access to individual modules, components, or functions in accordance with the authorization and authentication scheme used by the application. Flaws in function-level access controls occur most commonly in applications where access controls weren't initially designed into the system but were added later.17 These flaws also occur in applications that take a perimeter-security approach to access validation. In these cases, the access level can be validated once at the request initialization level. However, checks aren't done further in the execution cycle as various subroutines are invoked. This creates an implicit trust that the caller code can invoke other modules, components, or functions on behalf of the authorized user, which might not always hold true. If your web application exposes different components to different users based on access level or subscription level, then you should have authorization checks performed anytime those functions are invoked.

Consider the following examples of flawed implementations for illustration:

1. A web application that allows authenticated users to edit their profile generates a link to the profile editor page upon successful authentication:

https://example.com/edit/profile?user_id=3324

The profile editor page, however, doesn't specifically check that the parameter matches the current user. This allows any user who's logged in to find information about any other user by simply iterating over the pool of user IDs, which exposes unauthorized information:

https://example.com/edit/profile?user_id=3325

2. Another example is a helper server-side script that displays or allows a download of files for a document sharing site. It accepts the file name as a query string parameter:

https://example.com/download.php?file=mydocument.pdf

Somewhere in the script code, it passes the parameter to an internal file-reading function:

$content = file_get_contents("/documents/path/{$_GET['file']}");

With no validation or sanitization, and a vulnerable server configuration, the file parameter can be exploited to have the server read and reflect any file. For example:

https://example.com/download.php?file=%2F%2Fetc%2Fpasswd

This is an example of both a directory traversal attack18 and a local file inclusion attack.19

3. Consider a modular web application, which is a pattern popular with content management systems to enable extensibility, as well as with applications using model-view-controller (MVC) frameworks. The entry point into the application is usually a router that invokes the right controller based on the request parameters, after processing common routines (such as authentication/authorization):

https://example.com/?module=myprofile&view=display

A legitimate authenticated user invoking the URL above should be able to see their own profile. A malicious user might authenticate and view their profile as well. However, they could attempt to alter the request URL and invoke an administrative module:

https://example.com/?module=usermanagement&view=display

If that particular module doesn't perform additional checks commensurate with the elevated privileges needed for administrators, it enables an attacker to gain access to unintended parts of the system.

Using AWS WAF to Mitigate
You can use AWS WAF to mitigate certain attack vectors in this category of vulnerabilities. Mitigating permission validation flaws is difficult using any WAF. This is because the criteria that differentiate good requests from bad requests are found in the context of the user (requestor), session, and privileges, and rarely in the representation of the HTTP request itself.
However, if malicious HTTP requests have a recognizable signature that legitimate requests don't have, you can write rules to match them. Also, you can use AWS WAF to filter dangerous HTTP request patterns that can indicate path traversal attempts or remote and local file inclusion (RFI/LFI). The table below illustrates a few such generic conditions:

HTTP Request Component to Match | Relevant Input Transformations to Apply | Relevant Positional Constraints | Values to Match Against
QUERY_STRING | URL_DECODE, HTML_ENTITY_DECODE | CONTAINS | ../ and ://
URI | URL_DECODE, HTML_ENTITY_DECODE | CONTAINS | ../ and ://

Also consider any other components of the HTTP request that your application uses to assemble or refer to file system paths. As with the patterns suggested in the previously discussed categories, these might be less effective if your application legitimately accepts URLs or complex file system paths.

If access to administrative modules, components, plugins, or functions is limited to a known set of privileged users, you can limit access to those functions by allowing them to be accessed only from known source locations, a whitelisting pattern (see the sketch at the end of this section).

Other Considerations
If the authorization claims are transmitted from the client as part of the HTTP request and encapsulated using JWT tokens (or something similar), you can evaluate and compare them to the requested modules, plugins, components, or functions. Consider using AWS Lambda@Edge functions to prevalidate the HTTP requests and ensure that the relevant request parameters match the assertions and authorizations in the token.20 You can use Lambda@Edge to reject nonconforming requests before they reach your backend servers.
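The whitelisting pattern referenced above (and implemented by the privileged module access restriction rule in the companion CloudFormation template described later) could look like the following minimal Python/boto3 sketch. The /admin/ prefix and the trusted CIDR range are placeholders, not values from this paper.

import boto3

waf = boto3.client("waf")

def new_token():
    return waf.get_change_token()["ChangeToken"]

# Condition 1: requests whose URI falls under the administrative prefix (placeholder value).
admin_uri = waf.create_byte_match_set(Name="admin-uri", ChangeToken=new_token())["ByteMatchSet"]
waf.update_byte_match_set(
    ByteMatchSetId=admin_uri["ByteMatchSetId"], ChangeToken=new_token(),
    Updates=[{"Action": "INSERT",
              "ByteMatchTuple": {"FieldToMatch": {"Type": "URI"},
                                 "TargetString": b"/admin/",
                                 "TextTransformation": "URL_DECODE",
                                 "PositionalConstraint": "STARTS_WITH"}}],
)

# Condition 2: the trusted source addresses that are allowed to reach those paths.
trusted_ips = waf.create_ip_set(Name="admin-trusted-sources", ChangeToken=new_token())["IPSet"]
waf.update_ip_set(
    IPSetId=trusted_ips["IPSetId"], ChangeToken=new_token(),
    Updates=[{"Action": "INSERT",
              "IPSetDescriptor": {"Type": "IPV4", "Value": "203.0.113.0/24"}}],
)

# Rule: BLOCK when the URI matches the admin prefix AND the source IP is NOT in the trusted set.
rule = waf.create_rule(Name="admin-module-whitelist", MetricName="AdminModuleWhitelist",
                       ChangeToken=new_token())["Rule"]
waf.update_rule(
    RuleId=rule["RuleId"], ChangeToken=new_token(),
    Updates=[
        {"Action": "INSERT", "Predicate": {"Negated": False, "Type": "ByteMatch",
                                           "DataId": admin_uri["ByteMatchSetId"]}},
        {"Action": "INSERT", "Predicate": {"Negated": True, "Type": "IPMatch",
                                           "DataId": trusted_ips["IPSetId"]}},
    ],
)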
based POST based) as a global variable Since then this feature has been deprecated and removed altogether Coupled with a vulnerable version of PHP it allowed for overwriting internal server va riables via HTTP requests: http://examplecom/ ?_SERVER[DOCUMENT_ROOT]=http://badco m/badhtm In a vulnerable application this embeds a malicious site address in the site that users visit Using AWS WAF to Mitigate You can use AWS WAF to mitigate attempts to exploit server misconfigurations in a variety of ways as long as the HTTP request patterns that attempt to exploit them are recognizable These patterns however are also application stack specific They depend on the operating syst em web server frameworks or languages your code leverages Generic rules that m ight not apply to your specific stack can be useful to you for nuisance protection because they block This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 14 requests that would otherwise be invalid so your backend servers don’t have to process them Here are a few strategies you can use :  You should block access to t he paths to administrative consoles configuration or status pages that are installed or enabled by default Alternatively you should restrict access to trusted sour ce IP addresses if they’re in use You should do this regardless of whether you specifically disabled or removed them (future actions might reactivate or reinstall them)  Protect against known attack patterns that are specific to your platform especially if you have legacy applications that rely on old platform behavior For example if you’re using PHP you might choose to block requests with a query string that contain s “_SERVER[ “ A whitelisting rule pattern similar to the one discussed previously for the Broken Access Control category can help with whitelisting specific subservices such as the administrative console of a WordPress site Other Considerations Also consider deploying Amazon Inspector to verify your software configurations22 It’s an automated security assessment service that helps improve the security and compliance of applications that are deployed on AWS Amazon Inspector automatically assesses applic ations for vulnerabilities or deviations from best practices To help you get started quickly Amazon Inspector includes a knowledge base of hundreds of rules that are mapped to common security best practices and vulnerability definitions Examples of buil tin rules include checking for the enablement of the remote root login or the installation of vulnerable software versions These rules are regularly updated by AWS security researchers In addition to detective controls you can provide the best protection against attacks in this category by implementing and maintaining secure configurations Configuration guidelines such as the CIS Benchmarks23 can help you deplo y secure configurations You can use s ervices such as AWS Config24 and Amazon This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 15 EC2 Systems Manager25 to help you track and manage configuration changes over time A6 – Sensitive Data Exposure Sensitive data exposure application flaws are typically harder to mitigate using web 
application firewalls26 These flaws commonly involve encryption processes that have been deficiently implemented Some examples are the lack of encryption on transport ed or stor ed sensitive data or using vulnerable legacy encryption ciphers 27 where malicious parties can intercept and decode your data Less commonly there can be flaws in application or protocol implementations or client browsers which can also lead to the exposure of sensitive data Exploits that ultimately lead to sensitiv e data exposure can span multiple OWASP categories A security misconfiguration that allows for the use of weak cryptographic algorithms leads to encryption downgrades and ultimately to an attacker being able to captur e the data stream to decode sensitiv e data Using AWS WAF to Mitigate Because the HTTP request is evaluated by AWS WAF after the incoming data stream has been decrypted its rules have no impact on enforcing good encryption hygiene at the connection level Less commonly if HTTP requests that can lead to sensitive data exposure have detectable patterns you can mitigate them by using string match conditions that target those patterns However t hese patterns are application specific and require more in depth knowledge of those applications For example if your application relies heavily on the SHA 1 hashing algorithm 28 malicious users m ight attempt to cause a hash collision using a pair of specially crafted PDF documents29 If your application allows uploads it would be beneficial to set up a rule that block s requests that contain portions of the base64 encoded representation of those files in the body When you attempt to b lock uploaded file signatures using AWS WAF take into account the limits the service imposes on such rules Uploaded data is base64 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 16 encoded Therefore your string match condition values have to be in base64 representation WAF searches the first 8 KB of the HTTP request body or less if the multi part encoding of the request body contains other field parameters that preced e the file data itself The relevant signature of the matched pattern can be up to 50 bytes in size Most standardized file formats al so have uniform preambles so the first several bytes of the file are common to all files of that format This forces you to derive the relevant signature from data further in the file Other Considerations You can use other services in the AWS ecosystem to provide c ontrol over the encryption protocols and ciphers that are used at the connection level:  For Elastic Load Balancing Classic Load Balancers 30 you can select predefined or customized security policies 31 These policies specify the protocols and ciphers that the load balancers can use to neg otiate secure connections with clients  For Elastic Load Balancing Application Load Balancers 32 you can select from a set of predefined security policies 33 As with the Classic Load Balancers these policies specify the allowed protocols and ciphers  For Amazon CloudFront 34 our content delivery network (CDN) service you can configure the minimum SSL protocol version you want to support35 as well as the SSL protocols you want CloudFront to use when it connect s to your custom origins A7 – Insufficient Attack Protection This category has bee n proposed for the new 2017 Top 10 and it reflects 
the reality that attack patterns can change quickly Malicious actors are able to adapt their toolsets quickly to exploit new vulnerabilities and launch large scale automated attacks to detect vulnerable systems This category focuses strongly on your ability to react in a timely manner to new attack vectors and abnormal request patterns or to application flaws that are discovered A broad range of attack vectors fall into this category with many overlap ping other categories To better understand them ask yourself the following questions: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 17  Can you enforce a certain level of hygiene at the request level? Are there HTTP request components that your application expects to exist or can’t operate without ?  Are you able to detect and recognize when your application is targeted with unusual request patterns or high volume? Do you have systems in place that can do that detection in an automated fashion? Are these systems capable of reacting to and blocking such un wanted traffic?  Are you able to detect when a malicious actor launches a directed targeted attack against your application trying to find and exploit flaws in your application ? Is this capability automated s o that you can react in near real time?  How fast can you deploy a patch to a discovered application flaw or vulnerability in your application stack and mitigate attacks against it? Do you have mechanisms in place to detect the effectiveness of the patch after deployment? Using AWS WAF to Mitigat e You can use AWS WAF to enforce a level of hygiene for inbound HTTP requests Size constraint conditions36 help you build rules that ensure that components of HTTP requests fall within specifically defined ranges You can use them to avoid processing abnormal requests An example is to limit the size of URIs or query strings to values that make sense to the application Also you can use them to require the pre sence of specific headers such as an API key for a RESTful API HTTP Request Component to Match Relevant Input Transformations to Apply Comparison Operator Size URI NONE GT (greater than) Maximum expected URI path size in bytes QUERY_STRING NONE GT Maximum expected size of the query string in bytes BODY NONE GT Maximum expected request body size in bytes HEADER :xapikey NONE LT (less than) 1 (or actual size of the API key) HEADER :cookie NONE GT Maximum expected cookie size in bytes This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 18 You can use t he example conditions described in this section with a blacklisting rule to reject requests that do n’t conform to the limits For detecting abnormal request patterns you can use AWS WAF’s ratebased rules that trigger when the rate of requests from an IP address exceeds your defined threshold (request s per 5min interval ) You can combine t hese rules with other predicates (conditions) that are available in AWS WAF For example you can combine a ratebased rule with a string match rule to only count requ ests with a particular user agent (say user agent =”abc”) This rule combination makes sure that only requests with user agent=”abc” are counted towards the 
determination of the rate violation by that IP address A key advantage of AWS WAF is its programmability You can configure and modify AWS WAF web access control lists (ACLs) rules and conditions by using a programmatic API at any time Any changes normally take effect within a minute even for our global se rvice that’s integrated with Amazon CloudFront Using the API you can build automated processes that are able to react to application specific abnormal conditions and take actions to block suspicious sources of traffic or notify operators for further inve stigation These automations can operate in real time invoked via trap or honeypot URL paths They can also be reactive based on the analysis and correlation of application log files and request patterns As mentioned earlier AWS provides a set of capab ilities called the AWS WAF Security Automations 37 These tools build upon the patterns highlighted previously They use several other AWS services most notably AWS Lambda for event driven computing and provide the following capabilities:38  Scanner and probe mitigation Maliciou s sources scan and probe internet facing web applications for vulnerabilities They send a series of requests that generate HTTP 4xx error codes You can use this history to help identify and block IP addresses from malicious sources This solution creates an AWS Lambda function that automatically parses access logs counts the number of bad requests from unique source IP addresses and updates AWS WAF to block further scans from those addresses This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 19  Known attacker origin mitigation A number o f organizations maintain reputation lists of IP addresses that are operated by known attackers such as spammers malware distributors and botnets This solution leverages the information in these reputation lists to help you block requests from malicious IP addresses  Bots and scraper mitigation Operators of publicly accessible web applications have to trust that the clients accessing their content identify themselves accurately and that they will use services as they’re intended However some automate d clients such as content scrapers or bad bots misrepresent themselves to bypass restrictions This solution implements a honeypot that helps you identify and block bad bots and scrapers In this solution the honeypot URL is listed in the ‘disallow’ se ction of the robotstxt file39 Any IP that access es this URL is therefore deemed malicious or noncompliant and is blacklisted Additionally there are ways you m ight be able to use AWS WAF to mitigate newly discovered application flaws or vulnerabilities in your stack They are discussed in greater detail later (see A9 – Using Components with Known Vulnerabilities ) A8 – Cross Site Request Forgery (CSRF) Cross site request forgery attacks predominantly target state changing functions in your web applications40 Consider any URL path and HT TP request that is intended to cause a state change ( for example form submission requests) Are there any mechanisms in place to ensure the user intended to take that action ? 
Without such mechanisms there isn’t an effective way to determine whether the r equest is legitimate and wasn’t forged by a malicious party Depending solely on client side attributes such as session tokens or source IP addresses isn’t an effective strategy because malicious actors can manipulate and replicate these values CSRF att acks take advantage of the fact that all details of a particular action are predictable (form fields query string parameters) Attacks are carried out in a way that take s advantage of other vulnerabilities such as cross site scripting or file inclusion —so users aren’t aware that the malicious action is triggered using their credentials and active session This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 20 Using AWS WAF to Mitigate You can mitigate CSRF attacks by doing the following :  Including unpredictable tokens in the HTTP request that triggers the action  Prompting users to authenticate for sending action requests  Introducing CAPTCHA challenges for sending action requests41 The first option is transparent to end users —forms can include unique tokens as hidden form fields custom headers or less desirably query string parameters The latter two options can introduce extra friction for end users and are generally only implemented for sensitive action requests Additionally CAPTCHAs can be circumvented by motivated actors and value combinations can also repeat42 As such they are a less desirable mitigation control f or CSRF You can use AWS WAF to check for the presence of those unique tokens For example if you decide to leverage a random universally unique identifier (UUIDv4)43 as the CSRF token and expect the value in a custom HTTP header named xcsrftoken you c an implement a size constraint condition : HTTP Request Component to Match Relevant Input Transformations to Apply Comparison Operator Size HEADER :xcsrftoken NONE EQ (equal to) 36 (bytes/ASCII characters canonical format) You would build a blocking rule where requests do not match this condition (negated) You can further narrow the scope of the rule by only matching POST HTTP requests for example Build a rule using the negated condition above and an additional string match condition for: HTTP Request Component to Match Relevant Input Transformations to Apply Relevant Positional Constraints Values to Match Against METHOD LOWERCASE EXACTLY post This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 21 Other Considerations Such rules are effective in filtering out CSRF attacks that circumvent your unique tokens However they are n’t effective at validating if the request carries invalid wrong stale or stolen tokens This is because HTTP request introspection lacks access to your application context Therefore you need a server side mechanism in your application to track the expected token or and ensure it’s used exactly once As an example the server sends a simple form to the client browser along with the embedded unique token as a hidden field At the same time it retains in the current server side session store the token value it expects the browser to supply when the user submits the form After the user submits the form a POST 
request is made to the s erver that includes the unique hidden token The server can safely discard any POST requests that don’t contain the expected value for the supplied session It should clear the value from the session store after it’s used up which ensur es that the value doesn’t get reused A9 – Using Components with Known Vulnerabilities Currently most web applications are highly composed They use frameworks and libraries from a variety of sources commercial or open source One challenge is keeping up to date with the m ost recent versions of these components This is further exacerbated when underlying libraries and frameworks use other components themselves Using components with known vulnerabilities is one of the most prevalent attack vectors44 They can help open up the attack surface of your web application to some of the other attack vectors discussed in this document The decision to use such components can be an active trade off to maintain compatibility with legacy code Or it’s possible to inadvertently use vulnerable components if you’re using components that depend on vulnerable subcomponents Mitigating vulnerabilities in such components is challenging be cause not all of them are reported and tracked by central clearinghouses such as Common Vulnerabilities and Exposures (CVE) 45 This puts the responsibility on the application developers to track the status of the components individually with the respective vendor author or provider Often vulnerabilities are addressed This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 22 in new versions of the components including new enhancements rather than fixing existing versions This a dds to the amount of work that developers have to perform to implement test and deploy the new versions of these components Using AWS WAF to Mitigate The primary mechanism to mitigate known vulnerabilities in components is to have a comprehensive proces s in place that addresses the lifecycle of such components You should h ave a way to identify and track the dependencies of your application and the dependencies of the underlying components Also you should have a monitoring process in place to track the security of these components Establish a software development process and policy that factors in the patch or release frequency of underlying component s and acceptable licensing models This can help you react quickly when component providers address vulnerabilities in their code Additionally you can use AWS WAF to filter and block HTTP requests to functionality of such components that you are n’t using in your applications This helps reduce the attack surface of those components if vulnerabilities are discovered in functionality you’re not using Does your application use server side included components? 
These are usually files that contain code that is loaded at runtime to assemble the HTTP response directly or indirectly Examples are Apache Server side Includes46 or code that load s via PHP include47 or require48 statements Other languages and frameworks have similar constructs It’s a best practice that these components are n’t deployed in the public web path on your web server in the first place However sometimes this recommendation is ignored for a variety o f reasons If the se components are present in the public web path these files aren’t designed to be accessed directly Nevertheless accessing them m ight expose internal application information or provide vectors of attack Consider using a string match condition to block access to such URL prefixes: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 23 HTTP Request Component to Match Relevant Input Transformations to Apply Relevant Positional Constraints Values to Match Against URI URL_DECODE STARTS_WITH /includes/ (or relevant prefix in your application) Similarly if your application uses third party components but uses only a subset of the functionality consider blocking exposed URL paths to functionality in those components that you don’t use by using similar AWS WAF conditions Other Considerations Penetration testing can also be an effective mechanism to discover vulnerabilities49 You can integrate it into your deployment and testing processes to both detect potential vulnerabilities as well as to ensure that deployed patches correctly mitigate the targeted application flaws The AWS Marketplace50 offers a wide range of vulnerability testing solutions from our partner vendors that are designed to help you get started easily and quickly Keep in mind that AWS requires customers to obtain permission51 before conducting such tests on resources that are hosted in AWS However some of the solutions available in the AWS Marketplace have been preauthorized and you can skip the authorization step They are marked as such in the solution t itle A10 – Underprotected APIs Another new category proposed for the 2017 Top 10 Underprotected APIs focuses on the target of potential attacks rather than the specific application flaw patterns that can be exploited This category recognizes the preva lence and anticipated future growth of APIs Currently entire applications are published that don’t have a user facing UI Instead they ’re available as APIs that other application publishers can use to build loosely coupled applications Many application s can have both user UIs and APIs whether those APIs are intended to be consumed by third parties or not This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 24 The attack vectors are often the same as discussed in categories A1 through A9 and are common with more traditional web applications that are end us er facing However because APIs are designed for programmatic access they do provide some additional challenges around security testing It’s easier to develop security test cases for u serfacing UIs that have simpler data structures and more discrete high delay steps due to human interaction In contrast APIs are often designed to work 
with more complex data structures and use a wider range of request frequencies and input values This is the case even if they ’re standardized and use wellknown protoc ols such as RESTful APIs52 or SOAP 53 Using AWS WAF to Mitigate Because the attack vectors for APIs are often the same as for traditional web applications the mitigation mechanisms discussed throughout this document also apply to APIs in a similar manner You can use AWS WAF in a variety of ways to mitigate these different attack vectors A key component that needs hardening is th e protocol parser itself With standardized protocols it ’s relatively easy to extrapolate the parser used With SOAP you use XML54—and with RESTful APIs you will likely use JSON 55 although you can also use XML YAML 56 or other formats Thus you can provide a critical success factor by effectively securing the configuration of the parser component and ensuring any vulnerabilities are mitigated As specific input patterns are discovered that would attempt to exploit flaws in the parser you m ight be able to use AWS WAF string match conditions or size restrictions for the request body to block such request patterns Old Top 2013 A10 – Unvalidated Red irects and Forwards Most websites and web applications contain mechanisms to redirect or forward users to other pages —internal or partner sites If these mechanisms don't validate the redirect or forward requests 57 it’s possible for malicious parties to use your legitimate domain to direct users to unwanted destinations These links use your legitimate and reputable domain to trick users This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 25 Consider the followin g example: You run a video sharing site and operate a URL shortener mechanism to enable users to share videos over text messages on mobile devices You use a script to create the URLs: https://examplecom/link?target= https%3A%2F%2Fexamplecom%2Fvideo%2Fe 439853%3Fpos%3D200%2 6mode%3Dfullscreen Users receive a URL like below and it takes them to the correct content page: https://examplecom/to? vrejkR6T If your link generator script doesn’t validate the acceptable input domains for the target page a malicious user can generate a link to an unwanted site: https://examplecom/link?target= https%3A%2F%2Fbadsitecom%2Fmalware They can then package it and send it to users as it would originate from your site: https://examplecom/to? 
br09FtZ1 Using AWS WA F to Mitigate The first step in mitigation is understanding where redirects and forwards occur in your application Discovering what URL request patterns cause redirects directly or indirectly and under what conditions helps you to build a list of poten tially vulnerable areas You should perform t he same analysis for any exposed third party components that your application uses in case they include redirect functionality If redirects and forwards are generated in response to HTTP requests from end users as in the example above then you can use AWS WAF to filter the requests and maintain a whitelist of domains that are trusted for redirect/forwarding This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 26 purposes You can use a string match condition that target s the HTTP request component where the target parameter is expected to match a whitelist In the example above the set of conditions might look like the following : 1 Whitelist of allowed domains for redirects (block requests if no list value is matched): HTTP Request Component to Match Relevant Input Transformations to Apply Relevant Positional Constraints Values to Match Against QUERY_STRING URL_DECODE CONTAINS target=https://examplecom QUERY_STRING URL_DECODE CONTAINS target=https://partnersitecom 2 Match only specific HTTP requests (to the redirector or router scripts): HTTP Request Component to Match Relevant Input Transformations to Apply Relevant Positional Constraints Values to Match Against URI URL_DECODE STARTS_WITH /link You should combine these conditions in a single AWS WAF rule which ensur es that both conditions have to be met for requests to be matched Companion CloudFormation Template We’ve prepared a n AWS CloudFormation template58 that contains a web ACL and the condition types and rules recommended in this document You can use the template to provision these resources with just a few clicks (full API support is also available) Note that the template is designed as a starting poi nt for you to build upon —and not as a production ready comprehensive set of rules For more information about working with CloudFormation templates see Learn Template Basics 59 The template is available at: https://s3us east2amazonawscom/awswaf owasp/owasp_10_baseyml The following example rules are included in the template: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 27  Bad sources of traffic A generic IP block list rule that allows you to block requests from identified bad sources of traffic  Broken access control : o A path traversal and file injection rule that detects common file system path traversal as well as local and remote file injection (LFI/RFI) patterns to block suspicious requests o A privileged module access restriction rule that limits access for administrative modules to known source IPs only You can configure one path prefix and source IP address through the template You can add additional patterns later by changing the conditions directly For more information see Creating and Configuring a Web Access Cont rol List 60  Broken authentication and session management A block list that allows you to block illicit requests 
that use stolen or hijacked authorization credentials such as JSON Web Tokens or session IDs  Cross site request forgery (CSRF) A rule that e nforces the existence of CSRF mitigating tokens  Cross site scripting (XSS) A rule that mitigates XSS attacks in common HTTP request components  Injection A SQL injection rule that mitigates SQL injection attacks in common HTTP request components  Insufficient attack protection A request size hygiene rule that allows you to configure the maximum size of various HTTP request components by using template parameters and block abnormal requests that exceed those maximum sizes  Security misconfiguratio ns A rule that detects some exploits of PHP specific server misconfigurations This rule m ight be less effective if you aren’t running PHP based applications but it can still be valuable to filter out unwanted automated HTTP requests that probe for PHP vulnerabilities  The use of components with known vulnerabilities A rule that restrict s access to publicly exposed URL paths that should n’t be directly accessible such as server side include components or component features that aren’t being used by your application This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 28 We’ve chosen to package the example AWS WAF rule set as a CloudFormation template because it provides an easy and repeatable way to provision the whole rule set with a few simple clicks The AWS CloudFormation documentation provides an easy tofollow walkthrough about how to create a stack 61 which is a collection of resources you can manage as a single unit Follow those instructions an d provide the template on the Select Template page Choose the option to Upload a template to Amazon S3 and provide the downloaded template from your local computer Otherwise you can simply paste the template URL ( https://s3us east2amazonawscom/awswaf owasp/owasp_10_baseyml ) in the Specify an Amazon S3 template URL box On the Specify Details page you can configure the template’s parameters A few key parameters to emphasize are:  Apply to WAF This parameter a llows you to select whether you want to use the template to deploy a rule set for Amazon CloudFront web distributions or Application Load Balancers (ALB) in the current region AWS WAF web ACLs get applied either to CloudFront web distributions or ALBs depending on which service you use to deliver your application The same stack can ’t be used for both but you can deploy multiple stacks You can also change this parameter’s value later by updating the stack  Rule effect This parameter determines the effect of you r rule set To minimize disruption we recommend that you start with a rule set that counts matching requests You can measure the effectiveness of your rules that wa y without impacting traffic When you ’re confident about the effectiveness of your rules you can deploy a stack that will block matching requests Continue following the AWS CloudFormation walkthrough instructions to deploy the stack After you deploy the stack you must associate the web ACL62 that’s deployed by the stack with your load balancer or web distribution resources to be able to use the rule set This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate 
OWASP’s Top 10 Web Application Vulnerabilities Page 29 Conclusion You can use AWS WAF to help you protect your websites and web applications against various attack vectors at the HTTP pro tocol level As we discussed in relation to OWASP security flaws AWS WAF is very effective at mitigating vulnerabilities to the extent that you can detect these attack patterns in HTTP requests Additionally you can enhance the capabilities of AWS WAF with other AWS services to build comprehensive security automations A set of such tools is available on our website in the form of the AWS WAF Security Automations 63 These tools enable you to build a set of protections that can react to the changing type of attacks your applications m ight be facing The solution provides several easy todeploy automations in the form of a CloudFormation template for rate based IP black listing reputation list IP blacklisting scanner and probe mitigation bot and scraper detection and blocking This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 30 Contributors The following individuals and organizations contributed to this document:  Vlad Vlasceanu Sr Solutions Architect Amazon Web Services  Sundar Jayashekar Sr Product Manager Amazon Web Services  William Reid Sr Manager Amazon Web Services  Stephen Quigg Solutions Architect Amazon Web Services  Matt Nowina Solutions Architect Amazon Web Services  Matt Bretan Sr Consultant A mazon Web Services  Enrico Massi Security Solutions Architect Amazon Web Services  Michael StOnge Cloud Security Architect Amazon Web Services  Leandro Bennaton Security Solutions Architect Amazon Web Services Further Reading For additional informatio n see the following:  AWS WAF Security Automations: https://awsamazoncom/answers/security/aws wafsecurity automations/  OWASP Top 10 – 2017 rc1: https://githubcom/OWASP/Top10/raw/master/2017/OWASP%20Top %2010%20 %202017%20RC1 Englishpdf  OWASP Top 10 – 2013 : https://wwwowasporg/indexphp/Top_10_2013 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 31 Document Revisions Date Description July 2017 First publication 1 https://wwwowasporg/ 2 https://wwwowasporg/indexphp/Category:OWASP_Top_Ten_Project 3 https://awsamazoncom/waf/ 4 https://awsamazoncom/cloudfront/ 5 https://awsamazoncom/elasticloadbalancing/applicationloadbalancer/ 6 https://wwwowasporg/indexphp/Top_10_2013 A1Injection 7 http://docsawsamazoncom/waf/latest/developerguide/web aclsql conditionshtml 8 http://docsawsamazoncom/w af/latest/developerguide/web aclstring conditionshtml 9 https://wwwowasporg/indexphp/Top_10_2013 A2 Broken_Authentication_and_Session_Managemen t 10 https://jwtio/ 11 http://docsawsamazoncom/waf/latest/APIReference/Welcomehtml 12 https://wwwowasporg/indexphp/Top_10_2013 A3Cross Site_Scripting_(XSS) 13 http://docsawsamazoncom/waf/latest/developergu ide/web aclxss conditionshtml 14 https://enwikipediaorg/wiki/Content_management_system 15 https://developermozillaorg/en US/docs/Web/SVG 16 https://wwwowasporg/indexphp/Top_10_2013 A4 Insecure_Direct_Object_References Notes This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: 
https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 32 17 https://wwwowasporg/indexphp/Top_10_2013 A7 Missing_Function_Level_Access_Control 18 https://enwikipediaorg/wiki/Directory_traversal_attack 19 https://enwikipediaorg/wiki/File_inclusion_vulnerability 20 http://docsawsamazoncom/lambda/latest/dg/lambda edgehtml 21 https://wwwowasporg/indexphp/Top_10_2013 A5 Security_Misconfiguration 22 https://awsamazoncom/inspector/ 23 https://wwwcisecurityorg/cis benchmarks/ 24 https://awsamazoncom/config / 25 https://awsamazoncom/ec2/systems manager/ 26 https://wwwowasporg/indexphp/Top_10_2013 A6 Sensitive_Da ta_Exposure 27 https://enwikipediaorg/wiki/Cipher 28 https://enwikipediaorg/wiki/SHA 1 29 https://shatteredio/ 30 http://docsawsamazoncom/elasticloadbalancing/latest/classic/introduction html 31 http://docsawsamazoncom/elasticloadbalancing/latest/classic/elb ssl security policyhtml 32 http://docsawsamazoncom/ elasticloadbalancing/latest/application/introdu ctionhtml 33 http://docsawsamazoncom/elasticloadbalancing/latest/application/create https listen erhtml 34 http://docsawsamazoncom/AmazonCloudFront/latest/DeveloperGuide/Int roductionhtml 35 http://docsawsamazoncom/AmazonCloudFront/latest/DeveloperGuide/dis This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 33 tribution webvalues specifyhtml#DownloadDistValuesMinimumSSLProtocolVersion 36 http://docsawsamazoncom/waf/latest/developerguide/web aclsize conditionshtml 37 https://awsamazoncom/answers/security/aws wafsecurity automations/ 38 https://awsamazoncom/lambda/ 39 https://enwikipediao rg/wiki/Robots_exclusion_standard 40 https://wwwowasporg/indexphp/Top_10_2013 A8Cross Site_Request_Forgery_(CSRF) 41 https://enwikipediaorg/wiki/CAPTCHA 42 https://enwikipediaorg/wiki/CAPTCHA#Circumvention 43 https://enwikipediaorg/wiki/Universally_unique_identifier 44 https://wwwo wasporg/indexphp/Top_10_2013 A9 Using_Components_with_Known_Vulnerabilities 45 http://cvemitreorg/ 46 https://httpdapacheorg/docs/current/howto/s sihtml 47 http://phpnet/manual/en/functionincludephp 48 http://phpnet/manual/en/functionrequirephp 49 https://enwikipediaorg/wiki/Penetration_test 50 https://awsamazoncom/marketplace/ search/results?x=0&y=0&searchTerm s=vulnerability+scanner&page=1&ref_=nav_search_box 51 https://awsamazoncom/security/penetration testing/ 52 https://enwikipediaorg/wiki/Representational_state_transfer 53 https://enwikipediaorg/wiki/SOAP 54 https://wwww3org/XML/ 55 http://wwwjsonorg/ 56 http://yamlorg/ 57 https://wwwowasporg/indexphp/Top_10_2013 A10 Unvalidated_Redirects_and_Forwards This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities Page 34 58 https://awsamazoncom/cloudformation/ 59 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/getting startedtemplatebasicshtml 60 http://docsawsamazon com/waf/latest/developerguide/web aclhtml 61 http://docsawsamazoncom/AWSCloudFormation/latest/UserGuide/cfn console create stackhtml 62 http://docsawsamazoncom/waf/latest/developerguide/web aclworking withhtml#web aclassociating cloudfront distrib ution 63 https://awsamazoncom/answers/security/aws wafsecurity 
automations/
General
WordPress_Best_Practices_on_AWS
Best Practices for WordPress on AWS Reference architecture for scalable WordPress powered websites First Published December 2014 Updated October 19 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Simple deployment 1 Considerations 1 Available approaches 1 Amazon Lightsail 2 Improving performance and cost efficiency 4 Accelerating content delivery 4 Database caching 7 Bytecode caching 7 Elastic deployment 8 Reference architecture 8 Architecture components 9 Scaling the web tier 9 Stateless web tier 11 WordPress high availability by Bitnami on AWS Quick Starts 14 Conclusion 16 Contributors 16 Document revisions 16 Appendix A: Cl oudFront configuration 17 Origins and behaviors 17 CloudFront distribution creation 17 Appendix B: Plugins installation and configuration 20 AWS for WordPress plugin 20 Static content configuration 26 Appendix C: Backup and recovery 29 Appendix D: Deploying new plugins and themes 31 Abstract This whitepaper provides system administrators with specific guidance on how to get started with WordPress on A mazon Web Services (AWS) and how to improve both the cost efficiency of the deployment and the end user experience It also outlines a reference architecture that addresses common scalability and high availability requirements Amazon Web Services Best Practices for WordPres s on AWS Page 1 Introduction WordPress is an open source blogging tool and content management system (CMS) based on PHP and MySQL that is used to power anything from personal blogs to high traffic websites When the first version of WordPress was released in 2003 it was not built with modern elastic and scalable cloud based infrastructures in mind Through the work of the WordPress community and the release of various WordPress modules the capabilities of this CMS solution are constantly expanding Today it is possible to build a WordPress architecture that takes advantage of many of the benefits of the AWS Cloud Simple deployment For low traffic blogs or websites without strict high availability requirements a simple deployment of a single serve r might be suitable This deployment isn’t the most resilient or scalable architecture but it is the quickest and most economical way to get your website up and running Considerations This discussion starts with a single web server deployment There may be occasions when you outgrow it for example: • The virtual machine that your WordPress website is deployed on is a single point of failure A problem with this instance cause s a loss of service for your website • Scaling resources to improve performance can only be achieved by “vertical scaling ;” that is by increasing the size of the virtual machine running your WordPress website Available approaches AWS has a number of different options for provisioning virtual machines There 
are three main ways to host your own WordPress website on AWS: • Amazon Lightsail • Amazon Elastic Compute Cloud (Amazon EC2) • AWS Marketplace Amazon Web Services Best Practices for WordPres s on AWS Page 2 Amazon Lightsail is a service that enable s you to quickly launch a virtual private server (a Ligh tsail instance) to host a WordPress website Lightsail is the easiest way to get started if you don’t need highly configurable instance types or access to advanced networking features Amazon EC2 is a web service that provides resizable compute capacity so you can launch a virtual server within minutes Amazon EC2 provides more configuration and management options than Lightsail which is desirable in more advanced architectures You have administrative access to your EC2 instances and can install any software packages you choose including WordPress AWS Marketplace is an online store where you can find bu y and quickly deploy software that runs on AWS You can use oneclick deployment to launch preconfigured WordPress images directly to Amazon EC2 in your own AWS account in just a few minutes There are a number of AWS Marketplace vendors offering ready torun WordPress instances This whitepaper cover s the Lightsail option as the recommended implementation for a single server WordPress website Amazon Lightsail Lightsail is the easiest way to get started on AWS for developers small businesses students and other users who need a simple virtual private server (VPS) solution The service abstracts many of the more complex elements of infrastructure management away from the user It is therefore an ideal starting point if you have less infrastructure experience or when you need to focus on running your website and a simplified product is sufficient for your needs With Amazon Lightsail you can choose Windows or Linux/Unix operating systems and popular web applications including WordPr ess and deploy these with a single click from preconfigured templates As your needs grow you have the ability to smoothly step outside of the initial boundaries and connect to additional AWS database object storage caching and content distribution se rvices Selecting an Amazon Lightsail pricing plan A Lightsail plan defines the monthly cost of the Lightsail resources you use to host your WordPress website There are a number of plans available to co ver a variety of use Amazon Web Services Best Practices for WordPres s on AWS Page 3 cases with varying levels of CPU resource memory solid state drive (SSD) storage and data transfer If your website is complex you may need a larger instance with more resources You can achieve this by migrating your server to a larger plan using the web console or as described in the Amazon Lightsail CLI documentation Installing WordPress Lightsail provides templates for commonly used applications such as WordPress This template is a great starting point for running your own WordPress website as it comes preinstalled with most of the software you need You can install additional software or customize the software configuration by using the in browser terminal or your own SSH client or via the WordPress administration web i nterface Amazon Lightsail has a partnership with GoDaddy Pro Sites product to help WordPress customers easily manage their instances for free Lightsail WordPress virtual servers are preconfigured and optimized for fast performance and security making it easy to get your WordPress site up and running in no time Customers running multiple WordPress instances find it challenging 
and time consuming to update maintain and manage all of their sites With this integration you can easily manage your multiple WordPress instances in minutes with only a few clicks For more information about managing WordPress on Lightsail refer to Gettin g started using WordPress from your Amazon Lightsail instance Once you are finished customizing your WordPress website AWS recommend s that you take a snapshot of your instance A snapshot is a way to create a backup image of your Lightsail instance It is a copy of the system disk and also stores the original machine configuration (that is memory CPU disk size and data transfe r rate) Snapshots can be used to revert to a known good configuration after a bad deployment or upgrade This snapshot enable s you to recover your server if needed but also to launch new instances with the same customizations Recovering from failure A single web server is a single point of failure so you must ensure that your website data is backed up The snapshot mechanism described earlier can also be used for this purpose To recover from failure you can restore a new instance from your m ost recent Amazon Web Services Best Practices for WordPres s on AWS Page 4 snapshot To reduce the amount of data that could be lost during a restore your snapshots must be as recent as possible To minimize the potential for data loss ensure that snapshots are taken on a regular basis You can schedule automatic sna pshots of your Lightsail Linux/Unix instances For instructions refer to Enabling or disabling automatic snapshots for instances or disks in Amazon Lightsail AWS recommend s that you use a static IP —a fixed public IP address that is dedicated to your Lightsail account If you need to replace your instance with another one you can reassign the static IP to the new instance In this way you don’t have to reconfigure any external systems (such as DNS records) to point to a new IP address every time you want to replace your instance Improving performance and cost efficiency You may eventually outgrow your single server deployment In this case you may need to consider options for improving your website’s performance Before migrating to a multi server scalable deployment (discuss ed later in this white paper ) there are a number of performance and cost efficiencies you can apply These are good practices that you should follow anyway even if you do move to a multi server architecture The following sections introduce a number of options that can improve aspects of your WordPress website’s performance and scalability Some can be applied to a single server deployment whereas others take advantage of the scalability of multiple servers Many of those modifications require the use of one or more WordPress plugins Although various options are available W3 Total Cache is a popular choice that combines many of those modifications in a single plugin Accelerating content delivery Any WordPress website needs to deliver a mix of static and dynamic content Static content includes images JavaScript files or style sheets Dynamic content includes anything generated on the server side using the WordPress PHP code ; for example elements of your site that are generated from the database or personalized to each viewer An important aspect of the end user experience is the network latency involved when delivering the previous content to users around the world Accelerating the delivery of the previous content improve s the end user experience especially users geographically Amazon Web Services Best 
Practices for WordPres s on AWS Page 5 spread across the globe This can be achieved with a Content Delivery Network (CDN) such as Amazon CloudFront Amazon CloudFront is a web service that provi des an easy and cost effective way to distribute content with low latency and high data transfer speeds through multiple edge locations across the globe Viewer requests are automatically routed to a suitable CloudFront edge location to lower the latency If the content can be cached (for a few seconds minutes or even days) and is already stored in a particular edge location CloudFront delivers it immediately If the content should not be cached has expired or isn’t currently in that edge location CloudFront retrieves content from one or more sources of truth referred to as the origin(s) (in this case the Lightsail instance) in the CloudFront configuration This retrieval takes place over optimized network connections which work to speed up the delivery of content on your website Apart from improving the end user experience the model discussed also reduces the load on your origin servers and has the potential to create s ignificant cost savings Static content offload This includes CSS JavaScript and image files —either those that are part of your WordPress themes or those media files uploaded by the content administrators All these files can be stored in Amazon Simple S torage Service (Amazon S3) using a plugin such as W3 Total Cache and served to users in a scalable and highly available manner Amazon S3 offers a highly scalable reliable and low latency data storage infrastruc ture at low cost which is accessible via REST APIs Amazon S3 redundantly stores your objects not only on multiple devices but also across multiple facilities in an AWS Region providing exceptionally high levels of durability This has the positive sid e effect of offloading this workload from your Lightsail instance and letting it focus on dynamic content generation This reduces the load on the server and is an important step towards creating a stateless architecture (a prerequisite before implementing automatic scaling ) You can subsequently configure Amazon S3 as an origin for CloudFront to improve delivery of those static assets to users around the world Although WordPress isn’t integrated with Amazon S3 and CloudFront out of the box a variety of plugins add support for these services (for example W3 Total Cache) Amazon Web Services Best Practices for WordPres s on AWS Page 6 Dynamic content Dynamic content includes the output of server side WordPress PHP scripts Dynamic content can also be served via CloudFront by configuring the WordPress websit e as an origin Since dynamic content include s personalized content you need to configure CloudFront to forward certain HTTP cookies and HTTP headers as part of a request to your custom origin server CloudFront uses the forwarded cookie values as part of the key that identifies a unique object in its cache To ensure that you maximize the caching efficiency configure CloudFront to forward only those HTTP cookies and HTTP headers that actually vary the content (not cookies that are only used on the client side or by thirdparty applications for example for web analytics) Whole website delivery via Amazon CloudFront The preceding figure includes two origins: one for static content and another for dynamic content For implementation details refer to Appendix A: CloudFront configuration and Appendix B: Plugins insta llation and configuration CloudFront uses standard cache control headers to 
identify if and for how long it should cache specific HTTP responses The same cache control headers are also used by web browsers to decide when and for how long to cache content locally for a more optimal end user experience (for example a css file that is already downloaded will not be redownloaded every time a returning visitor views a page) You can configure cache control headers on the web server level (for example via htaccess files or modifications of the httpdconf file) or install a WordPress plugin (for example W3 Total Cache) to dictate how those headers are set for both static and dynamic content Amazon Web Services Best Practices for WordPres s on AWS Page 7 Database caching Database caching can significantly reduce latency and increase throughput for read heavy application workloads like WordPress Application performance is improved by storing frequently accessed pieces of data in memory for low latency access (for example the results of input/output ( I/O)intensive databa se queries) When a large percentage of the queries is served from the cache the number of queries that need to hit the database is reduced resulting in a lower cost associated with running the database Although WordPress has limited caching capabilitie s out ofthebox a variety of plugins support integration with Memcached a widely adopted memory object caching system The W3 Total Cache plugin is a good example In the simplest scenarios you install Memcached on your web server and capture the result as a new snapshot In this case you are responsible for the administrative tasks associated with running a cache Another option is to take advantage of a managed service such as Amazon ElastiCache and avoid that operational burden ElastiCache makes it easy to deploy operate and scale a distributed in memory cache in the cloud You can find information about how to connect to your ElastiCache cluster nodes in the Amazon ElastiCache documentation If you are using Lightsail and wish to access an ElastiCache cluster in your AWS account privately you can do so by usin g VPC peering For instructions to enable VPC peering refer to Set up Amazon VPC peering to work with AWS resources outside of Amazon Lightsail Bytecode caching Each time a PHP script is run it gets parsed and compiled By using a PHP bytecode cache the output of the PHP compilation is stored in RAM so the same script doesn’t have to be compiled again and again This reduces the overhead related to running PHP scripts resulting in better performance and lower CPU requirements A bytecode cache can be installed on any Lightsail instance that hosts WordPress and can greatly reduce its load For PHP 55 and later AWS recommend s the use of OPcache a bundled extension with that PHP version Note that OPcache is enabled by default in the Bitnami WordPress Lightsail template so no further action is required Amazon Web Services Best Practices for WordPres s on AWS Page 8 Elastic deploymen t There are many scenarios where a single server deployment may not be sufficient for your website In these situations you need a multi server scalable architecture Reference architecture The Hosting WordPress on AWS reference architecture available on GitHub outlines best practices for deploying WordPress on AWS and includes a set of AWS CloudFormation templates to get you up and running quickly The following architecture is based on tha t reference architecture The rest of this section review s the reasons behind the architectural choices The based AMI in the GitHub was changed from 
Amazon Linux1 to Amazon Linux2 in July 2021 However deployment templates at S3 were not changed yet It is recommended to use templates at GitHub if there is an issue to deploy the reference architecture with templates at S3 Reference architecture for hosting WordPress on AWS Amazon Web Services Best Practices for WordPres s on AWS Page 9 Architecture components The preceding reference architecture illustrates a complete best practice deployment for a WordPress website on AWS • It starts with edge caching in Amazon CloudFront (1) to cache content close to end users for faster delivery • CloudFront pulls static content from an S3 bucket (2) and dynamic content from an Application Load Balancer (4) in front of the web instances • The web instances run in an Auto Scaling group of Amazon EC2 instances (6) • An ElastiCache cluster (7) caches frequently queried data to speed up responses • An Amazon Aurora MySQL instance (8) hosts the WordPress database • The WordPress EC2 instances access s hared WordPress data on an Amazon EFS file system via an EFS Mount Target (9) in each Availability Zone • An Internet Gateway (3) enable s communication between resources in your VPC and the internet • NAT Gateways (5) in each Availability Zone enable EC2 ins tances in private subnets (App and Data) to access the internet Within the Amazon VPC there exist two types of subnets: public ( Public Subnet ) and private ( App Subnet and Data Subnet ) Resources deployed into the public subnets will receive a public IP address and will be publicly visible on the internet The Application Load Balancer (4) and a bastion host for administration are deployed here Resources deployed into the private subnets receive only a pri vate IP address and are not publicly visible on the internet improving the security of those resources The WordPress web server instances (6) ElastiCache cluster instances (7) Aurora MySQL database instances (8) and EFS Mount Targets (9) are all deplo yed in private subnets The remainder of this section covers each of these considerations in more detail Scaling the web tier To evolve your single server architecture into a multi server scalable architecture you must use five key components: Amazon Web Services Best Practices for WordPres s on AWS Page 10 • Amazon EC2 instances • Amazon Machine Images (AMIs) • Load balancers • Automatic scaling • Health checks AWS provides a wide variety of EC2 instance types so you can choose the best server configuration for both performance and cost Generally speaking the compute optimiz ed (for example C4) instance type may be a good choice for a WordPress web server You can deploy your instances across multiple Availability Zones within a n AWS Region to increase the reliability of the overall architecture Because you have complete con trol of your EC2 instance you can log in with root access to install and configure all of the software components required to run a WordPress website After you are done you can save that configuration as an AMI which you can use to launch new instances with all the customizations that you've made To distribute end user requests to multiple web server nodes you need a load balancing solution AWS provides this capability through Elastic Load Balancing a highly available service that distributes traffic to multiple EC2 instances Because your website is serving content to your users via HTTP or HTTPS we recommend that you make use of the Application Load Balancer an application layer load balancer with content routing and the ability to 
run multiple WordPress websites on different domains if required Elastic Load Balancing supports distribution of requests across multiple Availability Zones within an AWS Region You can also configure a health check so that the Application Load Balancer automatically stops sending traffic to individual instances that have failed (for example due to a hardware problem or software crash) AWS recommend s using the WordPress admin login page (/wploginphp ) for the health check because this page confirm s both that the web server is running and that the web server is confi gured to serve PHP files correctly You may choose to build a custom health check page that checks other dependent resources such as database and cache resources For more information refer to Health checks for your target groups in the Application Load Balancer Guide Amazon Web Services Best Practices for WordPres s on AWS Page 11 Elasticity is a key characteristic of the AWS Cloud You can launch more compute capacity (for example web servers) when yo u need it and run less when you don't AWS Auto Scaling is an AWS service that helps you automate this provisioning to scale your Amazon EC2 capacity up or down according to conditions you define with no need for manual intervention You can configure AWS Auto Scaling so that the number of EC2 instances you’re using increases seamlessly during demand spikes to maintain performance and decreases automatically when traffic diminishes so as to minimize costs Elastic Load Balancing also supports dynamic addition and removal of Amazon EC2 hosts from the load balancing rotation Elastic Load Balancing itself also dynamically increases and decreases the load balancing capacity to adjust to traffic demands with no manual intervention Stateless web tier To take advantage of multiple web servers in an automatic scaling configuration your web tier must be stateless A stateless application is one that needs no knowledge of previous interactions and stores no session information In the case of WordPress this means that all end users receive the same response regardless of which web server processed their request A stateless application can scale horizontally since any request can be serviced by any of the a vailable compute resources (web server instances) When that capacity is no longer required any individual resource can be safely terminated (after running tasks have been drained) Those resources do not need to be aware of the presence of their peers —all that is required is a way to distribute the workload to them When it comes to user session data storage the WordPress core is completely stateless because it relies on cookies that are stored in the client’s web browser Session storage isn’t a concern unless you have installed any custom code (for example a WordPress plugin) that instead relies on native PHP sessions However WordPress was originally designed to run on a single server As a result it stores some data on the server’s local file system When running WordPress in a multi server configuration this creates a problem because there is inconsistency across web servers For example if a user uploads a new image it is only stored on one of the servers This demonstrates why we need to improve the default WordPress running configuration to move important data to shared storage The best practice architecture Amazon Web Services Best Practices for WordPres s on AWS Page 12 has a database as a separate layer outside the web server and makes use of shared storage to store user uploads themes 
and plugin s Shared storage (Amazon S3 and Amazon EFS) By default WordPress stores user uploads on the local file system and so isn’t stateless Therefore you need to move the WordPress installation and all user customizations (such as configuration plugins them es and user generated uploads) into a shared data platform to help reduce load on the web servers and to make the web tier stateless Amazon Elastic File System (Amazon EFS) provides scalable network fil e systems for use with EC2 instances Amazon EFS file systems are distributed across an unconstrained number of storage servers enabling file systems to grow elastically and enabling massively parallel access from EC2 instances The distributed design of Amazon EFS avoids the bottlenecks and constraints inherent to traditional file servers By moving the entire WordPress installation directory onto an EFS file system and mounting it into each of your EC2 instances when they boot your WordPress site and all its data is automatically stored on a distributed file system that isn’t dependent on any one EC2 instance making your web tier completely stateless The benefit of this architecture is that you don’t need to install plugins and themes on each new insta nce launch and you can significantly speed up the installation and recovery of WordPress instances It is also easier to deploy changes to plugins and themes in WordPress as outlined in the Deployment considerations section of this document To ensure optimal performance of your website when running from an EFS file system check the recommended configuration settings for Amazon EFS and OPcache on the AWS Reference Architecture for WordPress You also have the option to offload all static assets such as image CSS and JavaScript files to an S3 bucket with CloudFront caching in front The mechanism for doing this in a multi server architecture is exactly the same as for a single server architecture as discussed in the Static content section of this whitepaper The benefits are the same as in the single server architecture —you can offload the work associated with serving your static assets to Amazon S3 and CloudFront enabling your web servers to focus on generating dynamic content onl y and serve more user requests per web server Amazon Web Services Best Practices for WordPres s on AWS Page 13 Data tier (Amazon Aurora and Amazon ElastiCache) With the WordPress installation stored on a distributed scalable shared network file system and static assets being served from Amazon S3 you can focus your attention on the remaining stateful component: the database As with the storage tier the database should not be reliant on any single server so it cannot be hosted on one of the web servers Instead host the WordPress database on Amazon Aurora Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud that combines the performance and availability of high end commercial databases with the simplicity and cost effectivenes s of open source databases Aurora MySQL increases MySQL performance and availability by tightly integrating the database engine with a purpose built distributed storage system backed by SSD It is faulttolerant and self healing replicates six copies of your data across three Availability Zones is designed for greater than 9999% availability and nearly continuously backs up your data in Amazon S3 Amazon Aurora is designed to automatically detect database crashes and restart without the need for crash recovery or to rebuild the database cache Amazon Aurora 
provides a number of instance types to suit different application profiles including memory optimized and burstable instances To improve the performance of your database you can select a large instance type to provide more CPU and memory resources Amazon Aurora automatically handles failover between the primary instance and Aurora Replicas so that your applications can resume database operations as quickly as possible without manual administrative intervention Failover typically takes less than 30 seconds After you have created at least one Aurora Replica connect to your primary instance using the cluster endpoint to enable your application to automatically fail over in the event the primary instance fails You can create up to 15 low latency read replica s across three Availability Zones As your database scales your database cache will also need to scale As discussed previously in the Database caching section of this document ElastiCache has features to scale the cache across multiple nodes in an ElastiCache cluster and across multiple Availability Zones in a Region for improved availability As you scale your ElastiCache cluster ensure that you configure your caching plugin to connect using the configuration endpoint so that WordPress can use new cluster nodes as they are added and stop Amazon Web Services Best Practices for WordPres s on AWS Page 14 using old cluster nodes as they are removed You must also set up your web servers to use the ElastiCache Cluster Client for PHP and update your AMI to store this change WordPress high availability by Bitnami on AWS Quick Start s Quick Starts are built by AWS solutions architects and partners to help you deploy popular technologies on AWS based on AWS best practices for security and high availability These accelerators reduce hundreds of manual procedures into just a few steps so you can build your production environment quickly and start using it immediately Each Quick Start includes AWS CloudFormation templates that automate the deployment and a guide that discusses the architecture and provides step bystep deployment instructions WordPress High Availability by Bitnami on AWS Quick Starts sets up the following configurable environment on AWS: • A highly available architecture that spans two Availability Zones* • A virtual private cloud (VPC) configured with publ ic and private subnets according to AWS best practices This provides the network infrastructure for your deployment* • An internet gateway to provide access to the internet This gateway is used by the bastion hosts to send and receive traffic* • In the pub lic subnets managed NAT gateways to allow outbound internet access for resources in the private subnets* • In the public subnets Linux bastion hosts in an Auto Scaling group to allow inbound Secure Shell (SSH) access to EC2 instances in public and private subnets* • Elastic Load Balancing to distribute HTTP and HTTPS requests across multiple WordPress instances • In the private subnets EC2 instances that host the WordPress application on Apache These instances are provisioned in an Auto Scaling group to en sure high availability • In the private subnets Amazon Aurora DB instances administered by Amazon Relational Database Service (Amazon RDS) Amazon Web Services Best Practices for WordPres s on AWS Page 15 • In the private subnets Amazon Elastic File System (Amazon EFS) to share assets (such as plugins themes and images ) across WordPress instances • In the private subnets Amazon ElastiCache for Memcached nodes for caching database 
queries * The template that deploys the Quick Start into an existing VPC skips the tasks marked by asterisks and prompts you for your existing VPC configuration WordPress high availability architecture by Bitnami A detailed description of deploying WordPress High Availability by Bitnami on AWS is beyond the scope of this document For configuration and options refer to WordPress High Availability by Bitnami on AWS Amazon Web Services Best Practices for WordPres s on AWS Page 16 Conclusion AWS presents many architecture options for running WordPress The simplest option is a single server installatio n for low traffic websites For more advanced websites site administrators can add several other options each one representing an incremental improvement in terms of availability and scalability Administrators can select the features that most closely m atch their requirements and their budget Contributors Contributors to this document include : • Paul Lewis Solutions Architect Amazon Web Services • Ronan Guilfoyle Solutions Architect Amazon Web Services • Andreas Chatzakis Solutions Architect Manager Ama zon Web Services • Jibril Touzi Technical Account Manager Amazon Web Services • Hakmin Kim Migration Partner Solutions Architect Amazon Web Services Document revisions Date Description October 19 2021 Updated to modify Reference Architecture and AWS for WordPress plugin October 2019 Updated to include new deployment approaches and AWS for WordPress plugin February 2018 Updated to clarify Amazon Aurora product messaging December 2017 Updated to include AWS services launched since first publication December 2014 First publication Amazon Web Services Best Practices for WordPres s on AWS Page 17 Appendix A: CloudFront configuration To get optimal performance and efficiency when using Amazon CloudFront with your WordPress website it’s important to configure the website correctly for the different types of content being served Origins and behaviors An origin is a location where CloudFront sends requests for content that it distributes through the edge locations Depending on your implemen tation you can have one or two origins One for dynamic content (the Lightsail instance in the single server deployment option or the Application Load Balancer in the elastic dep loyment option ) using a custom origin You may have a second origin to direct CloudFront to for your static content In the preceding reference architecture this is an S3 bucket When you use Amazon S3 as an orig in for your distribution you need to use a bucket policy to make the content publicly accessible Behaviors enable you to set rules that govern how CloudFront caches your content and in turn determine how effective the cache is Behaviors enable you to control the protocol and HTTP methods your website is accessible by They also enable you to control whether to pass HTTP headers cookies or query strings to your backend (and if so which ones) Behaviors apply to specific URL path patte rns CloudFront distribution creation Create a CloudFront web distribution by following the Distribution the default Origin and Behavior automatically created will be used for dynamic content Create four additional behaviors to further customize the way both static and dynamic requests are treated The following table summarizes the configuration properties for the five behaviors You can also skip this manual configuration and use the AWS for WordPress plugin covered in Appendix B: Plugins Installation and Configuration which is the easiest way to configure 
CloudFront to accelerate your WordPress site.
Table 1: Summary of configuration properties for CloudFront behaviors
Static (paths: wp-content/*, wp-includes/*)
• Protocols: HTTP and HTTPS
• HTTP methods: GET, HEAD
• HTTP headers forwarded: NONE
• Cookies forwarded: NONE
• Query strings: YES (used for invalidation)
Dynamic, admin (paths: wp-admin/*, wp-login.php)
• Protocols: Redirect to HTTPS
• HTTP methods: ALL
• HTTP headers forwarded: ALL
• Cookies forwarded: ALL
• Query strings: YES
Dynamic, front end (path: default (*))
• Protocols: HTTP and HTTPS
• HTTP methods: ALL
• HTTP headers forwarded: Host, CloudFront-Forwarded-Proto, CloudFront-Is-Mobile-Viewer, CloudFront-Is-Tablet-Viewer, CloudFront-Is-Desktop-Viewer
• Cookies forwarded: comment_*, wordpress_*, wp-settings*
• Query strings: YES
For the default behavior, AWS recommends the following configuration:
• Allow the Origin Protocol Policy to Match Viewer, so that if viewers connect to CloudFront using HTTPS, CloudFront connects to your origin using HTTPS as well, achieving end-to-end encryption. Note that this requires you to install a trusted SSL certificate on the load balancer. For details, refer to Requiring HTTPS for Communication Between CloudFront and Your Custom Origin.
• Allow all HTTP methods, since the dynamic portions of the website require both GET and POST requests (for example, to support POST for the comment submission forms).
• Forward only the cookies that vary the WordPress output, for example wordpress_*, wp-settings*, and comment_*. You must extend that list if you have installed any plugins that depend on other cookies not in the list.
• Forward only the HTTP headers that affect the output of WordPress, for example Host, CloudFront-Forwarded-Proto, CloudFront-Is-Desktop-Viewer, CloudFront-Is-Mobile-Viewer, and CloudFront-Is-Tablet-Viewer:
o Host allows multiple WordPress websites to be hosted on the same origin.
o CloudFront-Forwarded-Proto allows different versions of pages to be cached depending on whether they are accessed via HTTP or HTTPS.
o CloudFront-Is-Desktop-Viewer, CloudFront-Is-Mobile-Viewer, and CloudFront-Is-Tablet-Viewer allow you to customize the output of your themes based on the end user’s device type.
• Forward all the query strings to cache based on their values, because WordPress relies on these; they can also be used to invalidate cached objects.
If you want to serve your website under a custom domain name (not *.cloudfront.net), enter the appropriate URIs under Alternate Domain Names in the Distribution Settings. In this case, you also need an SSL certificate for your custom domain name. You can request SSL certificates via AWS Certificate Manager and configure them against a CloudFront distribution.
Now create two more cache behaviors for dynamic content: one for the login page (path pattern: wp-login.php) and one for the admin dashboard (path pattern: wp-admin/*). These two behaviors have the exact same settings, as follows:
• Enforce a Viewer Protocol Policy of HTTPS Only
• Allow all HTTP methods
• Cache based on all HTTP headers
• Forward all cookies
• Forward and cache based on all query strings
The reason behind this configuration is that this section of the website is highly personalized and typically has just a few users, so caching efficiency isn’t a primary concern. The focus is to keep the configuration simple, to ensure maximum compatibility with any installed plugins, by passing all cookies and headers to the origin.
The AWS for WordPress plugin covered in Appendix B automatically creates a CloudFront distribution that meets the preceding configuration.
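To make these settings concrete, here is a minimal boto3 sketch of how the admin-dashboard behavior from Table 1 might be expressed against the CloudFront API, using the classic ForwardedValues model that matches the settings described above. The distribution ID and origin ID are placeholders, newer distributions may use cache policies instead, and a production DistributionConfig contains more fields than shown here, so treat this as a shape illustration rather than a drop-in change.

import boto3

cloudfront = boto3.client("cloudfront")

# Cache behavior mirroring the wp-admin/* settings above: HTTPS only, all HTTP
# methods allowed, and all cookies, headers, and query strings forwarded.
admin_behavior = {
    "PathPattern": "wp-admin/*",
    "TargetOriginId": "wordpress-dynamic-origin",   # placeholder origin ID
    "ViewerProtocolPolicy": "https-only",
    "AllowedMethods": {
        "Quantity": 7,
        "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
    },
    "ForwardedValues": {
        "QueryString": True,
        "Cookies": {"Forward": "all"},
        "Headers": {"Quantity": 1, "Items": ["*"]},   # forward all headers
    },
    "TrustedSigners": {"Enabled": False, "Quantity": 0},
    "MinTTL": 0,
}

# Append the behavior to an existing distribution and push the update.
dist_id = "REPLACE_WITH_DISTRIBUTION_ID"
current = cloudfront.get_distribution_config(Id=dist_id)
config = current["DistributionConfig"]
behaviors = config["CacheBehaviors"]
behaviors["Items"] = behaviors.get("Items", []) + [admin_behavior]
behaviors["Quantity"] = len(behaviors["Items"])
cloudfront.update_distribution(DistributionConfig=config, Id=dist_id, IfMatch=current["ETag"])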
By default, WordPress stores everything locally on the web server, which is block storage (Amazon EBS) for single-server deployment and file storage (Amazon EFS) for elastic deployment. In addition to reducing storage and data transfer costs, moving static assets to Amazon S3 offers scalability, data availability, security, and performance. There are several plugins that make it easy to move static content to Amazon S3; one of them is W3 Total Cache, also covered in Appendix B.
Appendix B: Plugins installation and configuration
AWS for WordPress plugin
The AWS for WordPress plugin is the only WordPress plugin written and actively maintained by AWS. It enables customers to easily configure Amazon CloudFront and AWS Certificate Manager (ACM) for WordPress websites for enhanced performance and security. The plugin uses Amazon Machine Learning (ML) services to translate content into one or more languages, produce audio versions of each translation, and read WordPress websites through Amazon Alexa devices. The plugin is already installed in the WordPress High Availability by Bitnami on AWS Quick Start.
Plugin installation and configuration
To install the plugin:
1. To use the AWS for WordPress plugin, you must create an IAM user for the plugin. An IAM user is a person or application under an AWS account that has permission to make API calls to AWS services.
2. You need an AWS Identity and Access Management (IAM) role or an IAM user to control authentication and authorization for your AWS account. To prevent unauthorized users from gaining these permissions, protect the IAM user's credentials. Treat the secret access key like a password; store it in a safe place and don't share it with anyone. Like a password, rotate the access key periodically. If the secret access key is accidentally leaked, delete it immediately. Then you can create a new access key to use with the AWS for WordPress plugin.
3. In the Plugins menu of the WordPress admin panel, search for AWS for WordPress and choose Install Now.
4. If the plugin installation is not working, there may be a user permission problem. Connect to the WordPress web server and complete the following instructions to solve the issue:
a. Open the wp-config.php file in the WordPress install directory and write the following code at the end of the wp-config.php file:
define('FS_METHOD', 'direct');
b. Launch the following command to give writing permission:
chmod 777 <WordPress install directory>/wp-content
Warning: Keeping the writing permission as 777 is risky. If the permission is kept as 777, anyone can edit or delete this folder. Change the writing permission to 755 or below after completing the plugin work.
c. If the reference architecture is used, the WordPress install directory is /var/www/wordpress/<site directory>.
A detailed description of all AWS for WordPress settings is beyond the scope of this document. For configuration and options, refer to Getting started with the AWS for WordPress plugin.
Amazon CloudFront and AWS Certificate Manager
To set up CloudFront and AWS Certificate Manager:
1. On the plugin menu, choose CloudFront and enter the following parameters:
o Origin domain name: the DNS domain of the HTTP origin server where CloudFront gets your website's content (such as example.com)
o Alternate domain name (CNAME): the domain name that your visitors use for the accelerated website experience. AWS recommends using 'www' in front of the domain (such as www.example.com)
2. Choose Initiate Setup to start the configuration. The plugin automatically requests an SSL certificate for the CNAME via ACM. Once you validate the ACM token by updating the DNS records with the CNAME entries, the plugin will create a CloudFront distribution that meets the best practices defined in Appendix A.
Note: The AWS for WordPress plugin requires HTTPS for communication between CloudFront and your custom origin. Make sure your origin has an SSL certificate valid for the Origin domain name. For more information, refer to Using HTTPS with CloudFront.
Translate and vocalize your content
The AWS for WordPress plugin enables you to automatically translate text into different languages and convert the written content into multilingual audio formats. These features are powered by Amazon Machine Learning services.
Amazon Polly is a service that turns text into lifelike speech. With dozens of voices across a variety of languages, you can select the ideal voice and build engaging speech-enabled applications that work in many different countries. Use the plugin to create audio files in any of the voices and languages supported by Amazon Polly. Your visitors can stream the audio at their convenience using inline audio players and mobile applications.
By default, the plugin stores new audio files on your web server. You can choose to store the files on Amazon S3 or on Amazon CloudFront. Users have the same listening experience regardless of where you store your audio files. Only the broadcast location changes:
• For audio files stored on the WordPress server, files are broadcast directly from the server.
• For files stored in an S3 bucket, files are broadcast from the bucket.
• If you use CloudFront, the files are stored on Amazon S3 and are broadcast with CloudFront.
Broadcast location
Amazon Translate is a machine translation service that delivers fast, high-quality, and affordable language translation. Providing multilingual content represents a great opportunity for site owners. Although English is the dominant language of the web, native English speakers are a mere 26% of the total online audience. By offering written and audio versions of your WordPress content in multiple languages, you can meet the needs of a larger international audience.
You can configure the plugin to do the following:
• Automatically translate into different languages and create audio recordings of each translation for new content upon publication, or choose to translate and create recordings for individual posts
• Translate into different languages and create audio recordings for each translation of your archived content
• Use the Amazon Pollycast RSS feed to podcast audio content
Overview of content translation and text-to-speech
Podcasting with Amazon Pollycast
With Amazon Pollycast feeds, your visitors can listen to your audio content using standard podcast applications. RSS 2.0-compliant Pollycast feeds provide the XML data needed to aggregate podcasts by popular mobile podcast applications, such as iTunes, and podcast directories. When you install the AWS for WordPress plugin, you will find the option to enable generation of the XML feed in the Podcast configuration tab. There you will also find options to configure multiple optional properties. After enabling the functionality, you will receive a link to the feed.
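The plugin performs the translation and voicing steps for you; the following boto3 sketch only illustrates the kind of Amazon Translate and Amazon Polly calls involved behind the scenes. The post text, language codes, voice, and output file name are illustrative.

import boto3

translate = boto3.client("translate")
polly = boto3.client("polly")

post_text = "Hello from my WordPress blog!"

# Translate the post into Spanish (language codes are illustrative).
translated = translate.translate_text(
    Text=post_text,
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)["TranslatedText"]

# Convert the translated text into speech with a matching Polly voice.
speech = polly.synthesize_speech(
    Text=translated,
    VoiceId="Lucia",          # a Spanish voice; any supported voice works
    OutputFormat="mp3",
)

# Save the audio so it can be uploaded to the web server, Amazon S3, or CloudFront.
with open("post-es.mp3", "wb") as audio_file:
    audio_file.write(speech["AudioStream"].read())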
Reading your content through Amazon Alexa devices
You can extend WordPress websites and blogs through Alexa devices. This opens new possibilities for the creators and authors of websites to reach an even broader audience. It also makes it easier for people to listen to their favorite blogs by just asking Alexa to read them. To expose the WordPress website to Alexa, you must enable:
• The AWS for WordPress plugin
• The text-to-speech and Amazon Pollycast functionalities. These functionalities generate an RSS feed on your WordPress site, which is consumed by Amazon Alexa
• Amazon S3 as the default storage for your files in text-to-speech; it's important that your website uses a secure HTTPS connection to expose its feed to Alexa
The following diagram presents the flow of interactions and components that are required to expose your website through Alexa.
Flow of interactions required to expose WordPress websites through Alexa
1. The user invokes a new Alexa skill, for example by saying: "Alexa, ask Demo Blog for the latest update." The skill itself is created using one of the Alexa Skill Blueprints. This enables you to expose your skill through Alexa devices even if you don't have deep technical knowledge.
2. The Alexa skill analyzes the call and the RSS feed that was generated by the AWS for WordPress plugin, and then returns the link to the audio version of the latest article.
3. Based on the link provided by the feed, Alexa reads the article by playing the audio file saved on Amazon S3.
Refer to the plugin page on the WordPress marketplace for a detailed step-by-step guide for installing and configuring the plugin and its functionalities.
Static content configuration
By default, WordPress stores everything locally on the web server, which is block storage (Amazon EBS) for single-server deployment and file storage (Amazon EFS) for elastic deployment. In addition to reducing storage and data transfer costs, moving static assets to Amazon S3 offers scalability, data availability, security, and performance.
In this example, the W3 Total Cache (W3TC) plugin is used to store static assets on Amazon S3. However, there are other plugins available with similar capabilities. If you want to use an alternative, you can adjust the following steps accordingly. The steps only refer to features or settings relevant to this example. A detailed description of all settings is beyond the scope of this document. Refer to the W3 Total Cache plugin page at wordpress.org for more information.
IAM user creation
You need to create an IAM user for the WordPress plugin to store static assets in Amazon S3. For instructions, refer to Creating an IAM User in Your AWS Account.
Note: IAM roles provide a better way of managing access to AWS resources, but at the time of writing the W3 Total Cache plugin does not support IAM roles. Take a note of the user security credentials and store them in a secure manner; you need these credentials later.
Amazon S3 bucket creation
1. First, create an Amazon S3 bucket in the AWS Region of your choice. For instructions, refer to Creating a bucket. Enable static website hosting for the bucket by following the guide for Configuring a static website on Amazon S3.
2. Create an IAM policy to provide the IAM user created previously access to the specified S3 bucket, and attach the policy to the IAM user. For instructions to create the following policy, refer to Managing IAM Policies.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1389783689000",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::wp-demo",
        "arn:aws:s3:::wp-demo/*"
      ]
    }
  ]
}

3. Install and activate the W3TC plugin from the WordPress admin panel.
4. Browse to the General Settings section of the plugin's configuration and ensure that both Browser Cache and CDN are enabled.
5. From the dropdown list in the CDN configuration, choose Origin Push: Amazon CloudFront (this option has Amazon S3 as its origin).
6. Browse to the Browser Cache section of the plugin's configuration and enable the expires, cache control, and entity tag (ETag) headers.
7. Also activate the Prevent caching of objects after settings change option, so that a new query string is generated and appended to objects whenever any settings are changed.
8. Browse to the CDN section of the plugin's configuration and enter the security credentials of the IAM user you created earlier, as well as the name of the S3 bucket.
9. If you are serving your website via the CloudFront URL, enter the distribution domain name in the relevant box. Otherwise, enter one or more CNAMEs for your custom domain name(s).
10. Finally, export the media library and upload the wp-includes, theme files, and custom files to Amazon S3 using the W3TC plugin. These upload functions are available in the General section of the CDN configuration page.
Static origin creation
Now that the static files are stored on Amazon S3, go back to the CloudFront configuration in the CloudFront console and configure Amazon S3 as the origin for static content. To do that, add a second origin pointing to the S3 bucket you created for that purpose. Then create two more cache behaviors, one for each of the two folders (wp-content and wp-includes) that should use the S3 origin rather than the default origin for dynamic content. Configure both in the same manner:
• Serve HTTP GET requests only.
• Amazon S3 does not vary its output based on cookies or HTTP headers, so you can improve caching efficiency by not forwarding them to the origin via CloudFront.
• Despite the fact that these behaviors serve only static content (which accepts no parameters), you will forward query strings to the origin. This is so that you can use query strings as version identifiers to instantly invalidate, for example, older CSS files when deploying new versions. For more information, refer to the Amazon CloudFront Developer Guide.
Note: After adding the static origin behaviors to your CloudFront distribution, check the order to ensure the behaviors for wp-admin/* and wp-login.php have higher precedence than the behaviors for static content. Otherwise, you may see strange behavior when accessing your admin panel.
Appendix C: Backup and recovery
Recovering from failure in AWS is faster and easier to do compared to traditional hosting environments. For example, you can launch a replacement instance in minutes in response to a hardware failure, or you can make use of automated failover in many of our managed services to negate the impact of a reboot due to routine maintenance. However, you still need to ensure you are backing up the right data in order to successfully recover it. To reestablish the availability of a WordPress website, you must be able to recover the following components:
• Operating system (OS) and services installation and configuration (Apache, MySQL, and so on)
• WordPress application code and
configuration • WordPress themes and plugins • Uploads (for example media files for posts) • Database content (posts comments and so on ) AWS provides a variety of methods for backing up and restoring your web application data and assets This whitepaper previously discussed making use of Lightsail snapshots to protect all data stored on the instance’s local storage If your WordPress website runs off the Lightsail instance only regular Lightsail snapshots should be sufficient for you to recover your WordPres s website in its entirety However you will still lose any changes Amazon Web Services Best Practices for WordPres s on AWS Page 30 applied to your website since the last snapshot was taken if you do restore from a snapshot In a multi server deployment you need to back up each of the components discussed earlier usin g different mechanisms Each component may have a different requirement for backup frequency for example the OS and WordPress installation and configuration will change much less frequently than user generated content and therefore can be backed up les s frequently without losing data in the event of a recovery To back up the OS and services installation and configuration and the WordPress application code and configuration you can create an AMI of a properly configured EC2 instance AMIs can serve tw o purposes: to act as a backup of instance state and to act as a template when launching new instances To back up the WordPress application code and configuration you need to make use of AMIs and also Aurora backups To back up the WordPress themes and plugins installed on your website back up the Amazon S3 bucket or the Amazon EFS file system they are stored on • For themes and plugins stored in an S3 bucket you can enable Cross Region Replication so that all objects uploaded to your primary bucket are automatically replicated to your backup bucket in another AWS Region Cross Region Replication requires that Versioning is enabled on both your source and destination buckets which provides you with an additional layer of protection and enable s you to revert to a previous version of any given object in your bucket • For themes and plugins stored on an EFS file system you can create an AWS Data Pipeline to copy data from your production EFS file system to another EFS file system as outlined in the documentation page Using AWS Backup w ith Amazon EFS You can also back up an EFS file system using any backup application you are already familiar with • To back up user uploads you should follow the steps outlined earlier for backing up the WordPress themes and plugins Amazon Web Services Best Practices for WordPres s on AWS Page 31 • To back up database co ntent you need to make use of Aurora backup Aurora backs up your cluster volume automatically and retains restore data for the length o f the backup retention period Aurora backups are nearly continuous and incremental so you can quickly restore to any point within the backup retention period No performance impact or interruption of database service occurs as backup data is being written You can specify a backup retention period from 1 to 35 days You can also create manual database snapshots which persist until you delete them Manual databa se snapshots are useful for long term backups and archiving Appendix D: Deploying new plugins and themes Few websites remain static In most cases you will periodically add publicly available WordPress themes and plugins or upgrade to a newer WordPress v ersion In other cases you will develop your 
own custom themes and plugins from scratch Any time you are making a structural change to your WordPress installation there is a certain risk of introducing unforeseen problems At the very least take a backu p of your application code configuration and database before applying any significant change (such as installing a new plugin) For websites of business or other value test those changes in a separate staging environment first With AWS it’s easy to replicate the configuration of your production environment and run the whole deployment process in a safe manner After you are done with your tests you can simply tear down your test environment and stop paying for those resources Later this whit epaper discuss es some WordPress specific considerations Some plugins write configuration information to the wp_options database table (or introduce database schema changes) whereas others create configuration files in the WordPress installation directory Beca use we have moved the database and storage to shared platforms these changes are immediately available to all of your running instances without any further effort on your part When deploying new themes in WordPress a little more effort may be required If you are only making use of Amazon EFS to store all your WordPress installation files then new themes will be immediately available to all running instances However if you are offloading static content to Amazon S3 you must process a copy of these t o the right bucket location Plugins like W3 Total Cache provide a way for you to manually initiate that task Alternatively you could automate this step as part of a build process Amazon Web Services Best Practices for WordPres s on AWS Page 32 Because theme assets can be cached on CloudFront and at the browser you need a way to invalidate older versions when you deploy changes The best way to achieve this is by including some sort of version identifier in your object This identifier can be a query string with a date time stamp or a random string If you use the W3 Total Cache plugin you can update a media query string that is appended to the URLs of media files
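If you do automate that step as part of a build process, a small deployment script can copy new theme assets to the S3 bucket and refresh the cached copies. The following is a minimal sketch using the AWS SDK for Python (Boto3); it is not part of the W3TC workflow described above, and the bucket name, distribution ID, and local theme path are illustrative placeholders. Versioned query strings, as discussed earlier, remain the preferred way to invalidate individual assets.

    import os
    import time
    import boto3

    s3 = boto3.client("s3")
    cloudfront = boto3.client("cloudfront")

    BUCKET = "wp-demo"                        # placeholder bucket name
    DISTRIBUTION_ID = "E1EXAMPLE"             # placeholder CloudFront distribution ID
    THEME_DIR = "wp-content/themes/mytheme"   # local path to the theme being deployed

    # Upload every file in the theme directory to the matching S3 key.
    for root, _, files in os.walk(THEME_DIR):
        for name in files:
            local_path = os.path.join(root, name)
            s3.upload_file(local_path, BUCKET, local_path.replace(os.sep, "/"))

    # Invalidate the theme path so CloudFront fetches the new copies from S3.
    cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/wp-content/themes/mytheme/*"]},
            "CallerReference": str(int(time.time())),
        },
    )

A script like this can run as the final stage of a build pipeline, after the change has been verified in a staging environment.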
General
Serverless_Architectures_with_AWS_Lambda
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Serverless Architectures with AWS Lambda Overview and Best Practices November 2017 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers © 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contract ual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction What Is Serverless? 1 AWS Lambda —the Basics 2 AWS Lamb da—Diving Deeper 4 Lambda Function Code 5 Lambda Function Event Sources 9 Lambda Function Configuration 14 Serverless Best Practices 21 Serverless Architecture Best Practices 21 Serverless Development Best Practices 34 Sample Serverless Architectures 42 Conclusion 42 Contributors 43 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract Since its introduction at AWS re:Invent in 2014 AWS Lambda has continued to be one of the fast est growing AWS services With it s arrival a new application architecture paradigm was created— referred to as serverless AWS now provides a number of different services that allow you to build full application stacks without the need to manage any servers Use cases like web or mobile backends realtime data processing chatbots and virtual assistants Internet of Things (IoT) backends and more can all be fully serverless For the logic layer of a serverless application you can execute your business logic using AWS Lambda Developers and organizations are finding that AWS Lambda is enabling much faster development speed and experimentation than is possible when deploying applications in a traditional server based environment This whitepaper is meant to provide you with a broad overview of AWS Lamb da its features and a slew of recommendations and best practices for building your own serverless applications on AWS This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 1 Introduction What Is Serverless ? 
Serverless most often refers to serverless applications Serverless applications are ones that don't require you to provision or manage any servers You can focus on your core product and business logic instead of responsibilities like operating system ( OS) access control OS patching provisioning right sizing scaling and availability By building your application on a serverless platform the platform manages these responsibilities for you For service or platform to be considered serverless it shoul d provide the following capabilities : • No server management – You don’t have to provision or maintain any servers There is no software or runtime to install maintain or administer • Flexible scaling – You can scale your application automatically or by adjusting its capacity through toggling the units of consumption (for example throughput memory) rather than units of individual servers • High availability – Serverless applications have built in availability and fault to lerance You don't need to architect for these capabilities because the services running the application provide them by default • No idle capacity – You don't have to pay for idle capacity There is no need to pre provision or over provision capacity for things like compute and storage T here is no charge when your code is n’t running The AWS Cloud provides many different services that can be components of a serverless application These include capabilities for : • Compute – AWS Lambda 1 • APIs – Amazon API Gateway2 • Storage – Amazon Simple Storage Service (Amazon S3 )3 • Databases –Amazon DynamoDB4 • Interprocess messaging – Amazon Simple Notification Service ( Amazon SNS)5 and Amazon Simple Queue Service ( Amazon SQS)6 • Orchestration – AWS Step Functions7 and Amazon CloudWatch Events8 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 2 • Analytics – Amazon Kinesis9 This whitepaper will focus on AWS Lambda the compute layer of your serverless application where your code is executed and the AWS developer tools and services that enable best practices when building and maintaining serverless applications with Lambda AWS Lambda—the Basics Lambda is a high scale provision free serverless compute offering based on functions It provides t he cloud logic layer for your application Lambda functions can be trigg ered by a variety of events that occur on AWS or on supporting third party services They enabl e you to build reactive event driven systems When there are multiple simultaneous events to respond to Lambda simply runs more copies of the function in para llel Lambda functions scale precisely with the size of the workload down to the individual request Thus the likelihood of having an idle server or container is extremely low Architectures that use Lambda functions are designed to reduce wasted capacit y Lambda can be described as a type of serverless Function asaService (FaaS) FaaS is one approach to building event driven computing systems It relies on functions as the unit of deployment and execution Serverless FaaS is a type of FaaS where no virtual machines or containers are present in the programming model and where the vendor provides provision free scalability and built in reliability Figure 1 shows t he relationship among event driven computing FaaS and serverless FaaS This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & 
Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 3 Figure 1: The relationship among event driven computing FaaS and serverless FaaS With Lambda you can run code for virtually any type of application or backend service Lambda run s and scale s your code with high availability Each Lambda function you create contains the code you want to execute the configuration that defines how your code is executed and optionally one or more event sources that detect events and invoke your function as they occur These elements are covered in more detail in the next section An example event source is API Gateway which can invoke a Lambda function anytime an API method created with API Gateway receives an HTTPS request Another example is Amazon SNS which has the ability to invoke a Lambda function anytime a new message is posted to an SNS topic Many event source options can trigger your Lambda function For the full list see this documentat ion10 Lambda also provide s a RESTful service API which includes the ability to directly invoke a Lambda function 11 You can use this API to execute your code directly without confi guring another event source You don’t need to write any code to integrate an event source with your Lambda function manage any of the infrastructure that detects events and delivers them to your function or manage scaling your Lambda function to match the number of events that are delivered You can focus on your application logic and configure the event sources that cause your logic to run This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 4 Your La mbda function runs within a (simplified) architecture that looks like the one shown in Figure 2 Figure 2: Simplified architecture of a running Lambda function Once you configure an event source for your function your code is invoked when the event occurs Your code can execute any business l ogic reach out to external web services integrate with other AWS services or anything else your application requires All of the same capabilities and software design principles that you’re used to for your language of choice will apply when using Lambd a Also because of the inherent decoupling that is enforced in serverless applications through integrating Lambda functions and event sources it ’s a natural fit to build microservices using Lambda functions With a basic understanding of serverless princ iples and Lambda you might be ready to start writing some code The following resources will help you get started with Lambda immediately : • Hello World tutorial: http://docsawsamazoncom/lambda/latest/dg/get started create functionhtml12 • Serverless workshops and walkthroughs for building sample applications: https://githubcom/awslabs/aws serverless workshops13 AWS Lambda—Diving Deeper The remainder of this whitepaper will help you understand the components and features of Lambda followed by best practices for various aspects of building and owning serverless applications using Lambda This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 5 Let’s begin our deep dive by further expanding and explaining each of the major components of Lambda that we described in the introduction: function 
code event sources and function configuration Lambda Function Code At its core you use Lambda to execute code This can be code that you’ ve written in any of the languages supported by Lam bda (Java Nodejs Python or C# as of this publication) as well as any code or packages you’ve uploaded alongside the code that you’ve written You’re free to bring any librari es artifacts or compiled native binaries that can execute on top of the runtime environment as part of your function code package If you want you can even execute code you’ve written in another programming language (PHP Go SmallTalk Ruby etc) as long as you stage and invoke that code from within one of the support languages in the AWS Lambda runtime environment (see this tutorial )14 The Lambda runtime environment is based on an Amazon Linux AMI (see current environment details here ) so you should compile and test the components you plan to run inside of Lambda within a matching environment15 To help you perform this type of testing prior to running within Lambda AWS provides a set of to ols called AWS SAM Local to enable local testing of Lambda functions16 We discuss these tools in the Serverless Development Best Practices section of this whitepaper The Function Code Package The function code package contains all of the assets you want to have available locally upon execution of your code A package will at minimum include the code function you want the Lambda se rvice to execute when your function is invoked However it might also contain other assets that your code will reference upon execution for example addition al files classes and libraries that your code will import binaries that you would like to execute or configuration files that your code might reference upon invocation The maximum size of a function code package is 50 MB compressed and 250MB extracted at the time of this publication (For the full list o f AWS Lambda l imits see this documentation 17) When you create a Lambda function (through the AWS Management Console or using the CreateFunction API) you can referenc e the S3 bucket and object This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 6 key where you’ve uploaded the package 18 Alternatively you can upload the code package directly when you create the function Lambda will then store your code package in an S3 bucket manage d by the service The same options are available when you publish updated code to existing Lambda functions (through the UpdateFunctionCode API)19 As events occur your code package will be downloaded from the S3 bucket installed in the Lambda runtime environment and invoked as needed This happens on demand at the scale required by the number of events triggering your function within an environm ent ma naged by Lambda The Handler When a Lambda function is invoked code execution begins at what is called the handler The handler is a specific code method (Java C#) or function (Nodejs Python) that you’ve created and included in your package You specify the handler when creating a Lambda function Each language supported by Lamb da has its own requirements for how a function handler can be defined and referenced within the package The following links will help you get started with each o f the supported languages Language Example Handler Definition Java20 MyOutput output handlerName(MyEvent event Context context) { } Nodejs21 exportshandlerName 
= function(event context callback) { // callback parameter is optional } Python22 def handler_name(event context): return some_value This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 7 Language Example Handler Definition C#23 myOutput HandlerName( MyEvent event ILambdaContext context) { } Once the handler is successfully invoked inside your Lambda f unction the runtime environment belongs to the code you’ve written Your Lambda function is free to execute any logic you see fit driven by the code you’ve written that starts in the handler This means you r handler can call other methods and functions within the files and classes you’ve uploaded Your code can import third party libraries that you’ve uploaded and install and execute native binaries that you’ve uploaded (as long as they can run on Amazon Linux ) It can also interact with other AWS services or make API requests to web ser vices that it depends on etc The Event Object When your Lambda function is invoked in one of the supported languages one of the parameters provided to your handler function is an event object The event differ s in structure and contents depending o n which event source created it The contents of the event parameter include all of the data and metadata your Lambda function needs to drive its logic For example an event created by API Gateway will contain details related to the HTTPS request that was made by the API client (for example path query st ring request body ) whereas an event created by Amazon S3 when a new object is created will include details about the bucket and the new object The Context Object Your Lambda function is also provided with a context object The context object allows your function code to interact with the Lambda execution environment The contents and structure of the context object vary based on the language runtime your Lambda function is using but at minimum it will contain: • AWS RequestId – Used to track specific invocations of a Lambda function (important for error reporting or when contacting AWS Support) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 8 • Remaining time – The amount of time in milliseconds that remain befo re your function timeout occurs (Lambda functions can run a maximum of 300 seconds as of this publishing but you can configure a shorter timeout) • Logging – Each language runtime provides the ability to stream log statements to Amazon CloudWatch Logs T he context object contain s information about which C loudWatch Logs stream your log statements will be sent to For more information about how logging is handled in each language runtime see the following : o Java24 o Nodejs25 o Python26 o C#27 Writing Code for AWS Lambda —Statelessness and Reuse It’s important to understand the central tenant when writing code for Lambda: your code cannot make assumptions about stat e This is because Lambda fully manag es when a new function container will be created and invoked for the first time A container could be getting invoked for the first time for a number of reasons For example the events triggering your Lambda function a re increasing in concurrency beyond the number of containers previously created for your function an event is triggering your Lambda 
function for the first time in several minutes, etc. While Lambda is responsible for scaling your function containers up and down to meet actual demand, your code needs to be able to operate accordingly. Although Lambda won't interrupt the processing of a specific invocation that's already in flight, your code doesn't need to account for that level of volatility. This means that your code cannot make any assumptions that state will be preserved from one invocation to the next. However, each time a function container is created and invoked, it remains active and available for subsequent invocations for at least a few minutes before it is terminated. When subsequent invocations occur on a container that has already been active and invoked at least once before, we say that invocation is running on a warm container. When an invocation occurs for a Lambda function that requires your function code package to be created and invoked for the first time, we say the invocation is experiencing a cold start.
Figure 3: Invocations of warm function containers and cold function containers
Depending on the logic your code is executing, understanding how your code can take advantage of a warm container can result in faster code execution inside of Lambda. This in turn results in quicker responses and lower cost. For more details and examples of how to improve your Lambda function performance by taking advantage of warm containers, see the Best Practices section later in this whitepaper. Overall, each language that Lambda supports has its own model for packaging source code and possibilities for optimizing it. Visit this page to get started with each of the supported languages.28
Lambda Function Event Sources
Now that you know what goes into the code of a Lambda function, let's look at the event sources, or triggers, that invoke your code. While Lambda provides the Invoke API that enables you to directly invoke your function, you will likely only use it for testing and operational purposes.29 Instead, you can associate your Lambda function with event sources occurring within AWS services that will invoke your function as needed. You don't have to write, scale, or maintain any of the software that integrates the event source with your Lambda function.
Invocation Patterns
There are two models for invoking a Lambda function:
• Push Model – Your Lambda function is invoked every time a particular event occurs within another AWS service (for example, a new object is added to an S3 bucket).
• Pull Model – Lambda polls a data source and invokes your function with any new records that arrive at the data source, batching new records together in a single function invocation (for example, new records in an Amazon Kinesis or Amazon DynamoDB stream).
Also, a Lambda function can be executed synchronously or asynchronously. You choose this using the parameter InvocationType that's provided when invoking a Lambda function. This parameter has three possible values:
• RequestResponse – Execute synchronously.
• Event – Execute asynchronously.
• DryRun – Test that the invocation is permitted for the caller, but don't execute the function.
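As an illustration of these invocation types, the following sketch calls the Invoke API directly with the AWS SDK for Python (Boto3). The function name, payload fields, and region are placeholders rather than values from this paper, and direct invocation like this is typically reserved for testing and operational checks.

    import json
    import boto3

    lambda_client = boto3.client("lambda", region_name="us-east-1")
    payload = {"orderId": "12345", "action": "process"}

    # Synchronous call: the response contains the function's return value.
    sync_response = lambda_client.invoke(
        FunctionName="my-example-function",
        InvocationType="RequestResponse",
        Payload=json.dumps(payload),
    )
    print(json.loads(sync_response["Payload"].read()))

    # Asynchronous call: Lambda queues the event and returns immediately.
    async_response = lambda_client.invoke(
        FunctionName="my-example-function",
        InvocationType="Event",
        Payload=json.dumps(payload),
    )
    print(async_response["StatusCode"])  # 202 indicates the event was accepted

    # DryRun: verifies that the caller is permitted to invoke, without executing.
    lambda_client.invoke(
        FunctionName="my-example-function",
        InvocationType="DryRun",
    )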
Each event source dictate s how your function can be invoked The event source is also responsible for crafting its own event parameter as we discussed earlier The following tables provide details about how some of the more popular event sources can integrate with your La mbda functions You can find the full list of supported event sources here 30 Push Model Event Source s Amazon S3 Invocation Model Push Invocation Type Event Description S3 event notifications (such as ObjectCreated and ObjectRemoved) can be configured to invoke a Lambda function as they are published This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 11 Example Use Cases Create image modifications (thumbnails different resolutions watermarks etc ) for images that users upload to an S3 bucket through your application Process raw data uploaded to an S3 bucket and move transformed data to another S3 bucket as part of a big data pipeline Amazon API Gateway Invocation Model Push Invocation Type Event or RequestResponse Description The API methods you create with API Gateway can use a Lambda function as their service backend If you choose Lambda as the integration type for an API method your Lambda function is invoked synchronously (the response of your Lambda function serve s as the API response) With this integration type API Gateway can also act as a simple proxy to a Lambda function API Gateway will perform no processing or transformation on its own and will pass along all the contents of the r equest to Lambda If you want an API to invoke your function asynchronously as an event and return immediately with an empty response you can use API Gateway as an AWS Service Proxy and integrate with the Lambda Invoke API providing the Event InvocationType in the request header This is a great option if your API clients don’t need any information back from the request and you want the fastest response time possible (This option is great for pushing user interactions on a website or app to a service backend for analysis ) Example Use Cases Web service backends (web application mobile app microservice architectures etc) Legacy service integration (a Lambda function to transform a legacy SOAP backend into a new modern REST API) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 12 Any other use cases where HTTPS is the appropriat e integration mechanism between application components Amazon SNS Invocation Model Push Invocation Type Event Description Messages that are published to an SNS topic can be delivered as events to a Lambda function Example Use Cases Automated responses to CloudWatch alarms Processing of events from other services (AWS or otherwise) that can natively publish to SNS topics AWS CloudFormation Invocation Model Push Invocation Type RequestResponse Description As part of deploying AWS CloudFormation stacks you can specify a Lambda function as a custom resource to execute any custom commands and provide data back to the ongoing stack creation Example Use Cases Extend AWS CloudFormation capabilities to include AWS service features not yet natively supported by AWS CloudFormation Perform custom validation or reporting at key stages of the stack creation/update/delete process Amazon CloudWatch Events 
Invocation Model Push This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 13 Invocation Type Event Description Many AWS services publish resource state changes to CloudWatch Events Those events can then be filtered and routed to a Lambda function for automated responses Example Use Cases Event driven operations automation (for example take action each time a new EC2 instance is launched notify an appropriate mailing list when AWS Trusted Advisor reports a new status change) Replacement for tasks previously accomplished with cron (CloudWatch Events supports scheduled events) Amazon Alexa Invocation Model Push Invocation Type RequestResponse Description You can write Lambda f unctions that act as the service backend for Amazon Alexa Skills When an Alexa user interacts with your skill Alexa’s Natural Language Understand and Processing capabilities will deliver their interactions to your Lambda functions Example Use Cases An Alexa skill of your own Pull Model Event Source s Amazon DynamoDB Invocation Model Pull Invocation Type Request/Response Description Lambda will poll a DynamoDB stream multiple times per second and invoke your Lambda function with the batch of updates that have been published to the stream since the last batch You can configure the batch size of each invocation This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 14 Example Use Cases Application centric workflows that should be triggered as changes occur in a DynamoDB table (for example a new user registered an order was placed a friend request was accepted etc) Replication of a DynamoDB table to another region (for disaster recover y) or another service (shipping as logs to an S3 bucket for backup or analysis) Amazon Kinesis Streams Invocation Model Pull Invocation Type Request/Response Description Lambda will poll a Kinesis stream once per second for each stream shard and invoke your Lambda function with the next records in the shard You can define the batch size for the number of records delivered to your function at a time as well as the number of Lambda function containers executing concurrently (number of stream shards = number of concurrent function containers) Example Use Cases Realtime data processing for big data pipelines Realtime alerting/monitoring of streaming log statements or other application events Lambda Function Configuration After you write and package your Lambda function code on top of choosing which event sources will trigger your function you have various configuration options to set that define how your code is executed within Lambda Function Memory To define the resources allocated to y our executing Lambda function you’re provided with a single dial to increase/decrease function resources: memory/RAM You can allocate 128 MB of RAM up to 15 GB of RAM to your Lambda function Not only will this dictate the amount of memory available to This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 15 your function code during execution but the same dial will also influence the CPU and n etwork resources available to your function Selecting 
the appropriate m emory allocation is a very important step when optimizing the price and performance of any Lambd a function Please review the best practices later in this whitepaper for more specifics on optimizing performance Versions and Aliases There are times where you might need to reference or revert your Lambda function back to code that was previously deployed Lambda lets you version your AWS Lambda f unctions Each and every Lambda f unction has a default version built in: $LATEST You can address the most recent code that has been uploaded to your Lambda function through the $LATEST version You can ta ke a snapshot of the code that’s currently referred to by $LATEST and create a numbered version through the PublishVersion API31 Also when updating your function code thro ugh the UpdateFunctionCode API there is an optional Boolean parameter publish32 By setting publish: true in your request Lambda will create a new Lambda function version incremented from the last published version You can invoke each version of your Lambda function independently at any time Each version has its own Amazon Resource Name (ARN) referenced like this: arn:aws:lambda:[region]:[account] :function:[fn_name] :[version] When calling the Invoke API or creating an event source for your Lambda function you can also specify a specific version of the Lambda function to be executed33 If you don ’t provide a version number or use the ARN that doesn’t contain the version number $LATEST is invoked by default It’s important to know that a Lambda f unction container is specific to a particular version of your function So for example if there are already several function containers deployed and available in the Lambda runtime environment for version 5 of the f unction version 6 of the same function will not be able to execute on top of the existing version 5 containers —a different set of containers will be installed and managed for each function version This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 16 Invoking your Lambda functions by their version number s can be useful during testing and operational activities However we don’t recommend having your Lambda function be triggered by a specific version number for real application traffic Doing so would require you to update all of the triggers and clients invoking your Lambda function to point at a new function version each time you wanted to update your code Lambda aliases should be used here instead A function alias allows you to invoke and point event sources to a specific Lambda function version However you can update what version that alias refers to at any time For example your event sources and clients that are invoking version number 5 through the alias live may cut over to version number 6 of your function as soon as you update the live alias to instead point at version number 6 Each alias can be referred to within the ARN similar to when referring to a function version number: arn:aws:lambda:[region]:[account] :function:[fn_name] :[alias] Note : An alias is simply a pointer to a specific version number This means that if you have multiple different aliases pointed to the same Lambda function version at once requests to each alias are executed on top of the same set of installed function containers This is important to understand so that you don’ t mistakenly point multiple aliases at the same 
function version number if requests for each alias are intended to be processed separately Here are s ome example suggestions for Lambda aliases and how you might use them: • live/prod/active – This could represent the Lambda function version that your production triggers or that clients are integrating with • blue/green – Enable the blue/green deployment pattern through use of aliases • debug – If you’ve created a testing stack to debug your applications it can integrate with an alias like this when you need to perform a deeper analysis This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 17 Creating a good documented strategy for your use of function aliases en able s you to have sophisticated serverless deployment and operations practices IAM Role AWS Identity and Access Management (IAM) provides the capability to create IAM policies that define permissions for interacting with AWS s ervices and APIs34 Policies can be associated with IAM roles Any access key ID and secret access key generate d for a particular role is authorized to perform the actions defined in the policies attached to that role For more information about IAM best practices see this documentation 35 In the context of Lambda you assign an IAM role (called an execution role) to each of your Lambda functions The IAM p olicies attached to that role define what AWS s ervice APIs your function code is authorized to interact with There are t wo benefits: • Your source code is n’t required to perform any AWS credential management or rotation to interact with the AWS APIs Simply using the AWS SDKs and the default credential provider result s in your Lambda function automatically using temporary cre dentials associated with the execution role assigned to the function • Your source code is decoupled from its own security posture If a developer attempts to change your Lambda function code to integrate with a service that the function doesn’t have access to that integration will fail due to the IAM role assigned to your function (Unless they have used IAM credentials that are separate from the execution role you should use static code analysis tools to ensure that no AWS credentials are present in your source code) It’s important to assign each of your Lambda functions a specific separate and least privilege IAM role This strategy ensures that each Lambda f unction can evolve independently without increasing the authorization scope of any other Lambda functions Lambda Function Permissions You can define which push model event sources are allowed to invoke a Lambda function through a concept called permissions With permissions you declare a function policy that lists the AWS Resource Names (ARNs) that are allowed to invoke a function This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 18 For pull model event sources (for example Kinesis streams and DynamoDB streams) you need to ensure that the appropriate actions are permitted by the IAM execution role assig ned to your Lambda function AWS provides a set of managed IAM roles associated with each of the pull based event sources if you don’t want to manage the permissions required However to ensure least privilege IAM policies you should create your own IAM roles with 
resource specific policies to permit access to just the intended event source Network Configuration Executing your Lambda function occurs through the use of the Invoke API that is part of the AWS Lambda service API s; so there is no direct inbo und network access to your function to manage However y our function code might need to integrate with external dependencies (internal or publically hosted web services AWS services databases etc) A Lambda function has two broad options for outbound network connectivity: • Default – Your Lambda function communicate s from inside a virtual private cloud (VPC) that is managed by Lambda It can connect to the internet but not to any privately deployed resources running within your own VPCs • VPC – Your Lamb da function communicate s through an Elastic Network Interface (ENI) that is provisioned within the VPC and subnets you choose with in your own account These ENIs can be assigned security groups and traffic will route based on the route tables of the subne ts those ENIs are placed within —just the same as if an EC2 instance were placed in the same subnet If your Lambda function does n’t require connectivity to any privately deployed resources we recommend you select the d efault networking option Choosing the VPC option will require you to manage: • Selecting appropriate subnets to ensure multiple Availability Zones are being used for the purposes of high availability • Allocating the appropriate number of IP a ddresses to each subnet to manage capacity • Implementing a VPC network design that will permit your Lambda functions to have the connectivity and security required This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 19 • An increase in Lambda cold start times if your Lambda function invocation patterns require a new ENI to be created just in time (ENI creation can take many seconds today ) However if your use case requires private connectivity use the VPC option with Lambda F or deeper guidance if you plan to deploy your Lambda functions with in your own VPC see this documentation 36 Environment Variables Software Development Life Cycle (SDLC) best practice dictates that developers separate their code and their config You can achieve this by using environment variables with Lambda Environment variables for Lambda functions enable you to dynamically pass data to your function code and libraries without making changes to your code Environment variables are key value pairs that you create and modify as par t of your function configuration By default these variables are encrypted at rest For any sensitive information that will be stored as a Lambda function environment variable we recommend you encrypt those values using the AWS Key Management Service (AWS KMS) prior to function creation storing the encrypted cyphertext as the variable value Then have your Lambda function decrypt that variable in memory at execution time Here are some e xamples of how you might decide to use environment variables: • Log settings ( FATAL ERROR INFO DEBUG etc) • Dependency and/or database connection strings and credentials • Feature flags and toggles Each version of your Lambda f unction can have its own e nvironment variable values However once the values are established for a numbered Lambda funct ion version they cannot be changed To make changes to your Lambda function environment variables you can change 
them to the $LATEST version and then publish a new version that contains the new environment variable values This enables you to always keep track of which e nvironment variable values are associated with a previous version of your function This is often import ant during a rollback procedure or when triaging the past state of an application This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 20 Dead Letter Queues Even in the ser verless world exceptions can still occur (For example perhaps you’ve uploaded new function code that does n’t allow the Lambda event to be parsed successfully or there is an operational event within AWS that is preventing the function from being invoked ) For asynchronous event sources (the event InvocationType ) AWS owns the client software that is responsible for invoking your function AWS does not have the ability to synchronously notify you if the invocations are successful or not as invocations occur If an exception occurs when trying to invoke your function in these models the invocation will be attempted two more times (with back off between the retries) After the third attempt the event is either discarded or placed onto a dead letter queu e if you configured one for the function A dead letter queue is either an SNS topic or SQS queue that you have designated as the destination for all failed invocation events If a failure event occurs the use of a dead letter queue allow s you to retain just the messages that failed to be processed during the event Once your function is able to be invoked again you can target those failed events in the dead letter queue for reprocessing The mechanisms for reprocessing/retrying the function invocation attempts placed on to your dead l etter queue is up to you For more information about dead letter queues see this tutorial 37 You should use dead letter queues if it ’s important to your application that all invocations of your Lambda function complete eventually even if execution is delayed Timeout You can designate the maximum amount of time a single function execution is allowed to complete before a timeout is returned The maximum timeout for a Lambda function is 300 seconds at the time of this publication which means a single invocation of a Lambda function cannot execute longer than 300 seconds You should not always set the timeout for a Lambda function to the maximum There are many cases where an application should fail fast Because your Lambda function is billed based on execution time in 100 ms increments avoiding lengthy timeouts for functions can prevent you from being billed whil e a function is simply waiting to timeout (perhaps an external dependency is unavailable you’ve accidentally programmed an infinite loop or another similar scenario) Also once execution completes or a timeout occurs for your Lambda function and a respo nse is returned all execution ceases This includes any background This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 21 processes subprocesses or asynchronous processes that your Lambda function might have spawned during execution So you should not rely on background or asynchronous processes for critica l activities Your code should ensure those activities are completed prior 
to timeout or returning a response from your function Serverless Best Practices Now that we’ve covered the components of a Lambda based serverless application let’s cover some rec ommended best practices There are many SDLC and server based architecture best practices that are also true for serverless architectures : eliminate single points of failure test changes prior to deployment encrypt sensitive data etc However achieving best practices for serverless architectures can be a different task because of how different the operating model is You don ’t have access to or concerns about an operating system or any lower level components in the infrastructure Because of this your focus is solely on your own application code/architecture the development processes you follow and the features of the AWS services your application leverages that enable you to follow best practices First we review a set of best practices for designing your serverless architecture according to the AWS Well Architected Framework Then we cover some best practices and recommendations for your development process when building serverless applications Serverless Architecture Best Practices The AWS Well Architected Framework includes strategies to help you compare your workload against our best practices and obtain guidance to produce stable and eff icient systems so you can focus on functional requirements 38 It is based on five pillars: security reliability performance efficiency cost optimization and operational excellence Many of the guidelines in the framework apply to serverless applications However there are specific implementation steps or patterns that are unique to serverless architectures In the following sections we cover a set of recommendations that are serverless specific for each of the Well Architected pillars This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 22 Security Best Pr actices Designing and implementing security into your applications should always be priority number one —this doesn’t change with a serverless architecture The major difference for securing a serverless application compared to a server hosted application is obvious —there is no server for you to secure However you still need to think about your application ’s security There is still a shared responsibility model for serverless security With Lambda and serverless architectures rather than implementing application se curity through things like anti virus/malware software file integrity monitoring intrusion detection/prevention systems firewalls etc you ensure security best practices through writing secure application code t ight access control over source code changes and following AWS security best practices for each of the services that your Lambda functions integrate with The following is a brief list of serverless security best practices that should apply to many serverless use cases al though your own specific security and compliance requirements should be well understood and might include more than we describe here • One IAM R ole per Function Each and every Lambda function within your AWS a ccount should have a 1:1 rela tionship with an IAM role Even if multiple functions begin with exactly the same policy always decouple your IAM roles so that you can ensure least privilege policies for the future of your function For example if you shared the IAM role of a Lambda f 
unction that needed access to an AWS KMS key across multiple Lambda functions then all of those functions would now have access to the same encryption key • Temporary AWS Credentials You should not have any long lived AWS credentials included within your Lambda function code or configuration (This is a great use for static code analysis tools to ensure it never occurs in your code base!) For most cases the IAM execution role is all that’s required to integrate with other AWS services Simply create AWS service clients within your code through the AWS SDK without providing any credentials The SDK automatically manage s the retrieval and rotation of the temporary This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 23 credentials generated for your role The following is an example usin g Java AmazonDynamoDB client = AmazonDynamoDBClientBuilderdefaultClient(); Table myTable = new Table(client "MyTable"); This code snippet is all that’s required for the AWS SDK for Java to create an object for interacting with a DynamoDB table that automatically sign its requests to the DynamoDB APIs using the temporary IAM creden tials assigned to your function39 However t here might be cases where the execution role is not sufficient for the type of access your function requires This can be the case for some cross account integrations your Lambda function might perform or if you have user specific access control policies through com bining Amazon Cognito40 identity roles and DynamoDB fineg rained access control 41 For cross account us e cases you should grant your execution role should be granted access to the AssumeRole API within the AWS Security Token Service and integrate d to retrieve temporary access credentials 42 For user specific access control policies your function should be provided with the user identity in question and then integrate d with the Amazon Cognito API GetCredentialsForIdentity 43 In this case it’s imperative that you ensure your code appropriately manages these credentials so that you are leveraging the correct credentials for each user associated with that invocation of your Lambda function It’s common for an application to encrypt and store these per user credentials in a place like DynamoDB or Amazon ElastiCache as part of user session data so that they can be retrieved with reduced latency and more scalability than regenerating them for subsequent requests for a returning user44 • Persisting Secret s There are cases where you may have long lived secrets (for example database credentials dependency service access keys encryption keys etc) that your Lambda function needs to use We recommend a few options for the lifecycle of secrets management in your application : o Lambda Environment Variables with Encryption Helpers45 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 24 Advantages – Provided directly to your function runtime environment minimizing the latency and code required to retrieve the secret Disadvantages – E nvironment variables are coupled to a function version Updat ing an environment variable requires a new function version (more rigid but does provide stable version history as well) o Amazon EC2 Systems Manager Parameter Store46 Advantages – Fully 
decoupled from your Lambda functions to provide maximum flexibility for how secrets and functions relate to each other Disadvantag es – A request to Parameter Store is required to retrieve a parameter/secret While not substantial this does add latency over environment variables as well as an additional service dependency and requires writing slightly more code • Using Secrets Secret s should always only exist in memory and never be logged or written to disk Write code that manages the rotation of secrets in the event a secret needs to be revoked while your application remains running • API Authorization Using API Gateway as the event source for your Lambda function is unique from the other AWS service event source options in that you have ownership of authentication and authorization of your API clients API Gateway can perform much of the heavy lifting by providing things like native AWS SigV4 authentication 47 generated client SDKs 48 and custom authorizers 49 However you’re still responsible for ensuring that the security posture of your APIs meets the bar you’ve set For more information about API s ecurity best practices see this documentation 50 • VPC Security If your Lambda function requires access to resources deployed inside a VPC you should apply network security best practices through use of least privilege s ecurity groups Lambda function specific subnets network ACLs and route tables that allow traffic coming only from your Lambda functions to reach intended destinations This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Serverless Architectures with AWS Lambda Page 25 Keep in mind that these practices and policies impact the way that your Lambda functions connect to their dependencies Invoking a Lambda function still occurs through event sources an d the Invoke API (neither are affected by your VPC configuration) • Deployment Access Control A call to the UpdateFunctionCode API is analogous to a code deployment Moving an alias through the UpdateAlias API to that newly published version is analogous to a code release Treat access to the Lambda APIs that enable function code/aliases with extreme sensitivity As such you should eliminate direct user access to these APIs for any functions (production functions at a minimum) to remove the possibility of human error Making code changes to a Lambda function should be achieved through automation With that in mind the entry point for a deployment to Lambda become s the place where your continuous integration/continuous delivery ( CI/CD ) pipeline is initiated This may be a release branch in a repository an S3 bucket where a new code package is uploaded that triggers an AWS CodePipeline pipeline or somewhere else that’s specific to your organization and processes51 Wherever it is it becomes a new place where you should enforce stringent access control mechanisms that fit your team structure and roles Reliability Best Practices Serverless applications can be built to support mission critical use case s Just as with any mission critical application it’s important that you architect with the mindset that Werner Vogels CTO Amazoncom advocates for “E verything fails all the time” For serverless applications this could mean introducing logic bugs into your code failing application dependencies and other similar application level issues that you should try and prevent and account for using existing best practices that will still apply 
to your serverless applications. For infrastructure-level service events, where you are abstracted away from the event, you should understand how you have architected your application to achieve high availability and fault tolerance.
High Availability
High availability is important for production applications. The availability posture of your Lambda function depends on the number of Availability Zones it can be executed in. If your function uses the default network environment, it is automatically available to execute within all of the Availability Zones in that AWS Region; nothing else is required to configure high availability for your function in the default network environment. If your function is deployed within your own VPC, the subnets (and their respective Availability Zones) define whether your function remains available in the event of an Availability Zone outage. Therefore, it's important that your VPC design include subnets in multiple Availability Zones. If an Availability Zone outage occurs, it's important that your remaining subnets continue to have adequate IP addresses to support the number of concurrent functions required. For information on how to calculate the number of IP addresses your functions require, see this documentation.52
Fault Tolerance
If the application availability you need requires you to take advantage of multiple AWS Regions, you must take this into account up front in your design. It's not a complex exercise to replicate your Lambda function code packages to multiple AWS Regions. What can be complex, like most multi-region application designs, is coordinating a failover decision across all tiers of your application stack. This means you need to understand and orchestrate the shift to another AWS Region, not just for your Lambda functions but also for your event sources (and dependencies further upstream of your event sources) and persistence layers. In the end, a multi-region architecture is very application specific. The most important thing you can do to make a multi-region design feasible is to account for it in your design up front.
Recovery
Consider how your serverless application should behave in the event that your functions cannot be executed. For use cases where API Gateway is used as the event source, this can be as simple as gracefully handling error messages and providing a viable, if degraded, user experience until your functions can be successfully executed again. For asynchronous use cases, it can be very important to ensure that no function invocations are lost during the outage period. To ensure that all received events are processed after your function has recovered, take advantage of dead letter queues and implement logic to process events placed on that queue after recovery occurs.
Performance Efficiency Best Practices
Before we dive into performance best practices, keep in mind that if your use case can be achieved asynchronously, you might not need to be concerned with the performance of your function (other than to optimize costs). You can leverage one of the event sources that use the Event invocation type, or use the pull-based invocation model. Those methods alone might allow your application logic to proceed while Lambda continues to process the event separately.
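For example, a caller that does not need the function's result can invoke it asynchronously and move on. The following Python sketch (using boto3; the function name and payload are illustrative assumptions, not part of this paper) shows the Event invocation type in use:

import json
import boto3

lambda_client = boto3.client("lambda")

def submit_order_event(order):
    # InvocationType="Event" queues the request and returns immediately;
    # Lambda processes the event separately from the caller.
    response = lambda_client.invoke(
        FunctionName="ProcessOrder",        # hypothetical function name
        InvocationType="Event",
        Payload=json.dumps(order).encode("utf-8"),
    )
    # For asynchronous invocations, a 202 status code means the event was accepted.
    return response["StatusCode"] == 202

The caller's own latency is then just the time to hand the event to Lambda, not the time to process it.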
If Lambda function execution time is something you want to optimize, the execution duration of your Lambda function will be primarily impacted by three things (in order of simplest to optimize): the resources you allocate in the function configuration, the language runtime you choose, and the code you write.
Choosing the Optimal Memory Size
Lambda provides a single dial to turn the amount of compute resources available to your function up and down: the amount of RAM allocated to your function. The amount of allocated RAM also impacts the amount of CPU time and network bandwidth your function receives. Simply choosing the smallest resource amount that runs your function adequately fast is an anti-pattern. Because Lambda is billed in 100 ms increments, this strategy might not only add latency to your application, it might even be more expensive overall if the added latency outweighs the resource cost savings. We recommend that you test your Lambda function at each of the available resource levels to determine the optimal level of price/performance for your application. You'll discover that the performance of your function should improve logarithmically as resource levels are increased. The logic you're executing will define the lower bound for function execution time, and there will also be a resource threshold where any additional RAM/CPU/bandwidth available to your function no longer provides any substantial performance gain. However, pricing increases linearly as the resource levels increase in Lambda. Your tests should find where the logarithmic function bends in order to choose the optimal configuration for your function.
The following graph shows how the ideal memory allocation to an example function can allow for both better cost and lower latency. Here, the additional compute cost per 100 ms for using 512 MB over the lower memory options is outweighed by the amount of latency reduced in the function by allocating more resources. But after 512 MB, the performance gains diminish for this particular function's logic, so the additional cost per 100 ms now drives the total cost higher. This leaves 512 MB as the optimal choice for minimizing total cost.
Figure 4: Choosing the optimal Lambda function memory size
The memory usage for your function is determined per invocation and can be viewed in CloudWatch Logs.53 On each invocation, a REPORT: entry is made, as shown below.
REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 Duration: 12.34 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 18 MB
By analyzing the Max Memory Used: field, you can determine whether your function needs more memory or whether you over-provisioned your function's memory size.
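One way to run such a test is to invoke the same function at several memory settings and compare the observed durations. The following Python sketch (boto3; the function name and payload are assumptions) illustrates the approach; note that it measures client-side round-trip time as a rough proxy, and the authoritative numbers remain the Duration values in the CloudWatch Logs REPORT entries:

import json
import time
import boto3

lambda_client = boto3.client("lambda")

def sweep_memory(function_name, payload, memory_sizes=(128, 256, 512, 1024)):
    results = {}
    for size in memory_sizes:
        lambda_client.update_function_configuration(
            FunctionName=function_name, MemorySize=size)
        time.sleep(5)  # crude wait for the configuration change to propagate
        start = time.time()
        lambda_client.invoke(FunctionName=function_name,
                             InvocationType="RequestResponse",
                             Payload=json.dumps(payload).encode("utf-8"))
        results[size] = round((time.time() - start) * 1000)  # rough duration in ms
    return results

Comparing these durations against the per-100 ms price at each memory size shows where the cost/latency curve bends for your function.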
Language Runtime Performance
Choosing a language runtime is obviously dependent on your level of comfort and skills with each of the supported runtimes. But if performance is the driving consideration for your application, the performance characteristics of each language on Lambda are what you might expect in any other runtime environment: the compiled languages (Java and .NET) incur the largest initial startup cost for a container's first invocation, but show the best performance for subsequent invocations. The interpreted languages (Node.js and Python) have very fast initial invocation times compared to the compiled languages, but can't reach the same level of maximum performance as the compiled languages do.
If your application use case is both very latency sensitive and susceptible to incurring the initial invocation cost frequently (very spiky traffic or very infrequent use), we recommend one of the interpreted languages. If your application does not experience large peaks or valleys within its traffic patterns, or does not have user experiences blocked on Lambda function response times, we recommend you choose the language you're already most comfortable with.
Optimizing Your Code
Much of the performance of your Lambda function is dictated by what logic you need your Lambda function to execute and what its dependencies are. We won't cover all the possible optimizations, because they vary from application to application. But there are some general best practices to optimize your code for Lambda. These are related to taking advantage of container reuse (as described in the earlier overview) and minimizing the initial cost of a cold start.
Here are a few examples of how you can improve the performance of your function code when a warm container is invoked (a brief sketch follows the cold start list below):
• After initial execution, store and reference locally any externalized configuration or dependencies that your code retrieves.
• Limit the re-initialization of variables/objects on every invocation (use global/static variables, singletons, etc.).
• Keep alive and reuse connections (HTTP, database, etc.) that were established during a previous invocation.
Finally, you should do the following to limit the amount of time that a cold start takes for your Lambda function:
1. Always use the default network environment unless connectivity to a resource within a VPC via private IP is required. This is because there are additional cold start scenarios related to the VPC configuration of a Lambda function (related to creating ENIs within your VPC).
2. Choose an interpreted language over a compiled language.
3. Trim your function code package to only its runtime necessities. This reduces the amount of time that it takes for your code package to be downloaded from Amazon S3 ahead of invocation.
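As a brief illustration of the container reuse guidance above, the following Python sketch initializes its SDK clients and configuration once in the global scope so that warm invocations can reuse them; the table name and parameter path are assumptions made for the example:

import os
import boto3

# Created once per container, then reused by every warm invocation.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "example-table"))  # hypothetical table
ssm = boto3.client("ssm")
cached_config = None

def get_config():
    # Fetch external configuration on first use only; later invocations reuse it.
    global cached_config
    if cached_config is None:
        cached_config = ssm.get_parameter(Name="/example/config")["Parameter"]["Value"]
    return cached_config

def handler(event, context):
    table.put_item(Item={"id": event["id"], "config": get_config()})
    return {"status": "stored"}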
Understanding Your Application Performance
To get visibility into the various components of your application architecture, which could include one or more Lambda functions, we recommend that you use AWS X-Ray.54 X-Ray lets you trace the full lifecycle of an application request through each of its component parts, showing the latency and other metrics of each component separately, as shown in the following figure.
Figure 5: A service map visualized by AWS X-Ray
To learn more about X-Ray, see this documentation.55
Operational Excellence Best Practices
Creating a serverless application removes many of the operational burdens that a traditional application brings with it. This doesn't mean you should reduce your focus on operational excellence. It means that you can narrow your operational focus to a smaller number of responsibilities and hopefully achieve a higher level of operational excellence.
Logging
Each language runtime for Lambda provides a mechanism for your function to deliver logged statements to CloudWatch Logs. Making adequate use of logs goes without saying and isn't new to Lambda and serverless architectures. Even though it's not considered best practice today, many operational teams depend on viewing logs as they are generated on the server an application is deployed on. That simply isn't possible with Lambda, because there is no server. You also don't have the ability to "step through" the code of a live, running Lambda function today (although you can do this with AWS SAM Local prior to deployment).56 For deployed functions, you depend heavily on the logs you create to inform an investigation of function behavior. Therefore, it's especially important that the logs you create find the right balance of verbosity to help track and triage issues as they occur, without demanding too much additional compute time to create them. We recommend that you use Lambda environment variables to create a LogLevel variable that your function can refer to so that it can determine which log statements to create at runtime. Appropriate use of log levels can ensure that you selectively incur the additional compute and storage cost only during an operational triage.
Metrics and Monitoring
Lambda, just like other AWS services, provides a number of CloudWatch metrics out of the box. These include metrics related to the number of invocations a function has received, the execution duration of a function, and others. It's best practice to create alarm thresholds (high and low) for each of your Lambda functions on all of the provided metrics through CloudWatch. A major change in how your function is invoked or how long it takes to execute could be your first indication of a problem in your architecture. For any additional metrics that your application needs to gather (for example, application error codes, dependency-specific latency, etc.), you have two options to get those custom metrics stored in CloudWatch or your monitoring solution of choice:
• Create a custom metric and integrate directly with the required API from your Lambda function as it's executing. This has the fewest dependencies and will record the metric as fast as possible. However, it does require you to spend Lambda execution time and resources integrating with another service dependency. If you follow this path, ensure that your code for capturing metrics is modularized and reusable across your Lambda functions instead of tightly coupled to a specific Lambda function.
• Capture the metric within your Lambda function code and log it using the provided logging mechanisms in Lambda. Then create a CloudWatch Logs metric filter on the function streams to extract the metric and make it available in CloudWatch. Alternatively, create another Lambda function as a subscription filter on the CloudWatch Logs stream to push filtered log statements to another metrics solution (see the sketch below). This path introduces more complexity and is not as near-real-time as the previous option for capturing metrics. However, it allows your function to create metrics more quickly through logging rather than making an external service request.
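A minimal Python sketch of the log-based approach, combined with the LogLevel environment variable mentioned earlier, might look like the following; the metric name and the CloudWatch Logs metric filter you would create for it are assumptions for the example:

import json
import logging
import os

logger = logging.getLogger()
# Let an environment variable control verbosity without a code change.
logger.setLevel(os.environ.get("LOG_LEVEL", "INFO"))

def handler(event, context):
    logger.debug("Full event payload: %s", json.dumps(event))
    records = event.get("Records", [])
    # Emit a structured line that a CloudWatch Logs metric filter could extract,
    # for example a filter on { $.metric = "RecordsProcessed" } pulling $.value.
    logger.info(json.dumps({"metric": "RecordsProcessed", "value": len(records)}))
    return {"processed": len(records)}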
Deployment
Performing a deployment in Lambda is as simple as uploading a new function code package, publishing a new version, and updating your aliases. However, these steps should only be pieces of your deployment process with Lambda. Each deployment process is application specific. To design a deployment process that avoids negatively disrupting your users or application behavior, you need to understand the relationship between each Lambda function and its event sources and dependencies. Things to consider are:
• Parallel version invocations – Updating an alias to point to a new version of a Lambda function happens asynchronously on the service side. There will be a short period of time during which existing function containers containing the previous source code package continue to be invoked alongside the new function version the alias has been updated to. It's important that your application continues to operate as expected during this process. One consequence is that any stack dependencies being decommissioned after a deployment (for example, database tables, a message queue, etc.) should not be decommissioned until after you've observed all invocations targeting the new function version.
• Deployment schedule – Performing a Lambda function deployment during a peak traffic time could result in more cold start times than desired. You should always perform your function deployments during a low-traffic period to minimize the immediate impact of the new/cold function containers being provisioned in the Lambda environment.
• Rollback – Lambda provides details about Lambda function versions (for example, created time, incrementing numbers, etc.). However, it doesn't logically track how your application lifecycle has been using those versions. If you need to roll back your Lambda function code, it's important for your processes to roll back to the function version that was previously deployed.
Load Testing
Load test your Lambda function to determine an optimum timeout value. It's important to analyze how long your function runs so that you can better identify problems with a dependency service that might increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling.
Triage and Debugging
Both logging to enable investigations and using X-Ray to profile applications are useful for operational triage. Additionally, consider creating Lambda function aliases that represent operational activities such as integration testing, performance testing, debugging, etc. It's common for teams to build out test suites or segmented application stacks that serve an operational purpose. You should build these operational artifacts to integrate with Lambda functions via aliases as well. However, keep in mind that aliases don't enforce a wholly separate Lambda function container. So an alias like PerfTest that points at function version number N will use the same function containers as all other aliases pointing at version N. You should define appropriate versioning and alias-updating processes to ensure separate containers are invoked where required.
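The version and alias operations discussed in this section map to a handful of API calls. The following Python sketch (function and alias names are assumptions) publishes a new version, points an alias at it, and shows how a rollback is simply another UpdateAlias call to the previously deployed version:

import boto3

lambda_client = boto3.client("lambda")
FUNCTION = "MyLambdaFunction"   # hypothetical function name

def release(alias="PROD"):
    new_version = lambda_client.publish_version(FunctionName=FUNCTION)["Version"]
    previous = lambda_client.get_alias(FunctionName=FUNCTION, Name=alias)["FunctionVersion"]
    lambda_client.update_alias(FunctionName=FUNCTION, Name=alias, FunctionVersion=new_version)
    # Keep the previous version on hand so a rollback is one call away.
    return new_version, previous

def rollback(previous_version, alias="PROD"):
    lambda_client.update_alias(FunctionName=FUNCTION, Name=alias, FunctionVersion=previous_version)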
Cost Optimization Best Practices
Because Lambda charges are based on function execution time and the resources allocated, optimizing your costs is focused on optimizing those two dimensions.
Right Sizing
As covered in Performance Efficiency, it's an anti-pattern to assume that the smallest resource size available to your function will provide the lowest total cost. If your function's resource size is too small, you could pay more because of a longer execution time than if more resources were available that allowed your function to complete more quickly. See the section Choosing the Optimal Memory Size for more details.
Distributed and Asynchronous Architectures
You don't need to implement all use cases through a series of blocking/synchronous API requests and responses. If you are able to design your application to be asynchronous, you might find that each decoupled component of your architecture takes less compute time to conduct its work than tightly coupled components that spend CPU cycles awaiting responses to synchronous requests. Many of the Lambda event sources fit well with distributed systems and can be used to integrate your modular and decoupled functions in a more cost-effective manner.
Batch Size
Some Lambda event sources allow you to define the batch size for the number of records that are delivered on each function invocation (for example, Kinesis and DynamoDB). You should test to find the optimal number of records for each batch size so that the polling frequency of each event source is tuned to how quickly your function can complete its task.
Event Source Selection
The variety of event sources available to integrate with Lambda means that you often have a variety of solution options available to meet your requirements. Depending on your use case and requirements (request scale, volume of data, latency required, etc.), there might be a non-trivial difference in the total cost of your architecture based on which AWS services you choose as the components that surround your Lambda function.
Serverless Development Best Practices
Creating applications with Lambda can enable a development pace that you haven't experienced before. The amount of code you need to write for a working and robust serverless application will likely be a small percentage of the code you would need to write for a server-based model. But with the new application delivery model that serverless architectures enable, there are new dimensions and constructs that your development processes must make decisions about: organizing your code base with Lambda functions in mind, moving code changes from a developer laptop into a production serverless environment, and ensuring code quality through testing even though you can't simulate the Lambda runtime environment or your event sources outside of AWS. The following are some development-centric best practices to help you work through these aspects of owning a serverless application.
Infrastructure as Code – the AWS Serverless Application Model (AWS SAM)
Representing your infrastructure as code
brings many benefits in terms of the auditability, automatability, and repeatability of managing the creation and modification of infrastructure. Even though you don't need to manage any infrastructure when building a serverless application, many components still play a role in the architecture: IAM roles, Lambda functions and their configurations, their event sources, and other dependencies. Representing all of these things natively in AWS CloudFormation would require a large amount of JSON or YAML, and much of it would be almost identical from one serverless application to the next.
The AWS Serverless Application Model (AWS SAM) enables you to have a simpler experience when building serverless applications and get the benefits of infrastructure as code. AWS SAM is an open-specification abstraction layer on top of AWS CloudFormation.57 It provides a set of command line utilities that enable you to define a full serverless application stack with only a handful of lines of JSON or YAML, package your Lambda function code together with that infrastructure definition, and then deploy them together to AWS. We recommend using AWS SAM combined with AWS CloudFormation to define and make changes to your serverless application environment.
There is a distinction, however, between changes that occur at the infrastructure/environment level and application code changes occurring within existing Lambda functions. AWS CloudFormation and AWS SAM aren't the only tools required to build a deployment pipeline for your Lambda function code changes. See the CI/CD section of this whitepaper for more recommendations about managing code changes for your Lambda functions.
Local Testing – AWS SAM Local
Along with AWS SAM, AWS SAM Local offers additional command line tools that you can add to AWS SAM to test your serverless functions and applications locally before deploying them to AWS.58 AWS SAM Local uses Docker to enable you to quickly test your developed Lambda functions using popular event sources (for example, Amazon S3, DynamoDB, etc.). You can locally test an API you define in your SAM template before it is created in API Gateway, and you can also validate the AWS SAM template that you created. By enabling these capabilities to run against Lambda functions still residing on your developer workstation, you can do things like view logs locally, step through your code in a debugger, and quickly iterate on changes without having to deploy a new code package to AWS.
Coding and Code Management Best Practices
When developing code for Lambda functions, there are some specific recommendations around how you should both write and organize code so that managing many Lambda functions doesn't become a complex task.
Coding Best Practices
Depending on the Lambda runtime language you build with, continue to follow the best practices already established for that language. While the environment that surrounds how your code is invoked has changed with Lambda, the language runtime environment is the same as anywhere else; coding standards and best practices still apply. The following recommendations are specific to writing code for Lambda, outside of those general best practices for your language of choice.
Business Logic outside the Handler
Your Lambda function starts execution at the handler function you define within your code package. Within your handler function, you should receive the parameters provided by Lambda, pass those parameters to another function to parse into new variables/objects that are contextualized to your application, and then reach out to your business logic that sits outside the handler function and file. This enables you to create a code package that is as decoupled from the Lambda runtime environment as possible, which will greatly benefit your ability to test your code within the context of objects and functions you've created and to reuse the business logic you've written in environments outside of Lambda. A poor practice, by contrast, is to create the core business logic within the handler method itself so that it is tightly coupled to Lambda and depends directly on Lambda event source objects.
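A minimal Python sketch of the recommended structure (shown here in Python rather than the Java of the original whitepaper's example; the event shape, module names, and logic are illustrative assumptions) keeps the handler as a thin adapter and places the business logic in plain functions that know nothing about Lambda:

# order_logic.py -- business logic with no Lambda dependencies
def calculate_total(items):
    return sum(item["price"] * item["quantity"] for item in items)

# handler.py -- thin adapter between the Lambda event and the business logic
from order_logic import calculate_total

def handler(event, context):
    # Translate the event source's shape into application-level objects...
    items = [
        {"price": float(r["price"]), "quantity": int(r["quantity"])}
        for r in event.get("items", [])
    ]
    # ...then delegate to logic that can be tested and reused outside Lambda.
    return {"total": calculate_total(items)}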
Warm Containers: Caching/Keep-Alive/Reuse
As mentioned earlier, you should write code that takes advantage of a warm function container. This means scoping your variables in a way that allows them and their contents to be reused on subsequent invocations where possible. This is especially impactful for things like bootstrapping configuration, keeping external dependency connections open, or one-time initialization of large objects that can persist from one invocation to the next.
Control Dependencies
The Lambda execution environment contains many libraries, such as the AWS SDK, for the Node.js and Python runtimes. (For a full list, see the Lambda Execution Environment and Available Libraries.59) To enable the latest set of features and security updates, Lambda periodically updates these libraries. These updates can introduce subtle changes to the behavior of your Lambda function. To have full control of the dependencies your function uses, we recommend packaging all of your dependencies with your deployment package.
Trim Dependencies
Lambda function code packages are permitted to be at most 50 MB when compressed and 250 MB when extracted in the runtime environment. If you are including large dependency artifacts with your function code, you may need to trim the included dependencies down to just the runtime essentials. This also allows your Lambda function code to be downloaded and installed in the runtime environment more quickly for cold starts.
Fail Fast
Configure reasonably short timeouts for any external dependencies, as well as a reasonably short overall Lambda function timeout. Don't allow your function to spin helplessly while waiting for a dependency to respond. Because Lambda is billed based on the duration of your function execution, you don't want to incur higher charges than necessary when your function dependencies are unresponsive.
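In Python, for example, both the AWS SDK and most HTTP clients accept explicit timeouts. The values, table name, and key shown below are illustrative assumptions and should be tuned to your dependency and to your function's own configured timeout:

import boto3
from botocore.config import Config

# Fail fast: bound connection and read time, and don't retry indefinitely.
tight_config = Config(connect_timeout=2, read_timeout=5, retries={"max_attempts": 2})
dynamodb = boto3.client("dynamodb", config=tight_config)

def handler(event, context):
    # If the dependency is slow or unreachable, this call raises quickly instead
    # of consuming the remainder of the function's billed duration.
    return dynamodb.get_item(
        TableName="example-table",               # hypothetical table
        Key={"id": {"S": event["id"]}},
    )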
Handling Exceptions
You might decide to throw and handle exceptions differently depending on your use case for Lambda. If you're placing an API Gateway API in front of a Lambda function, you may decide to throw an exception back to API Gateway, where it can be transformed, based on its contents, into the appropriate HTTP status code and message for the error that occurred. If you're building an asynchronous data processing system, you might decide that some exceptions within your code base should equate to the invocation moving to the dead letter queue for reprocessing, while other errors can just be logged and not placed on the dead letter queue. You should decide what your failure behaviors should be and ensure that you are creating and throwing the correct types of exceptions within your code to achieve that behavior. To learn more about handling exceptions, see the following for details about how exceptions are defined for each language runtime environment:
• Java60
• Node.js61
• Python62
• C#63
Code Management Best Practices
Now that the code you've written for your Lambda functions follows best practices, how should you manage that code? With the development speed that Lambda enables, you might be able to complete code changes at a pace that is unfamiliar for your typical processes. And the reduced amount of code that serverless architectures require means that your Lambda function code represents a large portion of what makes your entire application stack function. So having good source code management of your Lambda function code will help ensure secure, efficient, and smooth change management processes.
Code Repository Organization
We recommend that you organize your Lambda function source code to be very fine grained within your source code management solution of choice. This usually means having a 1:1 relationship between Lambda functions and code repositories or repository projects. (The lexicon differs from one source code management tool to another.) However, if you are following a strategy of creating separate Lambda functions for different lifecycle stages of the same logical function (that is, you have two Lambda functions, one called MyLambdaFunction DEV and another called MyLambdaFunction PROD), it makes sense to have those separate Lambda functions share a code base (perhaps deploying from separate release branches). The main purpose of organizing your code this way is to help ensure that all of the code that contributes to the code package of a particular Lambda function is independently versioned and committed to, and defines its own dependencies and those dependencies' versions. Each Lambda function should be fully decoupled, from a source code perspective, from other Lambda functions, just as it will be when it's deployed. You don't want to go through the process of modernizing an application architecture to be modular and decoupled with Lambda only to be left with a monolithic and tightly coupled code base.
Release Branches
We recommend that you create a repository or project branching strategy that enables you to correlate Lambda function deployments with incremental commits on a release branch. If you don't have a way to confidently correlate source code changes within your repository with the changes that have been deployed to a live Lambda function, an operational investigation will always begin with trying to identify which version of your code base is currently deployed. You should build a CI/CD pipeline (more recommendations for this later) that allows you to correlate Lambda code package creation and deployment times with the code changes that have occurred on your release branch for that Lambda function.
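One lightweight way to make that correlation visible is to record the release branch commit on the published version itself. The following Python sketch (boto3; the environment variable carrying the commit ID is an assumption about your pipeline) stores the commit hash in the version's description:

import os
import boto3

lambda_client = boto3.client("lambda")

def publish_release(function_name):
    # COMMIT_SHA is assumed to be exported by the CI/CD system running this step.
    commit = os.environ.get("COMMIT_SHA", "unknown")
    version = lambda_client.publish_version(
        FunctionName=function_name,
        Description=f"release-branch commit {commit}",
    )["Version"]
    return version

# During an operational investigation, listing the function's versions then
# shows exactly which commit each deployed version came from.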
Testing
Time spent developing thorough testing of your code is the best way to ensure quality within a serverless architecture. However, serverless architectures will enforce proper unit testing practices, perhaps more than you're used to. Many developers use unit test tools and frameworks to write tests that cause their code to also test its dependencies. This is a single test that combines a unit test and an integration test, but it doesn't perform either very well. It's important to scope all of your unit test cases down to a single code path within a single logical function, mocking all inputs from upstream and outputs from downstream. This allows you to isolate your test cases to only the code that you own. When writing unit tests, you can and should assume that your dependencies behave properly based on the contracts your code has with them as APIs, libraries, etc. It's similarly important for your integration tests to test the integration of your code with its dependencies in an environment that mimics the live environment. Testing whether a developer laptop or build server can integrate with a downstream dependency doesn't fully test whether your code will integrate successfully once in the live environment. This is especially true of the Lambda environment, where your code doesn't have ownership of the events that are going to be delivered by event sources, and you don't have the ability to create the Lambda runtime environment outside of Lambda.
Unit Tests
With what we've said earlier in mind, we recommend that you unit test your Lambda function code thoroughly, focusing mostly on the business logic outside your handler function. You should also unit test your ability to parse sample/mock objects for the event sources of your function. However, the bulk of your logic and tests should occur with mocked objects and functions that you have full control over within your code base. If you feel that there are important things inside your handler function that need to be unit tested, it can be a sign that you should further encapsulate and externalize the logic in your handler function. Also, to supplement the unit tests you've written, you should create local test automation using AWS SAM Local that can serve as local end-to-end testing of your function code (note that this isn't a replacement for unit testing).
Integration Testing
For integration tests, we recommend that you create lower-lifecycle versions of your Lambda functions where your code packages are deployed and invoked through sample events that your CI/CD pipeline can trigger and inspect the results of. (Implementation depends on your application and architecture.)
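As a minimal sketch of the unit-testing guidance above, the following pytest-style tests exercise only code you own: they feed a hand-built mock event to the parsing layer and assert on the business logic's output, with no AWS calls involved. The module and function names are the hypothetical ones used in the earlier handler sketch:

# test_order_logic.py
from order_logic import calculate_total
from handler import handler

def test_calculate_total():
    items = [{"price": 2.0, "quantity": 3}, {"price": 1.5, "quantity": 2}]
    assert calculate_total(items) == 9.0

def test_handler_parses_event():
    # A mocked event standing in for what the event source would deliver.
    event = {"items": [{"price": "2.0", "quantity": "3"}]}
    assert handler(event, context=None) == {"total": 6.0}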
Continuous Delivery
We recommend that you programmatically manage all of your serverless deployments through CI/CD pipelines. The speed with which you will be able to develop new features and push code changes with Lambda will allow you to deploy much more frequently. Manual deployments, combined with a need to deploy more frequently, often result in the manual process becoming a bottleneck and prone to error. The capabilities provided by AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, AWS SAM, and AWS CodeStar can be natively combined into a holistic and automated serverless CI/CD pipeline (where the pipeline itself also has no infrastructure that you need to manage). Here is how each of these services plays a role in a well-defined continuous delivery strategy.
AWS CodeCommit – Provides hosted private Git repositories that enable you to host your serverless source code, create a branching strategy that meets our recommendations (including fine-grained access control), and integrate with AWS CodePipeline to trigger a new pipeline execution when a new commit occurs in your release branch.
AWS CodePipeline – Defines the steps in your pipeline. Typically, an AWS CodePipeline pipeline begins where your source code changes arrive. Then you execute a build phase, execute tests against your new build, and perform a deployment and release of your build into the live environment. AWS CodePipeline provides native integration options for each of these phases with other AWS services.
AWS CodeBuild – Can be used for the build stage of your pipeline. Use it to build your code, execute unit tests, and create a new Lambda code package. Then integrate with AWS SAM to push your code package to Amazon S3 and push the new package to Lambda via AWS CloudFormation.
After your new version is published to your Lambda function through AWS CodeBuild, you can automate the subsequent steps in your AWS CodePipeline pipeline by creating deployment-centric Lambda functions. They will own the logic for performing integration tests, updating function aliases, determining whether immediate rollbacks are necessary, and any other application-centric steps that need to occur during a deployment for your application (like cache flushes, notification messages, etc.). Each one of these deployment-centric Lambda functions can be invoked in sequence as a step within your AWS CodePipeline pipeline using the Invoke action. For details on using Lambda within AWS CodePipeline, see this documentation.64 In the end, each application and organization has its own requirements for moving source code from repository to production. The more automation you can introduce into this process, the more agility you can achieve using Lambda.
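A deployment-centric function of this kind receives a CodePipeline job and must report success or failure back to the pipeline. The following Python sketch shows the general shape of such a function; the integration test it runs is a placeholder assumption specific to your application:

import boto3

codepipeline = boto3.client("codepipeline")

def run_integration_tests():
    # Placeholder for application-specific checks (invoke the new alias,
    # verify responses, flush caches, send notifications, and so on).
    return True

def handler(event, context):
    job_id = event["CodePipeline.job"]["id"]
    try:
        if run_integration_tests():
            codepipeline.put_job_success_result(jobId=job_id)
        else:
            codepipeline.put_job_failure_result(
                jobId=job_id,
                failureDetails={"type": "JobFailed", "message": "Integration tests failed"})
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)})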
AWS CodeStar – A unified user interface for creating a serverless application (and other types of applications) that helps you follow these best practices from the beginning. When you create a new project in AWS CodeStar, you automatically begin with a fully implemented and integrated continuous delivery toolchain (using the AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild services mentioned earlier). You will also have a place where you can manage all aspects of the SDLC for your project, including team member management, issue tracking, development, deployment, and operations. For more information about AWS CodeStar, go here.65
Sample Serverless Architectures
There are a number of sample serverless architectures and instructions for recreating them in your own AWS account. You can find them on GitHub.66
Conclusion
Building serverless applications on AWS relieves you of the responsibilities and constraints that servers introduce. Using AWS Lambda as your serverless logic layer enables you to build faster and focus your development efforts on what differentiates your application. Alongside Lambda, AWS provides additional serverless capabilities so that you can build robust, performant, event-driven, reliable, secure, and cost-effective applications. Understanding the capabilities and recommendations described in this whitepaper can help ensure your success when building serverless applications of your own. To learn more on related topics, see Serverless Computing and Applications.67
Contributors
The following individuals and organizations contributed to this document:
• Andrew Baird, Sr. Solutions Architect, AWS
• George Huang, Sr. Product Marketing Manager, AWS
• Chris Munns, Sr. Developer Advocate, AWS
• Orr Weinstein, Sr. Product Manager, AWS
Notes
1. https://aws.amazon.com/lambda/
2. https://aws.amazon.com/api-gateway/
3. https://aws.amazon.com/s3/
4. https://aws.amazon.com/dynamodb/
5. https://aws.amazon.com/sns/
6. https://aws.amazon.com/sqs/
7. https://aws.amazon.com/step-functions/
8. https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
9. https://aws.amazon.com/kinesis/
10. http://docs.aws.amazon.com/lambda/latest/dg/invoking-lambda-function.html
11. http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
12. http://docs.aws.amazon.com/lambda/latest/dg/get-started-create-function.html
13. https://github.com/awslabs/aws-serverless-workshops
14. https://aws.amazon.com/blogs/compute/scripting-languages-for-aws-lambda-running-php-ruby-and-go/
15. http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
16. https://github.com/awslabs/aws-sam-local
17. http://docs.aws.amazon.com/lambda/latest/dg/limits.html
18. http://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html
19. http://docs.aws.amazon.com/lambda/latest/dg/API_UpdateFunctionCode.html
20. http://docs.aws.amazon.com/lambda/latest/dg/java-programming-model.html
21. http://docs.aws.amazon.com/lambda/latest/dg/programming-model.html
22. http://docs.aws.amazon.com/lambda/latest/dg/python-programming-model.html
23. http://docs.aws.amazon.com/lambda/latest/dg/dotnet-programming-model.html
24. http://docs.aws.amazon.com/lambda/latest/dg/java-logging.html
25. http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-logging.html
26. http://docs.aws.amazon.com/lambda/latest/dg/python-logging.html
27. http://docs.aws.amazon.com/lambda/latest/dg/dotnet-logging.html
28. http://docs.aws.amazon.com/lambda/latest/dg/programming-model-v2.html
29. http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
30. http://docs.aws.amazon.com/lambda/latest/dg/invoking-lambda-function.html
31. http://docs.aws.amazon.com/lambda/latest/dg/API_PublishVersion.html
32. http://docs.aws.amazon.com/lambda/latest/dg/API_UpdateFunctionCode.html
33. http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
34. http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
35. http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
36. http://docs.aws.amazon.com/lambda/latest/dg/vpc.html
37. https://aws.amazon.com/blogs/compute/robust-serverless-application-design-with-aws-lambda-dlq/
38. http://d0.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
39. https://aws.amazon.com/sdk-for-java/
40. https://aws.amazon.com/cognito/
41. http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/specifying-conditions.html
42. http://docs.aws.amazon.com/cognitoidentity/latest/APIReference/API_GetCredentialsForIdentity.html
44. https://aws.amazon.com/elasticache/
45. http://docs.aws.amazon.com/lambda/latest/dg/env_variables.html#env_encrypt
46. http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
47. http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
48. http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-generate-sdk.html
49. http://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html
50. http://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-to-api.html
51. https://aws.amazon.com/codepipeline/
52. http://docs.aws.amazon.com/lambda/latest/dg/vpc.html#vpc-setup-guidelines
53. http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatchLogs.html
54. http://docs.aws.amazon.com/lambda/latest/dg/lambda-x-ray.html
55. http://docs.aws.amazon.com/lambda/latest/dg/lambda-x-ray.html
56. https://github.com/awslabs/serverless-application-model
57. https://github.com/awslabs/serverless-application-model
58. https://github.com/awslabs/aws-sam-local
59. http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
60. http://docs.aws.amazon.com/lambda/latest/dg/java-exceptions.html
61. http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-mode-exceptions.html
62. http://docs.aws.amazon.com/lambda/latest/dg/python-exceptions.html
63. http://docs.aws.amazon.com/lambda/latest/dg/dotnet-exceptions.html
64. http://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
65. https://aws.amazon.com/codestar/
66. https://github.com/awslabs/aws-serverless-workshops
67. https://aws.amazon.com/serverless/
General
Encrypting_Data_at_Rest
Encrypting Data at Rest
Ken Beer, Ryan Holland
November 2014
This paper has been archived. For the latest security information, see the AWS Cloud Security Learning page on the AWS website at: https://aws.amazon.com/security/security-learning
Contents
Contents
Abstract
Introduction
The Key to Encryption: Who Controls the Keys?
Model A: You control the encryption method and the entire KMI
Model B: You control the encryption method; AWS provides the storage component of the KMI while you provide the management layer of the KMI
Model C: AWS controls the encryption method and the entire KMI
Conclusion
References and Further Reading
Abstract
Organizational policies, or industry or government regulations, might require the use of encryption at rest to protect your data. The flexible nature of Amazon Web Services (AWS) allows you to choose from a variety of different options that meet your needs. This whitepaper provides an overview of the different methods available today for encrypting your data at rest.
Introduction
Amazon Web Services (AWS) delivers a secure, scalable cloud computing platform with high availability, offering the flexibility for you to build a wide range of applications. If you require an additional layer of security for the data you store in the cloud, there are several options for encrypting data at rest, ranging from completely automated AWS encryption solutions to manual client-side options. Choosing the right solution depends on which AWS service you're using and your requirements for key management. This whitepaper provides an overview of various methods for encrypting data at rest in AWS. Links to additional resources are provided for a deeper understanding of how to actually implement the encryption methods discussed.
The Key to Encryption: Who Controls the Keys?
Encryption on any system requires three components: (1) data to encrypt; (2) a method to encrypt the data using a cryptographic algorithm; and (3) encryption keys to be used in conjunction with the data and the algorithm. Most modern programming languages provide libraries with a wide range of available cryptographic algorithms, such as the Advanced Encryption Standard (AES). Choosing the right algorithm involves evaluating security, performance, and compliance requirements specific to your application. Although the selection of an encryption algorithm is important, protecting the keys from unauthorized access is critical. Managing the security of encryption keys is often performed using a key management infrastructure (KMI). A KMI is composed of two sub-components: the storage layer that protects the plaintext keys and the management layer that authorizes key usage. A common way to protect keys in a KMI is to use a hardware security module (HSM). An HSM is a dedicated storage and data processing device that performs cryptographic operations using keys on the device. An HSM typically provides tamper evidence or resistance to protect keys from unauthorized use. A software-based authorization layer controls who can administer the HSM and which users or applications can use which keys in the HSM.
As you deploy encryption for various data classifications in AWS, it is important to understand exactly who has access to your encryption keys or data and under what conditions. As shown in Figure 1, there are three different models for how you and/or AWS provide the encryption method and the KMI:
• You control the encryption method and the entire KMI.
• You control the encryption method; AWS provides the storage component of the KMI, and you provide the management layer of the KMI.
• AWS controls the encryption method and the entire KMI.
Figure 1: Encryption models in AWS
Model A: You control the encryption method and the entire KMI
In this model, you use your own KMI to generate, store, and manage access to keys, as well as control all encryption methods in your applications. The physical location of the KMI and the encryption method can be outside of AWS or in an Amazon Elastic Compute Cloud (Amazon EC2) instance you own. The encryption method can be a combination of open source tools, AWS SDKs, or third-party software and/or hardware. The important security property of this model is that you have full control over the encryption keys and the execution environment that utilizes those keys in the encryption code. AWS has no access to your keys and cannot perform encryption or decryption on your behalf. You are responsible for the proper storage, management, and use of keys to ensure the confidentiality, integrity, and availability of your data. Data can be encrypted in AWS services as described in the following sections.
Amazon S3
You can encrypt data using any encryption method you want and then upload the encrypted data using the Amazon Simple Storage Service (Amazon S3) API. Most common application languages include cryptographic libraries that allow you to perform encryption in your applications. Two commonly available open source tools are Bouncy Castle and OpenSSL. After you have encrypted an object and safely stored the key in your KMI, the encrypted object can be uploaded to Amazon S3 directly with a PUT request. To decrypt this data, you issue the GET request in the Amazon S3 API and then pass the encrypted data to your local application for decryption.
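As an illustration of this model, the following Python sketch encrypts an object with a key supplied by your KMI before uploading it. It uses the open source cryptography library's Fernet construction purely as an example cipher (this paper names Bouncy Castle and OpenSSL as tools); the bucket name and the way the key is obtained from your KMI are assumptions:

import boto3
from cryptography.fernet import Fernet

s3 = boto3.client("s3")

def put_encrypted_object(bucket, key, plaintext, data_key):
    # data_key is supplied by your KMI; AWS never sees it or the plaintext.
    ciphertext = Fernet(data_key).encrypt(plaintext)
    s3.put_object(Bucket=bucket, Key=key, Body=ciphertext)

def get_decrypted_object(bucket, key, data_key):
    ciphertext = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return Fernet(data_key).decrypt(ciphertext)

# Example usage with a locally generated key standing in for the KMI:
# data_key = Fernet.generate_key()
# put_encrypted_object("example-bucket", "report.txt", b"sensitive data", data_key)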
AWS provides an alternative to these open source encryption tools with the Amazon S3 encryption client, which is an open source set of APIs embedded into the AWS SDKs. This client lets you supply a key from your KMI that can be used to encrypt or decrypt your data as part of the call to Amazon S3. The SDK leverages Java Cryptography Extensions (JCEs) in your application to take your symmetric or asymmetric key as input and encrypt the object prior to uploading it to Amazon S3. The process is reversed when the SDK is used to retrieve an object: the downloaded encrypted object from Amazon S3 is passed to the client along with the key from your KMI, and the underlying JCE in your application decrypts the object. The Amazon S3 encryption client is integrated into the AWS SDKs for Java, Ruby, and .NET, and it provides a transparent drop-in replacement for any cryptographic code you might have used previously with your application that interacts with Amazon S3. Although AWS provides the encryption method, you control the security of your data because you control the keys for that engine to use. If you're using the Amazon S3 encryption client on premises, AWS never has access to your keys or unencrypted data. If you're using the client in an application running in Amazon EC2, a best practice is to pass keys to the client using secure transport (e.g., Secure Sockets Layer (SSL) or Secure Shell (SSH)) from your KMI to help ensure confidentiality. For more information, see the AWS SDK for Java documentation and Using Client-Side Encryption in the Amazon S3 Developer Guide. Figure 2 shows how these two methods of client-side encryption work for Amazon S3 data.
Figure 2: Amazon S3 client-side encryption from an on-premises system or from within your Amazon EC2 application
There are third-party solutions available that can simplify the key management process when encrypting data to Amazon S3. CloudBerry Explorer PRO for Amazon S3 and CloudBerry Backup both offer a client-side encryption option that applies a user-defined password to the encryption scheme to protect files stored on Amazon S3. For programmatic encryption needs, SafeNet ProtectApp for Java integrates with the SafeNet KeySecure KMI to provide client-side encryption in your application. The KeySecure KMI provides secure key storage and policy enforcement for keys that are passed to the ProtectApp Java client, which is compatible with the AWS SDK. The KeySecure KMI can run as an on-premises appliance or as a virtual appliance in Amazon EC2. Figure 3 shows how the SafeNet solution can be used to encrypt data stored on Amazon S3.
Figure 3: Amazon S3 client-side encryption from an on-premises system or from within your application in Amazon EC2 using SafeNet ProtectApp and SafeNet KeySecure KMI
Amazon EBS
Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are network-attached and persist independently from the life of an instance. Because Amazon EBS volumes are presented to an instance as a block device, you can leverage most standard encryption tools for file system-level or block-level encryption. Some common block-level open source encryption solutions for Linux are Loop-AES, dm-crypt (with or without LUKS), and TrueCrypt. Each of these operates below the file system layer, using kernel-space device drivers to perform encryption and
decryption of data. These tools are useful when you want all data written to a volume to be encrypted, regardless of what directory the data is stored in. Another option is to use file system-level encryption, which works by stacking an encrypted file system on top of an existing file system. This method is typically used to encrypt a specific directory. eCryptfs and EncFS are two Linux-based open source examples of file system-level encryption tools. These solutions require you to provide keys, either manually or from your KMI. An important caveat with both block-level and file system-level encryption tools is that they can only be used to encrypt data volumes that are not Amazon EBS boot volumes. This is because these tools don't allow you to automatically make a trusted key available to the boot volume at startup.
Encrypting Amazon EBS volumes attached to Windows instances can be done using BitLocker or Encrypting File System (EFS), as well as open source applications like TrueCrypt. In either case, you still need to provide keys to these encryption methods, and you can only encrypt data volumes. There are AWS partner solutions that can help automate the process of encrypting Amazon EBS volumes, as well as supplying and protecting the necessary keys. Trend Micro SecureCloud and SafeNet ProtectV are two such partner products that encrypt Amazon EBS volumes and include a KMI. Both products are able to encrypt boot volumes in addition to data volumes. These solutions also support use cases where Amazon EBS volumes attach to auto-scaled Amazon EC2 instances. Figure 4 shows how the SafeNet and Trend Micro solutions can be used to encrypt data stored on Amazon EBS using keys managed on premises, via software as a service (SaaS), or in software running on Amazon EC2.
Figure 4: Encryption in Amazon EBS using SafeNet ProtectV or Trend Micro SecureCloud
AWS Storage Gateway
AWS Storage Gateway is a service connecting an on-premises software appliance with Amazon S3. It can be exposed to your network as an iSCSI disk to facilitate copying data from other sources. Data on disk volumes attached to the AWS Storage Gateway will be automatically uploaded to Amazon S3 based on policy. You can encrypt source data on the disk volumes using any of the file encryption methods described previously (e.g., Bouncy Castle or OpenSSL) before it reaches the disk. You can also use a block-level encryption tool (e.g., BitLocker or dm-crypt/LUKS) on the iSCSI endpoint that AWS Storage Gateway exposes to encrypt all data on the disk volume. Alternatively, two AWS partner solutions, Trend Micro SecureCloud and SafeNet StorageSecure, can perform both the encryption and key management for the iSCSI disk volume exposed by AWS Storage Gateway. These partners provide an easy, check-box solution to both encrypt data and manage the necessary keys that is similar in design to how their Amazon EBS encryption solutions work.
Amazon RDS
Encryption of data in Amazon Relational Database Service (Amazon RDS) using client-side technology requires you to consider how you want data queries to work. Because Amazon RDS doesn't expose the attached disk it uses for data storage, transparent disk encryption using the techniques described in the previous Amazon EBS section is not available to you. However, selective encryption of database fields in your application can be done using any of the standard
However, selective encryption of database fields in your application can be done using any of the standard encryption libraries mentioned previously (e.g., Bouncy Castle, OpenSSL) before the data is passed to your Amazon RDS instance. While this specific field data would not easily support range queries in the database, queries based on unencrypted fields can still return useful results. The encrypted fields of the returned results can be decrypted by your local application for presentation.

To support more efficient querying of encrypted data, you can store a keyed-hash message authentication code (HMAC) of an encrypted field in your schema, supplying a key for the hash function. Subsequent queries of protected fields can then search on the HMAC of the value being sought, so the database performs an equality lookup against the encrypted data without the plaintext value ever appearing in the query. (A minimal sketch of this HMAC-index pattern appears after Figure 6 below.) Any of the encryption methods you choose must be performed on your own application instance before data is sent to the Amazon RDS instance.

CipherCloud and Voltage Security are two AWS partners with solutions that simplify protecting the confidentiality of data in Amazon RDS. Both vendors have the ability to encrypt data using format-preserving encryption (FPE), which allows ciphertext to be inserted into the database without breaking the schema. They also support tokenization options with integrated lookup tables. In either case, your data is encrypted or tokenized in your application before being written to the Amazon RDS instance. These partners provide options to index and search against databases with encrypted or tokenized fields. The unencrypted or untokenized data can be read from the database by other applications without needing to distribute keys or mapping tables to those applications to unlock the encrypted or tokenized fields. For example, you could move data from Amazon RDS to the Amazon Redshift data warehousing solution and run queries against the non-sensitive fields while keeping sensitive fields encrypted or tokenized.

Figure 5 shows how the Voltage solution can be used within Amazon EC2 to encrypt data before it is written to the Amazon RDS instance. The encryption keys are pulled from the Voltage KMI located in your data center by the Voltage Security client running on your applications on Amazon EC2.

Figure 5: Encrypting data in your Amazon EC2 applications before writing to Amazon RDS using Voltage SecureData

CipherCloud for Amazon Web Services is a solution that works in a way that is similar to the way the Voltage Security client works for applications running in Amazon EC2 that need to send encrypted data to and from Amazon RDS. CipherCloud provides a JDBC driver that can be installed on the application, regardless of whether it's running in Amazon EC2 or in your data center. In addition, the CipherCloud for Any App solution can be deployed as an inline gateway to intercept data as it is being sent to and from your Amazon RDS instance. Figure 6 shows how the CipherCloud solution can be deployed this way to encrypt or tokenize data leaving your data center before it is written to the Amazon RDS instance.

Figure 6: Encrypting data in your data center before writing to Amazon RDS using the CipherCloud Encryption Gateway
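The following is a minimal sketch of the HMAC-index pattern described above, written in Python. It assumes two locally held keys from your KMI (one for field encryption, one reserved for the index), the open source cryptography package for AES-GCM, and illustrative column names; it is not part of any AWS SDK.

```python
# Sketch of the HMAC-index pattern for equality lookups on encrypted fields.
# FIELD_KEY encrypts the column value; INDEX_KEY produces a deterministic,
# keyed index so the database never sees the plaintext.
import hmac
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

FIELD_KEY = os.urandom(32)   # in practice, retrieved from your KMI
INDEX_KEY = os.urandom(32)   # separate key reserved for the HMAC index

def encrypt_field(plaintext: str) -> bytes:
    """Encrypt a single column value; output is randomized per call."""
    nonce = os.urandom(12)
    return nonce + AESGCM(FIELD_KEY).encrypt(nonce, plaintext.encode(), None)

def decrypt_field(blob: bytes) -> str:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(FIELD_KEY).decrypt(nonce, ciphertext, None).decode()

def hmac_index(plaintext: str) -> str:
    """Deterministic keyed hash stored alongside the ciphertext."""
    return hmac.new(INDEX_KEY, plaintext.encode(), hashlib.sha256).hexdigest()

# INSERT: store both the ciphertext and its HMAC index column.
row = {"ssn_enc": encrypt_field("123-45-6789"),
       "ssn_hmac": hmac_index("123-45-6789")}

# QUERY: search on the HMAC column; the plaintext never reaches the database,
# e.g. SELECT ssn_enc FROM customers WHERE ssn_hmac = %s
assert decrypt_field(row["ssn_enc"]) == "123-45-6789"
```

Note that the index only supports equality matches; range queries still require unencrypted or tokenized fields, as described above.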
Amazon EMR

Amazon Elastic MapReduce (Amazon EMR) provides an easy-to-use Hadoop implementation running on Amazon EC2. Performing encryption throughout the MapReduce operation involves encryption and key management at four distinct points:

1. The source data
2. Hadoop Distributed File System (HDFS)
3. Shuffle phase
4. Output data

If the source data is not encrypted, then this step can be skipped and SSL can be used to help protect data in transit to the Amazon EMR cluster. If the source data is encrypted, then your MapReduce job will need to be able to decrypt the data as it is ingested. If your job flow uses Java and the source data is in Amazon S3, you can use any of the client decryption methods described in the previous Amazon S3 sections.

The storage used for the HDFS mount point is the ephemeral storage of the cluster nodes. Depending on the instance type, there might be more than one mount. Encrypting these mount points requires the use of an Amazon EMR bootstrap script that will do the following:

• Stop the Hadoop service
• Install a file system encryption tool on the instance
• Create an encrypted directory to mount the encrypted file system on top of the existing mount points
• Restart the Hadoop service

You could, for example, perform these steps using the open source eCryptfs package and an ephemeral key generated in your code on each of the HDFS mounts. You don't need to worry about persistent storage of this encryption key, because the data it encrypts does not persist beyond the life of the HDFS instance.

The shuffle phase involves passing data between cluster nodes before the reduce step. To encrypt this data in transit, you can enable SSL with the Configure Hadoop bootstrap option when you create your cluster. Finally, to enable encryption of the output data, your MapReduce job should encrypt the output using a key sourced from your KMI. This data can be sent to Amazon S3 for storage in encrypted form.

Model B: You control the encryption method, AWS provides the KMI storage component, and you provide the KMI management layer

This model is similar to Model A in that you manage the encryption method, but it differs from Model A in that the keys are stored in an AWS CloudHSM appliance rather than in a key storage system that you manage on premises. While the keys are stored in the AWS environment, they are inaccessible to any employee at AWS. This is because only you have access to the cryptographic partitions within the dedicated HSM to use the keys. The AWS CloudHSM appliance has both physical and logical tamper detection and response mechanisms that trigger zeroization of the appliance. Zeroization erases the HSM's volatile memory, where any keys in the process of being decrypted were stored, and destroys the key that encrypts stored objects, effectively causing all keys on the HSM to be inaccessible and unrecoverable.

When you determine whether using AWS CloudHSM is appropriate for your deployment, it is important to understand the role that an HSM plays in encrypting data. An HSM can be used to generate and store key material and can perform encryption and decryption operations, but it does not perform any key lifecycle management functions (e.g., access control policy, key rotation). This means that a compatible KMI might be needed in addition to the AWS CloudHSM appliance before deploying your application. The KMI you provide can be deployed either on premises or within Amazon EC2 and can communicate with the AWS CloudHSM instance securely over SSL to help protect data and encryption keys.
Because the AWS CloudHSM service uses SafeNet Luna appliances, any key management server that supports the SafeNet Luna platform can also be used with AWS CloudHSM. Any of the encryption options described for AWS services in Model A can work with AWS CloudHSM, as long as the solution supports the SafeNet Luna platform. This allows you to run your KMI within the AWS compute environment while maintaining a root of trust in a hardware appliance to which only you have access.

Applications must be able to access your AWS CloudHSM appliance in an Amazon Virtual Private Cloud (Amazon VPC). The AWS CloudHSM client provided by SafeNet interacts with the AWS CloudHSM appliance to encrypt data from your application. Encrypted data can then be sent to any AWS service for storage. Database, disk volume, and file encryption applications can all be supported with AWS CloudHSM and your custom application. Figure 7 shows how the AWS CloudHSM solution works with your applications running on Amazon EC2 in an Amazon VPC.

Figure 7: AWS CloudHSM deployed in Amazon VPC

To achieve the highest availability and durability of keys in your AWS CloudHSM appliance, we recommend deploying multiple AWS CloudHSM appliances across Availability Zones or in conjunction with an on-premises SafeNet Luna appliance that you manage. The SafeNet Luna solution supports secure replication of keying material across appliances. For more information, see AWS CloudHSM on the AWS website.

Model C: AWS controls the encryption method and the entire KMI

In this model, AWS provides server-side encryption of your data, transparently managing the encryption method and the keys.

AWS Key Management Service (KMS)

AWS Key Management Service (KMS) is a managed encryption service that lets you provision and use keys to encrypt your data in AWS services and your applications. Master keys in AWS KMS are used in a fashion similar to the way master keys in an HSM are used. After master keys are created, they are designed never to be exported from the service. Data can be sent into the service to be encrypted or decrypted under a specific master key under your account. This design gives you centralized control over who can access your master keys to encrypt and decrypt data, and it gives you the ability to audit this access. AWS KMS is natively integrated with other AWS services, including Amazon EBS, Amazon S3, and Amazon Redshift, to simplify encryption of your data within those services. AWS SDKs are integrated with AWS KMS to let you encrypt data in your custom applications. For applications that need to encrypt data, AWS KMS provides global availability, low latency, and a high level of durability for your keys. Visit https://aws.amazon.com/kms/ or download the KMS Cryptographic Details white paper to learn more.

AWS KMS and other services that encrypt your data directly use a method called envelope encryption to provide a balance between performance and security. Figure 8 describes envelope encryption:

1. A data key is generated by the AWS service at the time you request your data to be encrypted.
2. The data key is used to encrypt your data.
3. The data key is then encrypted with a key-encrypting key unique to the service storing your data.
4. The encrypted data key and the encrypted data are then stored by the AWS storage service on your behalf.

Figure 8: Envelope encryption
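To make the envelope flow concrete, the following is an illustrative client-side walk-through of the same steps, driven from an application with AWS KMS wrapping the data key and AES-GCM protecting the data locally. The AWS services listed above perform these steps for you; this sketch only mirrors the flow. It assumes boto3 credentials, the cryptography package, and a placeholder KMS key alias.

```python
# Illustrative client-side version of the envelope-encryption steps in Figure 8.
# AWS KMS wraps/unwraps the data key; AES-GCM encrypts the data locally.
import boto3
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
KEY_ID = "alias/my-app-key"   # hypothetical KMS master key alias

def encrypt_blob(plaintext: bytes) -> dict:
    # Step 1: request a data key (plaintext copy + copy encrypted under the master key).
    dk = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")
    nonce = os.urandom(12)
    # Step 2: encrypt the data locally with the plaintext data key.
    ciphertext = AESGCM(dk["Plaintext"]).encrypt(nonce, plaintext, None)
    # Steps 3-4: persist only the encrypted data key next to the ciphertext.
    return {"ciphertext": ciphertext, "nonce": nonce,
            "encrypted_data_key": dk["CiphertextBlob"]}

def decrypt_blob(record: dict) -> bytes:
    # Reverse the flow: KMS unwraps the data key, which then decrypts the data.
    dk = kms.decrypt(CiphertextBlob=record["encrypted_data_key"])
    return AESGCM(dk["Plaintext"]).decrypt(record["nonce"],
                                           record["ciphertext"], None)
```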
The key-encrypting keys used to encrypt data keys are stored and managed separately from the data and the data keys. Strict access controls are placed on the encryption keys, designed to prevent unauthorized use by AWS employees. When you need access to your plaintext data, this process is reversed: the encrypted data key is decrypted using the key-encrypting key, and the data key is then used to decrypt your data.

The following AWS services offer a variety of encryption features to choose from.

Amazon S3

There are three ways of encrypting your data in Amazon S3 using server-side encryption:

1. Server-side encryption: You can set an API flag or check a box in the AWS Management Console to have data encrypted before it is written to disk in Amazon S3. Each object is encrypted with a unique data key. As an additional safeguard, this key is encrypted with a periodically rotated master key managed by Amazon S3. Amazon S3 server-side encryption uses 256-bit Advanced Encryption Standard (AES) keys for both object and master keys. This feature is offered at no additional cost beyond what you pay for using Amazon S3.

2. Server-side encryption using customer-provided keys: You can use your own encryption key while uploading an object to Amazon S3. This encryption key is used by Amazon S3 to encrypt your data using AES-256. After the object is encrypted, the encryption key you supplied is deleted from the Amazon S3 system that used it to protect your data. When you retrieve this object from Amazon S3, you must provide the same encryption key in your request. Amazon S3 verifies that the encryption key matches, decrypts the object, and returns the object to you. This feature is offered at no additional cost beyond what you pay for using Amazon S3.

3. Server-side encryption using KMS: You can encrypt your data in Amazon S3 by defining an AWS KMS master key within your account that you want to use to encrypt the unique object key (referred to as a data key in Figure 8) that will ultimately encrypt your object. When you upload your object, a request is sent to KMS to create an object key. KMS generates this object key and encrypts it using the master key that you specified earlier; KMS then returns this encrypted object key along with the plaintext object key to Amazon S3. The Amazon S3 web server encrypts your object using the plaintext object key, stores the now encrypted object (with the encrypted object key), and deletes the plaintext object key from memory. To retrieve this encrypted object, Amazon S3 sends the encrypted object key to AWS KMS. AWS KMS decrypts the object key using the correct master key and returns the decrypted (plaintext) object key to S3. With the plaintext object key, S3 decrypts the encrypted object and returns it to you. For pricing of this option, please refer to the AWS Key Management Service pricing page.
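The sketch below shows how these three Amazon S3 server-side encryption options might be selected per request with boto3. Bucket, object, and key names are placeholders, and it assumes current boto3 behavior of accepting the raw customer key bytes for the SSE-C option; verify parameter handling against the boto3 documentation for your SDK version.

```python
# Sketch: selecting each Amazon S3 server-side encryption option per request.
import os
import boto3

s3 = boto3.client("s3")
data = b"example payload"

# Option 1 (SSE-S3): Amazon S3 manages the keys.
s3.put_object(Bucket="my-bucket", Key="obj-sse-s3", Body=data,
              ServerSideEncryption="AES256")

# Option 2 (SSE-C): you supply the 256-bit key with each request; the same key
# must be presented again on get_object to read the object back.
customer_key = os.urandom(32)
s3.put_object(Bucket="my-bucket", Key="obj-sse-c", Body=data,
              SSECustomerAlgorithm="AES256",
              SSECustomerKey=customer_key)

# Option 3 (SSE-KMS): a KMS master key in your account wraps the per-object key.
s3.put_object(Bucket="my-bucket", Key="obj-sse-kms", Body=data,
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-app-key")
```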
Amazon EBS

When creating a volume in Amazon EBS, you can choose to encrypt it using an AWS KMS master key within your account that will encrypt the unique volume key that will ultimately encrypt your EBS volume. After you make your selection, the Amazon EC2 server sends an authenticated request to AWS KMS to create a volume key. AWS KMS generates this volume key, encrypts it using the master key, and returns the plaintext volume key and the encrypted volume key to the Amazon EC2 server. The plaintext volume key is stored in memory to encrypt and decrypt all data going to and from your attached EBS volume. When the encrypted volume (or any encrypted snapshots derived from that volume) needs to be reattached to an instance, a call is made to AWS KMS to decrypt the encrypted volume key. AWS KMS decrypts this encrypted volume key with the correct master key and returns the decrypted volume key to Amazon EC2.

Amazon Glacier

Before it's written to disk, data is always automatically encrypted using 256-bit AES keys unique to the Amazon Glacier service that are stored in separate systems under AWS control. This feature is offered at no additional cost beyond what you pay for using Amazon Glacier.

AWS Storage Gateway

The AWS Storage Gateway transfers your data to AWS over SSL and stores data encrypted at rest in Amazon S3 or Amazon Glacier using their respective server-side encryption schemes.

Amazon EMR

S3DistCp is an Amazon EMR feature that moves large amounts of data from Amazon S3 into HDFS, from HDFS to Amazon S3, and between Amazon S3 buckets. S3DistCp supports the ability to request Amazon S3 to use server-side encryption when it writes EMR data to an Amazon S3 bucket you manage. This feature is offered at no additional cost beyond what you pay for using Amazon S3 to store your Amazon EMR data.

Oracle on Amazon RDS

You can choose to license the Oracle Advanced Security option for Oracle on Amazon RDS to leverage the native Transparent Data Encryption (TDE) and Native Network Encryption (NNE) features. The Oracle encryption module creates data and key-encrypting keys to encrypt the database. The key-encrypting keys specific to your Oracle instance on Amazon RDS are themselves encrypted by a periodically rotated 256-bit AES master key. This master key is unique to the Amazon RDS service and is stored in separate systems under AWS control.

Microsoft SQL Server on Amazon RDS

You can choose to provision Transparent Data Encryption (TDE) for Microsoft SQL Server on Amazon RDS. The SQL Server encryption module creates data and key-encrypting keys to encrypt the database. The key-encrypting keys specific to your SQL Server instance on Amazon RDS are themselves encrypted by a periodically rotated, regional 256-bit AES master key. This master key is unique to the Amazon RDS service and is stored in separate systems under AWS control. This feature is offered at no additional cost beyond what you pay for using Microsoft SQL Server on Amazon RDS.

Amazon Redshift

When creating an Amazon Redshift cluster, you can optionally choose to encrypt all data in user-created tables. There are three options to choose from for server-side encryption of an Amazon Redshift cluster:

1. In the first option, data blocks (including backups) are encrypted using random 256-bit AES keys. These keys are themselves encrypted using a random 256-bit AES database key. This database key is encrypted by a 256-bit AES cluster master key that is unique to your cluster. The cluster master key is encrypted with a periodically rotated regional master key unique to the Amazon Redshift service that is stored in separate systems under AWS control. This feature is offered at no additional cost beyond what you pay for using Amazon Redshift.

2. With the second option, the 256-bit AES cluster master key used to encrypt your database keys is generated in your AWS CloudHSM or by using a SafeNet Luna HSM appliance on premises. This cluster master key is then encrypted by a master key that never leaves your HSM. When the Amazon Redshift cluster starts up, the cluster master key is decrypted in your HSM and used to decrypt the database
key which is sent to the Amazon Redshift hosts to reside only in memory for the life of the cluster If the cluster ever restarts the cluster master key is again retrieved from your HSM —it is never stored on disk in plaintext This option lets you more tightly control the hierarchy and lifecycle of the keys used to encrypt your data This feature is offered at no additional cost beyond what you pay for using Amazon Redshift (and AWS CloudHSM if you choose that option for storing keys) ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 17 of 20 3 In the third option the 256 bit AES cluster master key used to encrypt your database keys is generated in AWS KMS This cluster master key is then encrypted by a master key within AWS KMS When the Amazon Redshift cluster starts up the cluster master key is decrypted in AWS KMS and used to decrypt the database key which is sent to the Amazon Redshift hosts to reside only in memory for the life of the cluster If the cluster ever restarts the cluster master key is again retrieved from the hardened security appliance in AWS KMS— it is never stored on disk in plaintext This option lets you define fine grained controls over the access and usage of your master keys and audit these controls through AWS CloudTrail For pricing of this option please refer to the AWS Key Manageme nt Service pricing page In addition to encrypting data generated within your Amazon Redshift cluster you can also load encrypted data into Amazon Redshift from Amazon S3 that was previously encrypted using the Amazon S3 Encryption Client and keys you provide Amazon Redshift supports the decryption and re encryption of data going between Amazon S3 and Amazon Redshift to protect the full lifecycle of your data These server side encryption features across multiple services in AWS enable you to easily encr ypt your data simply by making a configuration setting in the AWS Management Console or by making a CLI or API request for the given AWS service The authorized use of encryption keys is automatically and securely managed by AWS Because unauthorized ac cess to those keys could lead to the disclosure of your data we have built systems and processes with strong access controls that minimize the chance of unauthorized access and had these systems verified by third party audits to achieve security certifications including SOC 1 2 and 3 PCI DSS and FedRAMP Conclusion We have presented three different models for how encryption keys are managed and where they are used If you take all responsibility for the encryption method and the KMI you can have granu lar control over how your applications encrypt data However that granular control comes at a cost —both in terms of deployment effort and an inability to have AWS services tightly integrate with your applications’ encryption methods As an alternative yo u can choose a managed service that enables easier deployment and tighter integration with AWS cloud services This option offers check box encryption for several services that store your data control over your own keys secured storage for your keys and auditability on all data access attempts Table 1 summarizes the available options for encrypting data at rest across AWS We recommend that you determine which encryption and key management model is most appropriate for your data classifications in the context of the AWS service you are using ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 18 of 20 Encryption Method and KMI Model A Model B 
Model C AWS Service Client Side Solutions Using Customer Managed Keys Client Side Partner Solutions with KMI for Customer Managed Keys Client Side Solutions for Customer Managed Keys in AWS CloudHSM Server Side Encryption Using AWS Managed Keys Amazon S3 Bouncy Castle OpenSSL Amazon S3 encryption client in the AWS SDK for Java SafeNet ProtectApp for Java Custom Amazon VPCEC2 application integrated with AWS CloudHSM client Amazon S3 server side encryption server side encryption with customer provided keys or server side encryption with AWS Key Management Service Amazon Glacier N/A N/A Custom Amazon VPCEC2 application integrated with AWS CloudHSM client All data is automatically encrypted using server side encryption AWS Storage Gateway Linux Block Level: Loop AES dm crypt (with or without LUKS) and TrueCrypt Linux File System: eCryptfs and EncFs Windows Block Level: TrueCrypt Windows File System: BitLocker Trend Micro SecureCloud SafeNet StorageSecure N/A Amazon S3 server side encryption Amazon EBS Linux Block Level: Loop AES dm crypt+LUKS and TrueCrypt Linux File System: eCryptfs and EncFs Windows Block Level: TrueCrypt Windows File Syste m: BitLocker EFS Trend Micro SecureCloud SafeNet ProtectV Custom Amazon VPCEC2 application integrated with AWS CloudHSM client Amazon EBS Encryption with AWS Key Management Service Oracle on Amazon RDS Bouncy Castle OpenSSL CipherCloud Database Gateway and Voltage SecureData Custom Amazon VPCEC2 application integrated with AWS CloudHSM client Transparent Data Encryption (TDE) and Native Network Encryption (NNE) with optional Oracle Advanced Security license TDE for Microsoft SQL Serve r Microsoft SQL Server on Amazon RDS Bouncy Castle OpenSSL CipherCloud Database Gateway and Voltage SecureData Custom Amazon VPCEC2 application integrated with AWS CloudHSM client N/A Amazon Redshift N/A N/A Encrypted Amazon Redshift clusters with your master key managed in AWS CloudHSM or on premises Safenet Luna HSM Encrypted Amazon Redshift clusters with AWS managed master key Amazon EMR eCryptfs Custom Amazon VPCEC2 application integrated with AWS CloudHSM client S3DistCp using Amazon S3 server side encryption to protect persistently stored data ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 19 of 20 Table 1: Summary of data at rest encryption options References and Further Reading • Bouncy Castle Java crypto library http://wwwbouncycastleorg/ • OpenSSL crypto library http://wwwopensslorg/ • CloudBerry Explorer PRO for Amazon S3 encryption http://wwwcloudberrylabcom/amazon s3explorer procloudfront IAMaspx • Client Side Data Encryption with the AWS SDK for Java and Amazon S3 http://awsamazoncom/articles/2850096021478074 • SafeNet encryption products for Amazon S3 Amazon EBS and AWS CloudHSM http://wwwsafenet inccom/ • Trend Micro SecureCloud http://wwwtrendmicrocom/us/enterprise/cloud solutions/secure cloud/indexhtml • CipherCloud for AWS and CipherCloud for Any App http://wwwciphercloudcom/ • Voltage Security SecureData Enterprise http://wwwvoltagecom/products/securedata enterprise/ • AWS CloudHSM https://awsamazoncom/cloudhsm/ • AWS Key Management Service https://awsamazoncom/kms/ • Key Management Service Cryptographic Details White Paper https://d0awsstaticcom/whitepapers/KMS Cryptographic Detailspdf • Amazon EMR S3DistCp to encrypt data in Amazon S3 http://docsawsamazoncom/ElasticMapReduce/latest/DeveloperGuide/UsingEM R_s3distcphtml • Transparent Data Encryption for Oracle on Amazon RDS 
http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AppendixOracleOp tionshtml#AppendixOracleOptionsAdvSecurity ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 20 of 20 • Transparent Data Encryption for Microsoft SQL Server on Amazon RDS http://docsawsamazoncom/AmazonRDS/latest/UserGuide/CHAP_SQLServerh tml#SQLServerConceptsGeneralOptions • Amazon Redshift encryption http://awsamazoncom/redshift/faqs/#0210 • AWS Security Bl og http://blogsawsamazoncom/security Document Revisions November 2013: First Version November 2014: • Introduced section on AWS Key Management Service (KMS) and Amazon EBS in Model C • Updated sections in Model C for Amazon S3 Amazon Redshift
Integrating AWS with Multiprotocol Label Switching December 2016 This paper has been archived For the latest technical content on this subject see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers Archived© 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Why Integrate with AWS? 1 Introduction to MPLS and Managed MPLS Services 2 Overview of AWS Networking Services and Core Technologies 3 Amazon VPC 3 AWS Direct Connect and VPN 3 Internet Gateway 4 Customer Gateway 5 Virtual Private Gateway and Virtual Routing and Forwarding 5 IP Addressing 5 BGP Protocol Overview 6 Autonomous System 6 AWS APN Partners – Direct Connect as a Service 8 Colocation with AWS Direct Connect 9 Benefits 9 Considerations 10 Architecture Scenarios 10 MPLS Architecture Scenarios 14 Scenario 1: MPLS Connectivity over a Single Circuit 14 Scenario 2: Dual MPLS Connectivity to a Single Region 22 Conclusion 28 Contributors 28 Further Reading 28 Notes 29 ArchivedAbstract This whitepaper outlines highavailability architectural best practices for customers who are considering integration between Amazon Virtual Private Cloud (Amazon VPC) in one or more regions with their existing Multiprotocol Label Switching (MPLS) network The whitepaper provides best practices for connecting single and/or multiregional configurations with your MPLS provider It also describes how customers can incorporate VPN backup for each of their remote offices to maintain connectivity to AWS Regions in the event of a network or MPLS outage The target audience of this whitepaper includes technology decision makers network architects and network engineers ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 1 Introduction Many midsized to largesized enterprises leverage Multiprotocol Label Switching (MPLS) services for their Wide Area Network (WAN) connection As cloud adoption increases companies seek ways to integrate AWS with their existing MPLS infrastructure in a costeffective way without redesigning their WAN architecture Companies want a flexible and scalable solution to bridge current onpremises data center workloads and their cloud infrastructure They also want to provide a seamless transition or extension between the cloud and their onpremises data center Why Integrate with AWS? 
There are a number of compelling business reasons to integrate AWS into your existing MPLS infrastructure:  Business continuity One of the benefits of adopting AWS is the ease of building highly available geographically separated workloads By integrating your existing MPLS network you can take advantage of native benefits of the cloud such as global disaster recovery and elastic scalability without losing any of your current architectural implementations standards and best practices  User data availability By keeping data closer to your users your company can improve workload performance customer satisfaction as well as meet regional compliance requirements  Mergers & acquisitions During mergers and acquisitions your company can realize synergies and improvements in IT services very quickly by moving acquired workloads into the AWS Cloud By integrating AWS into MPLS your company has the ability to: o Minimize or avoid costly and serviceimpacting data center expansion projects that can require either the relocation or purchase of technology assets o Migrate workloads into Amazon Virtual Private Cloud (Amazon VPC) to realize financial synergies very quickly while developing longerterm transformational initiatives to finalize the acquisition ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 2 To accomplish this companies can design their network with AWS to do the following:  Enable seamless transition of the acquired remote offices and data centers with AWS by connecting the newly acquired MPLS network to AWS  Simplify the migration of workloads from the acquired data center into an isolated Amazon VPC while maintaining connectivity to existing AWS workloads  Optimize availability and resiliency Enterprise customers who want to maximize availability and performance by using one or more WAN/MPLS solutions are able to continue with the same level of availability by peering with AWS in multiple faultisolated regions This whitepaper highlight s several options you have as a mid tolarge scale enterprise to cost effectively migrate and launch new services in AWS without overhauling and redesigning your current MPLS/WAN architecture Introduction to MPLS and Managed MPLS Services MPLS is an encapsulation protocol used in many service provider and large scale enterprise networks Instead of relying on IP lookups to discover a viable "nexthop" at every single router within a path (as in traditional IP networking) MPLS predetermines the path and uses a label swapping push pop and swap method to direct the traffic to its destination This gives the operator significantly more flexibility and enables users to experience a greater SLA by reducing latency and jitter For a simple overview of MPLS basics see RFC3031 Many service providers offer a managed MPLS solution that can be provisioned as Layer 3 (IPbased) or Layer 2 (single broadcast domain) to provide a logical extension of a customer’s network When referring to MPLS in this document we are referring to the service providers managed MPLS/WAN solution See the following RFCs for an overview on some of the most common MPLS ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 3 solutions:  L3VPN: https://toolsietforg/html/rfc4364 (obsoletes RFC 2547)  L2VPN (BGP): https://toolsietforg/html/rfc6624  Pseudowire (LDP): https://toolsietforg/html/rfc4447 Although AWS does not natively integrate with MPLS as a protocol we provide mechanisms and best practices to connect to your 
currently deployed MPLS/WAN via AWS Direct Connect and VPN.

Overview of AWS Networking Services and Core Technologies

We want to provide a brief overview of the key AWS services and core technologies discussed in this whitepaper. Although we assume you have some familiarity with these AWS networking concepts, we have provided links to more in-depth information.

Amazon VPC

Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated virtual network dedicated to your AWS account.1 Within Amazon VPC you can launch AWS resources and define your IP addressing scheme. This includes your subnet ranges, routing table constructs, network gateways, and security settings. Your VPC is a security boundary within the AWS multi-tenant infrastructure that isolates communication to only the resources that you manage and support.

AWS Direct Connect and VPN

You can connect to your Amazon VPC over the Internet via a VPN connection by using any IPsec/IKE-compliant platform (e.g., routers or firewalls). You can set up a statically routed VPN connection to your firewall or a dynamically routed VPN connection to an on-premises router. To learn more about setting up a VPN connection, see the following resources:

• http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html
• https://www.youtube.com/watch?v=SMvom9QjkPk

Alternatively, you can connect to your Amazon VPC by establishing a direct connection using AWS Direct Connect.2 Direct Connect uses dedicated private network connections between your intranet and Amazon VPC. Direct Connect currently provides 1G and 10G connections natively, and sub-1G connections through Direct Connect Partners. At the heart of Direct Connect is your ability to carve out logical virtual connections within the physical Direct Connect circuit based on the 802.1Q VLAN protocol. Direct Connect leverages virtual LANs (VLANs) to provide network isolation and enable you to create virtual circuits for different types of communication. These logical virtual connections are then associated with virtual interfaces in AWS. You can create up to 50 virtual interfaces across your direct connection; AWS has a soft limit on the number of virtual interfaces you can create.

Using Direct Connect, you can categorize the VLANs that you create as either public virtual interfaces or private virtual interfaces. Public virtual interfaces enable you to connect to AWS services that are accessible via public endpoints, for example Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, and Amazon CloudFront. You can use private virtual interfaces to connect to AWS services that are accessible through private endpoints, for example Amazon Elastic Compute Cloud (Amazon EC2), AWS Storage Gateway, and your Amazon VPC. Each virtual interface needs a VLAN ID, interface IP address, autonomous system number (ASN), and Border Gateway Protocol (BGP) key (a minimal provisioning sketch follows the Internet Gateway description below). To learn more about working with Direct Connect virtual interfaces, see http://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html

Internet Gateway

An Internet gateway (IGW) is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet.3 To use your IGW, you must explicitly specify a route pointing to the IGW in your routing table.
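The following is a hedged sketch of allocating a private virtual interface on an existing Direct Connect connection with boto3, showing where the VLAN ID, ASN, interface addresses, and BGP key come into play. The connection ID, VLAN, ASN, addresses, and virtual private gateway ID are placeholders, not values from this whitepaper.

```python
# Sketch: creating a private virtual interface on an existing Direct Connect
# connection. All identifiers below are illustrative placeholders.
import boto3

dx = boto3.client("directconnect")

vif = dx.create_private_virtual_interface(
    connectionId="dxcon-EXAMPLE",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "mpls-vpc-a",
        "vlan": 101,                       # 802.1Q VLAN ID for this logical circuit
        "asn": 65000,                      # your (private) BGP ASN
        "authKey": "example-bgp-md5-key",  # BGP authentication key
        "amazonAddress": "169.254.255.1/30",
        "customerAddress": "169.254.255.2/30",
        "virtualGatewayId": "vgw-EXAMPLE",
    },
)
print(vif["virtualInterfaceId"], vif["virtualInterfaceState"])
```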
Customer Gateway

A customer gateway (CGW) is the anchor on your side of the connection between your network and your Amazon VPC.4 In an MPLS scenario, the CGW can be a customer edge (CE) device located at a Direct Connect location, or it can be a provider edge (PE) device in an MPLS VPN network. For more information on which option best suits your needs, see the Colocation section later in this document.

Virtual Private Gateway and Virtual Routing and Forwarding

A virtual private gateway (VGW) is the anchor on the AWS side of the connection between your network and your Amazon VPC. This software construct enables you to connect to your Amazon VPCs over an Internet Protocol Security (IPsec) VPN connection or with a direct physical connection. You can connect from the CGW to your Amazon VPC using a VGW. In addition, you can connect from an on-premises router or network to one or more VPCs using a virtual routing and forwarding (VRF) approach.5 VRF is a technology that you can use to virtualize a physical routing device to support multiple virtual routing instances. These virtual routing instances are isolated and independent. AWS recommends that you implement a VRF if you are connecting to multiple VPCs over a direct connection where IP overlapping and duplication may be a concern.

IP Addressing

IP addressing is the bedrock of effective cloud architecture and scalable topologies. Properly addressing your Amazon VPC and your internal network enables you to do the following:

• Define an effective routing policy. An effective routing policy enables you to associate adequate governance around what networks your infrastructure can communicate with, internally and externally. It also enables you to effectively exchange routes between and within domains, systems, and internal and external entities.

• Have a consistent and predictable routing infrastructure. Your network should be predictable and fault tolerant. During an outage or a network interruption, your routing policy ensures that routing changes are resilient and fault tolerant.

• Use resources effectively. By controlling the number of routes exchanged across the boundaries, you prevent data packets from travelling across the entire network before getting dropped. With proper IP addressing, only segments with active hosts are propagated, while networks without a host do not appear in your routing table. This prevents unnecessary data charges when hosts are sending erroneous IP packets to systems that do not exist or that you choose not to communicate with.

• Maintain security. By effectively controlling which networks are advertised to and from your VPC, you can minimize the impact of targeted denial of service attacks on subnets. If these subnets are not defined within your VPC, such attacks originating outside of your VPC will not impact your VPC.

• Define a unique network IP address boundary in your VPC. Amazon VPC supports IP address allocation by subnets, which allows you to segment IP address spaces into defined CIDR ranges between /16 and /28. A benefit of segmentation is that you can sequentially assign hosts into meaningful blocks and segments while conserving your IP address allocations. AWS also supports route summarization, which you can use to aggregate your routes to control the number of routes advertised into your VPC from your internal network. The largest CIDR supported by Amazon VPC is a /16, so you can aggregate your routes up to a /16 when advertising routes to AWS.

BGP Protocol Overview

Autonomous System

An autonomous system (AS) is a set of devices or routers sharing a single routing
policy that run under a single technical administration An example is your VPC or data center or a vendor’s MPLS network Each AS has an identification number (ASN) that is assigned by an Internet Registry or a provider If you do not have an assigned ASN from the Internet Registry you can request one from your circuit provider (who may be able to allocate an ASN) or choose to assign a Private ASN from the following range: 65412 to 65535 ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 7 We recommend that you use Border Gateway Protocol (BGP) as the routing protocol of choice when establishing one or more Direct Connect connections with AWS For more information on why you should use BGP see http://docsawsamazoncom/directconnect/latest/UserGuide/Welcomehtml As an example AWS assigns an AS# of 7224 This AS# defines the autonomous system in which your VPC resides To establish a connection with AWS you have to assign an AS# to your CGW After communication is established between the CGW and the VGW they become external BGP peers and are considered BGP neighbors BGP neighbors exchange their predefined routing table (prefixlist) when the connection is first established and exchange incremental updates based on route changes Establishing neighbor relationships between two different ASNs is considered an External Border Gateway Protocol connection (eBGP) Establishing a connection between devices within the same ASN is considered an Internal Border Gateway Protocol connection (iBGP) BGP uses a TCP transport protocol port 179 to exchange routes between BGP neighbors Exchanging Routes between AWS and CGWs BGP uses ASNs to construct a vector graph of the network topology based on the prefixes exchanged between your CGW and VGW The connection between two ASNs forms a path and the collection of all these paths form a route used to reach a specific destination BGP carries a sequence of ASNs which indicate which routes are transversed To establish a BGP connection the CGW and VGW must be connected directly with each other While BGP supports BGP multihopping natively AWS VGW does not support multihopping All BGP neighbor connections have to terminate on the VGW Without a successful neighbor relationship BGP updates are not exchanged AWS does not support iBGP neighbor relationship between CGW and VGW AWSSupported BGP Metrics and Path Selection Algorithm The VGW receives routing information from all CGWs and uses the BGP best path selection algorithm to calculate the set of preferred paths The rules of that algorithm as it applies to VPC are: ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 8 1 The most specific IP prefix is preferred (for example 10000/24 is preferable to 10000/16) For more information see Route Priority in the Amazon VPC User Guide 6 2 When the prefixes are the same statically configured VPN connections (if they exist) are preferred 3 For matching prefixes where each VPN connection uses BGP the algorithm compares the AS PATH prefixes and the prefix with the shortest AS PATH is preferred Alternatively you can prepend AS_PATH so that the path is less preferred 4 When the AS PATHs are the same length the algorithm compares the path origin s Prefixes with an Interior Gateway Protocol (IGP) origin are preferred to Exterior Gateway Protocol (EGP) origins and EGP origins are preferred to unknown origins 5 When the origins are the same the algorithm compares the router IDs of the advertising routes The lowest router ID is 
preferred 6 When the router IDs are the same the algorithm compares the BGP peer IP addresses The lowest peer IP address is preferred Finally AWS limits the number of routes per BGP session to 100 routes AWS will send a reset and tear down the BGP connection if the number of routes exceeds 100 routes per session AWS APN Partners – Direct Connect as a Service Direct Connect partners in the AWS Partner Network (APN) can help you establish sub1G highspeed connectivity as a service between your network and a Direct Connect location To learn more about how APN partners can help you extend your MPLS infrastructure to a Direct Connect location as a service see https://awsamazoncom/directconnect/partners/ ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 9 Colocation with AWS Direct Connect Colocation with Direct Connect means placing the CGW in the same physical facility as Direct Connect location (https://awsamazoncom/directconnect/partners/) to facilitate a local cross connect between the CGW and AWS devices Establishing network connectivity between your MPLS infrastructure and an AWS colocation center offers you an additional level of flexibility and control at the AWS interconnect If you are interested in establishing a Direct Connect connection in the Direct Connect facility you will need to order a circuit between your MPLS Provider and the Direct Connect colocation facility and connect the circuit to your device A second circuit will then need to be ordered through the AWS Direct Connect console from the CE/CGW to AWS Benefits AWS Direct Connect offers the following benefits:  Traffic separation and isolation You can satisfy compliance requirements that call for data segregation You also have the ability to define a public and private VRF across the same Direct Connect connection and monitor specific data flows for security and billing requirements  Traffic engineering granularity You have greater ability to define and control how data moves in to and out of your AWS environment You can define complex BGP routing rules filter traffic paths move data in to and out of one VPC to another VPC You also have the ability to define which data flows through which VRF This is particularly important if you need to satisfy specific compliance for data intransit  Security and monitoring functionality If you choose to monitor onpremises communication you can span ports or install tools that monitor traffic across a particular VRF You can place firewalls in line to meet internal security requirements You can also control communication by enforcing certain IP addresses to communicate across specific VLANs  Simplified integration of IT and data platforms in mergers and acquisitions In a merger and acquisition (M&A) scenario where both companies have the same MPLS provider you can ask the MPLS ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 10 provider to attach a network tonetwork interface ( NNI ) between the two companies This will enable both companies to have a direct path to Amazon VPCs Your colocation router can serve as a transit to allow for the exchange of routes between the two companies If the companies do not share the same MPLS provider the acquiring company can order an additional circuit from their CGW to the acquired compan y’s MPLS to the colocation router and carve out a VRF for that connection Considerations There are a few business and technology design requirements to consider if you are interested in setting 
up your router in a colocation facility:  Design Requirements: The technical requirements for certain large enterprise customer can be complex A colocation infrastructure can simplify the integration with complex network designs especially if there is a need to manipulate routes or a need to extend a private MPLS network to the CGW  PE/CE Management: Some MPLS providers offer managed Customer Equipment support bundled with their MPLS service offering Taking advantage of this service may reduce operational burden while taking advantage of the discounted bundled pricing that comes with the service Architecture Scenarios Colocation Architecture At a very high level a customer’s colocated CGW sits between the AWS VGW and the MPLS PE The CGW connects to AWS VGW over a cross connection and connects to the customers MPLS provider equipment over a last mile circuit (cross connect that may or may not reside in the same colocation facility) It is possible that the MPLS provider edge (PE) resides in the same direct connect facility In that situation two LOA’s will exist The first between your CGW and AWS and the second between your CGW and your MPLS provider The first LOA can be requested via AWS console and either you or the MPLS provider can request the second LOA via the direct connect facility ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 11 Figure 1 shows a physical colocation topology for single data center connectivity to AWS Figure 1: Single data center connection over MPLS with customermanaged CGW in a colo cation scenario Note: If the MPLS provider is also in the same facility as the direct connect facility then the last mile connection shown in the diagram above will be a cross connection Figure 2 outlines the logical colocation topology for single data center connecti on to AWS In this scenario you establish an eBGP connection between the customer ’s colocat ed router/device and AWS We recommend that the customer also establish an eBGP connectivity from their CGW to the customer ’s MPLS PE Figure 2: Highlevel eBGP topology in a colocation scenario ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 12 Note: If the MPLS provider is also in the same facility as the direct connect facility then the last mile connection shown in the diagram ab ove will be a cross connection NonColocation Topology At a high level there are two possible scenarios for a noncolocation architecture  The first architectural consideration is a scenario where the customers MPLS or circuit provider has facility access to AWS Direct Connect facility You create an LOA request from AWS console and work with your MPLS provider to request the facility cross connection  The secondary architectural consideration is a scenario where are customers MPLS provider does not have facility access and needs to work with one of our Direct Connect partners to extend a circuit from the MPLS PE to the AWS environment For a list of AWS partners please use this link: https://awsamazoncom/directconnect/partners/ The following noncolocation topology diagram shows how the MPLS providers PE is used as the CGW The customer can request their vendor to create the required 8021Q VLAN s on the vendors PE routers Note Some vendor s may c onsider this request a custom configuration so it is worth checking with the provider if this type of setup is supportable ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 13 Figure 3: Single dat a 
center connection over MPLS with vendor PE as CGW in a non-colocation scenario

Note: If the MPLS provider is also in the same facility as the Direct Connect facility, then the last-mile connection shown in the diagram above will be a cross connection.

Similar to the previous colocation BGP design, the customer has to establish eBGP connections. However, this time, instead of peering with a colocated device, the customer can peer directly with the MPLS provider's PE. Figure 4 shows an example of the logical eBGP non-colocation topology.

Figure 4: High-level eBGP connection in a non-colocation scenario

MPLS Architecture Scenarios

The following three scenarios illustrate how you can integrate AWS into an MPLS architecture.

Scenario 1: MPLS Connectivity over a Single Circuit

Architecture Topology

The diagram below shows a high-level architecture of how existing or new MPLS locations can be connected to AWS. In this architecture, customers can achieve any-to-any connectivity between their geographically dispersed office or data center locations and their VPC.

Figure 5: Single MPLS connectivity into Amazon VPC

Physical Topology

The customer decides how much bandwidth is required to connect to their AWS Cloud. Based on your last-mile connectivity requirements, one end of this circuit extends through the MPLS provider's point of presence (POP) to the provider edge (PE) device. The other end of the circuit terminates in a meet-me room or telecom cage located in one of the Direct Connect facilities. The Direct Connect facility will set up a cross connection that extends the circuit to AWS devices.

Figure 6: High-level physical topology between AWS and the MPLS PE

The following are the prerequisites to establish an MPLS connection to AWS (a scripted sketch of steps 2 through 4 appears at the end of this subsection):

1. Create an AWS account if you don't already have one.
2. Create an Amazon VPC. To learn how to set up your VPC, see http://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/getting-started-create-vpc.html
3. Request an AWS Direct Connect connection by selecting the region and your partner of choice: http://docs.aws.amazon.com/directconnect/latest/UserGuide/Colocation.html
4. Once completed, AWS will email you a Letter of Authorization (LOA), which describes the circuit information at the Direct Connect facility.
5. If the MPLS provider has facility access to the AWS Direct Connect facility, they can establish the required cross connection based on the LOA. If the MPLS provider is not already in the Direct Connect facility, a new connection must be built into the facility, or the MPLS provider can utilize a Direct Connect partner (tier 2 extension) to gain facility access.

Once the physical circuit is up, the next step is to establish IP data communication and routing between AWS, the PE device, and the customer's network. Create a virtual interface to begin using your Direct Connect connection. A virtual interface is an 802.1Q Layer 2 VLAN that helps segment and direct the appropriate traffic over the Direct Connect interface. You can create a public virtual interface to connect to public resources or a private virtual interface to connect to resources in your VPC. To learn more about working with virtual interfaces, see http://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html
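As referenced in the prerequisites above, the following is a rough sketch of driving steps 2 through 4 with boto3 instead of the console. The region, facility location code, connection name, and CIDR are placeholders, and the exact API names and response shapes should be verified against the current boto3 documentation for the Direct Connect service.

```python
# Sketch of the provisioning prerequisites driven through boto3: create the VPC
# and virtual private gateway, request the Direct Connect connection, and
# retrieve the LOA document used for the facility cross connect.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
dx = boto3.client("directconnect", region_name="us-east-1")

# Step 2: create the VPC and a virtual private gateway, then attach them.
vpc_id = ec2.create_vpc(CidrBlock="10.10.0.0/16")["Vpc"]["VpcId"]
vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpcId=vpc_id, VpnGatewayId=vgw_id)

# Step 3: request a 1G Direct Connect connection at your chosen facility.
conn = dx.create_connection(location="EqDC2",          # example facility code
                            bandwidth="1Gbps",
                            connectionName="mpls-to-aws")

# Step 4: once provisioned, download the LOA-CFA that the MPLS provider or
# facility uses to run the cross connect.
loa = dx.describe_loa(connectionId=conn["connectionId"],
                      loaContentType="application/pdf")
with open("loa.pdf", "wb") as f:
    f.write(loa["loaContent"])
```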
Work with your MPLS provider to create the corresponding 802.1Q Layer 2 VLAN on the PE. Once the Layer 2 VLAN link is up, the next step is to assign IP addresses and establish BGP connectivity. You can download the IP/BGP configuration information from your AWS Management Console, which can act as a guide for setting up your IP/BGP connection. To learn more about downloading the router configuration, see http://docs.aws.amazon.com/directconnect/latest/UserGuide/getstarted.html#routerconfig

When the BGP communication is established from each location and routes are exchanged, all locations connected to the MPLS network should be able to communicate with the attached VPC on AWS. Make sure to verify any routing policy implemented within the MPLS provider and customer network that could produce undesirable behavior.

Figure 7: Logical 802.1Q VLANs diagram

In the setup in Figure 7, you can create VLANs that connect your MPLS PE device to your Amazon VPC. Each VLAN (represented by different colors) is tagged with a VLAN ID that identifies the logical circuit and isolates traffic from one VLAN to another.

Design Decisions and Criteria

There are a few design considerations you should be aware of:

• Contact your MPLS provider to confirm support for creating 802.1Q VLANs on their MPLS PE, and ask whether they have a VLAN ID preference (if they have multiple circuits utilizing the same physical Direct Connect interface, they may require control of the VLAN ID).
• Validate the number of VPCs you will need to support your business and whether VPC peering will support your inter-VPC communication. For more information about VPC peering, see http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/peering-scenarios.html
• If multiple circuits are using the same physical Direct Connect interface, verify that the interface is configured for the appropriate bandwidth.
• Validate whether your business requirements or existing technology constraints, such as IP overlap, dictate the need to design complex VRF architectures, NAT, or complex inter-VPC routing.
• Validate whether your BGP routing policy requires complex BGP prefix configurations such as community strings, AS-path filtering, etc.

You may have to consider a colocation design if:

• Your MPLS provider is unable to provide 802.1Q VLAN configurations.
• You have a requirement to implement additional complex routing functionality that requires route path manipulation, stripping off AS numbers, or integrating BGP communities with routes you are learning from AWS before injecting them into your routing domain.

See the following section for colocation scenarios.

Exchanging Routes

AWS supports only BGP v4 as the routing protocol of choice between your AWS VGW and CGW. BGP v4 allows you to exchange routes dynamically between the AWS VGW and the customer CGW or MPLS provider edge (PE). There are a few design considerations when setting up your BGP v4 routing with AWS. We will consider two basic topology scenarios.

Scenario 1.1: MPLS PE as CGW – MPLS provider supports VLANs

In this scenario, the customer plans to use the MPLS PE as their CGW. The MPLS provider will be responsible for the following configuration changes on the PE:

• Set up the 802.1Q VLANs required to support the number of VPCs or VLANs that the customer needs across the Direct Connect connection. Each VLAN will be assigned a /31 IP address (larger prefixes are supported if equipment does not support /31).
• Enable a BGP session between AWS and the
MPLS provider’s PE across each VLAN Both the customer and the MPLS provider will have to agree on the BGP AS# to assign to the PE The peering relationship in this scenario will look similar to this: AWS ASN (7224)  eBGP  MPLS PE ASN eBGP Customer ASN Figure 8 shows a simple topology outlining the peering relationship ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 19 Figure 8: BGP peering relationship Note The customer will have to work with the MPLS provider to limit the number of routes advertised to AWS to 100 routes per BGP peer session AWS will tear down the BGP sessions if more than 100 routes are received from the MPLS provider Scenario 12: CE is located in an AWS colocation facility In this scenario the customer plans to deploy a customer managed CGW in the Direct Connect colocation facility for the following reasons: 1 The MPLS provider cannot support multiple VLANs directly on their PE 2 The customer requires control of configuration changes and does not want to be restricted to the MPLS provider’s maintenance windows or other constraints The customer has to maintain strict technology configuration standards of all devices in their domain 3 The customer seeks to achieve the following additional technical objectives: a Ability to remove AWS BGP Community Strings or add BGP community strings before injecting routes into the customers MPLS network ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 20 b Ability to strip BGP AS number and/or inject routes into an IGP to support interVPC routing c A merger and acquisition scenario where the customer will terminate multiple MPLS circuits into their device to facilitate data migration into AWS d The customer plans to integrate each VLAN into its own VRF for compliance reasons or to support a complex routing functionality e The customer requires security demarcation such as a firewall between AWS and the customers MPLS network to meet internal security policies f The customer wants to extend their Private Layer 2 MPLS network to their CGW Colocation Physical Topology The end toend connection between AWS and the MPLS PE can be broken down into the following components as shown in Figure 9 Figure 9: End toend physical and logical connection  VPC to Virtual Private Gateway VGW o This logical construct extends your VPC to the VGW For more information about VGW see http://docsawsamazoncom/AmazonVPC/latest/UserGuide/VPC _VPNhtml  VGW to colocated CGW ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 21 o The connection between the VGW to the colocated CGW is a physical cross connect that connects AWS equipment to the customers colocated CGW The logical connection from your VPC is extended over a Layer 2 VLAN across the cross connect to a port on the CGW  CGW to MPLS PE: o This is the connection between the colocate d CGW and the MPLS PE The customer can order this circuit from their provider of choice After the physical topology is confirmed and tested the next step is to establish BGP connectivity between the following:  AWS and the customer’s CGW  The CGW and the MPLS PE As a best practice AWS recommends the use of VRFs to achieve high agility security and scalability VRFs provide an additional level of isolation across the routing domain to simplify troubleshooting See the article Connecting A Single customers router to Multiple VPC to learn more about how to deploy VRFs Similar to the BGP topology in scenario 11 the customer 
must assign an ASN # for each VRF Each eBGP peering relationship in this scenario will look like the following: VPC  eBGP  CGW  eBGP  MPLS PE eBGP Customer AS# Figure 10 shows a simple topology outlining the peering relationship ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 22 Figure 10: BGP connection over 8021Q VLAN This topology offers the customer the highest level of control and flexibility at the cost of supporting colocated devices AWS recommends a best practice of building a highavailability colocation architecture that supports dual routers dual last mile circuits and dual direct connections In the previous scenario each virtual network interface (VIF) is associated with a single VLAN which in turn is associated with a unique eBGP peering session The colocation router acts as your CGW and exchanges routing updates across each VIF Scenario 2: Dual MPLS Connectivity to a Single Region Architecture Topology This architecture builds upon Scenario 1 and incorporates a highly available redundant connection to AWS The difference between Scenario 1 and Scenario 2 is the additional MPLS circuit in Scenario 2 ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 23 Figure 11: Dual MPLS connection to a single AWS Region This whitepaper will consider two dual connectivity architectures in the way we considered single connectivity architecture The first architectural scenario will focus on the customer leveraging their MPLS Provider PE as their CGW and the second architectural scenario will focus on a colocati on strategy Architectural Scenario 21: MPLS PE as CGW In this scenario the customer plans to have dual connectivity from their MPLS network to AWS in the same region AWS APN partners offer geographically dispersed POP s if you want to have dual last mile connectivity to AWS For example if you are planning to connect to the USEast Region you can connect to a New York Point of Presence (POP) and to a Virginia Point of Presence (POP) as well POP diversity offers the highest level of redundancy resilience and availability from the POP and circuit diversity perspective You can be protected within a region from an MPLS circuit outage and MPLS POP outages Figure 12 depicts dual connectivity from geographically dispersed MPLS POP s to AWS ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 24 Figure 12: Dual physical connection to multiple MPLS POPs Highly Available topology considerations In this scenario you can desig n an active/active or active/passive BGP routing topology Active/Passive An active/passive routing design calls for a routing policy that uses one path as primary and leverages a second path in the event that the primary circuit is down Active/Active An active/active routing design calls for a routing policy that load balances data across both MPLS last mile circuits as they send or receive data from AWS You can influence outbound traffic from AWS by advertising the routes using equal ASPath lengths Likewise AWS advertises routes from AWS equally across both circuits to your MPLS network You can also design your network to support perdestination routing where you send half your routes over one link and the other half over the second link Each link will serve as a redundant path for nonprimary destinations With this approach both circuits are used actively and only if any one of the links fail all traffic flow through the other link In either case the ASPath between the 
MPLS provider and AWS may resemble something like this: ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 25 AWS ASN  eBGP  CGW ASN  eBGP  MPLS AS N Path 1 AWS ASN  eBGP  CGW ASN  eBGP  MPLS AS N Path 2 Figure 13 depicts a possible BGP topology design Figure 13: In region dual connectivity BGP topology An eBGP neighbor relationship is established between AWS and the two CGWs otherwise known as the provider PEs Similar to Scenario 1 you work with your MPLS provider to support 8021Q VLANs on your PE The routing topology can be more granular and can offer additional levels of traffic differentiation based on the design you select You can choose to direct all traffic that f its a specific profile across one physical link while using the secondary link as a failover path Each VPC can be presented with two logical direct connections (a single VGW per VPC) This allows you to load balance traffic from each VPC across each circuit by creating the required VLANs VIFs and establishing two BGP neighbor relationships across each VLAN ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 26 Figure 14: BGP routing topology scenario Connectivity from Two AWS Locations to a single MPLS POP There are a few situations where it can be better to have both customer devices (CGWs) in the same POP:  MPLS providers may not have POPs close to each AWS POP location  You may have a requirement for active/active circuit topology and your application is extremely sensitive to latency differenc es between the circuits originating from different POPs  Due to MPLS POP diversity limitations one of the circuits may require a longhaul connectivity causing packets to arrive at different times which can impact the ability to load balance  Redundant facilities and long haul termination may be cost prohibitive If you are faced with these issues you can still achieve regional diversity by connecting both DX locations to a single MPLS POP Design Decisions and Criteria The difference between an architecture with MPLS POP diversity and one without is geographical diversity However you must still exercise due diligence when setting up both circuits ArchivedAmazon Web Services – Integrating AWS with Multiprotocol Label Switching Page 27 1 Ensure you have end toend circuit diversity from your circuit provider Ensure circuits sharing the same conduit and/or fiber path leaving the facility and throughout the path to the final destination 2 Ensure the circuit does not terminate on the same switch or router to mitigate hardware failure 3 Ensure each device leverages different power source s and Layer 1 infrastructure These design principles are the same regardless of geographical diversity Architectural Scenario 2 2: CGW Colocated in AWS Facility The rationale to colocate are the same as those outlined in Scenario 1 If you decide that colocation is a good approach then you can design a highly available fully redundant architecture to a single region In this scenario the customer can colocate their equipment in AWS facility by either working with an AWS partner who has local facility access or by the customer setting up local facility access in one of our AWS Direct Connect facilities To achieve the higher level of redundancy resilience and scalability the customer can incorporate the following best practice designs:  Dual connection between both CGW s A dual connection between the routers will allow you to accomplish the following: o Create a highly available 
path to each routing device
o Extend each VLAN to each routing device in a highly available manner
 Dual connection from each CGW to two MPLS PEs. This will provide a high level of resilience and redundancy between your CGW and PE. Traffic can be load balanced, and failover capability is provided in the event of a circuit or equipment failure.
Figure 15: Dual circuit to a single MPLS POP BGP topology
Conclusion
AWS offers customers the ability to connect different WAN technologies in a highly reliable, redundant, and scalable way. The goal of AWS is to ensure that customers are not limited by constraints when accessing their resources on AWS.
Contributors
The following individuals and organizations contributed to this document:
 Authors
o Jacob Alao, Solutions Architect
o Justin Davies, Solutions Architect
 Reviewer
o Aarthi Raju, Partner Solutions Architect
Further Reading
For additional information about Layer 3 MPLS technology, see the following:
 http://www.networkworld.com/article/2297171/network-security/mpls-explained.html
 http://www.juniper.net/documentation/en_US/junos12.3/topics/concept/mpls-ex-series-vpn-layer2-layer3.html
For additional information about Layer 2 MPLS technology, see the following:
 http://www.juniper.net/documentation/en_US/junos12.3/topics/concept/mpls-ex-series-vpn-layer2-layer3.html
Notes
1. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html
2. http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
3. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html
4. http://docs.aws.amazon.com/AmazonVPC/latest/NetworkAdminGuide/Introduction.html
5. https://aws.amazon.com/articles/5458758371599914
6. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#route-tables-priority
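Across all of the scenarios above, the building block on the AWS side is the same: one 802.1Q-tagged private virtual interface per VPC, each with its own BGP session. If you script this provisioning instead of using the console, the following minimal sketch (AWS SDK for JavaScript) shows the general shape of the call; the connection ID, VLAN, ASN, peering addresses, and virtual gateway ID are placeholder values, not values taken from this guide.

    // Sketch: create one private virtual interface (802.1Q VLAN + BGP peering)
    // on an existing Direct Connect connection. All identifiers are placeholders.
    var aws = require('aws-sdk');
    var directconnect = new aws.DirectConnect({ region: 'us-east-1' });

    var params = {
      connectionId: 'dxcon-EXAMPLE',              // your Direct Connect connection
      newPrivateVirtualInterface: {
        virtualInterfaceName: 'vpc-a-vif',
        vlan: 101,                                // 802.1Q tag agreed with the MPLS provider
        asn: 65001,                               // BGP ASN of the CGW / provider PE
        authKey: 'bgp-md5-key',                   // optional MD5 authentication key
        amazonAddress: '169.254.255.0/31',        // /31 peering addresses for this VLAN
        customerAddress: '169.254.255.1/31',
        virtualGatewayId: 'vgw-EXAMPLE'           // VGW attached to the target VPC
      }
    };

    directconnect.createPrivateVirtualInterface(params, function (err, data) {
      if (err) {
        console.error('Failed to create virtual interface:', err);
      } else {
        console.log('Virtual interface state:', data.virtualInterfaceState);
      }
    });

Repeating the call with a different VLAN ID and virtual gateway ID yields one logical circuit per VPC, which is the pattern used throughout the scenarios above.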
General
AWS_Operational_Resilience
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsoperational resilience/awsoperationalresiliencehtmlPage 1 Amazon Web Services ’ Approach to Operational Resilience in the Financial Sector & Beyond First published March 2019 Updated April 02 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 2 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Opera tional Resilience in the Financial Sector & Beyond 3 Contents Introduction 5 What does operational resilience mean at AWS? 5 Operational resilience is a shared responsibility 5 How AWS maintains operational resilience and continuity of service 6 Incident management 8 Customers can achieve and test res iliency on AWS 8 Starting with first principles 9 From design principles to implementation 11 Assurance mechanisms 14 Independent thirdparty verification 14 Direct assurance for customers 15 Document revisions 16 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 4 Abstract The purpose of this paper is to describe how Amazon Web Services ( AWS ) and our customers in the financial services industry achieve operational resilience using AWS services The primary audience of this paper is organizations with an interest in how AWS and our financial services customers can operate services in the face o f constant change ranging from minor weather events to cyber issues This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 5 Introduction AWS provides information technology (IT) services and building blocks that all types of businesses public authorities universities and individuals utilize to become more secure innovative and responsive to their own needs and the needs of their customers AWS offers IT services in categories ranging from compute storage database and networking to artificial intelligence and machine learning AWS standardizes its servi ces and 
makes them available to all customers including financial institutions Across the world financial institutions have used AWS services to build their own applications for mobile banking regulatory reporting and market analysis AWS and the finan cial services industry share a common interest in maintaining operational resilience ; for example the ability to provide continuous service despite disruption Continuity of service especially for critical economic functions is a key prerequisite for fi nancial stability AWS recognizes that financial institutions which use AWS services need to comply with sector specific regulatory obligations and internal requirements regarding operational resilience These obligations and requirements are found inte r alia in IT guidelines1 and cyber resilience guidance2 Financial institution customers are able to rely on AWS to provide resilient infrastructure and services while at the same time designing their applications in a manner that meets regulatory and compliance obligations This dual approach to operational resilience is something that we call “shared responsibility” What does operational resilience mean at AWS? Operational resilience is the ability to provide continuous service through people proces ses and technology that are aware of and adaptive to constant change It is a realtime execution oriented norm embedded in the culture of AWS that is distinct from traditional approaches in Business Continuity Disaster Recovery and Crisis Management which rely primarily on centralized hierarchical programs focused on documentation development and maintenance Operational resilience is a shared responsibility AWS is responsible for ensuring that the services used by our customers —the building blocks for their applications —are continuously available as well as ensuring that we are prepared to handle a wide range of events that could affect our infrastructure This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ A pproach to Operational Resilience in the Financial Sector & Beyond 6 In this paper we also explore customers’ responsibility for operational resilience —how customers can design deploy and test their applications on AWS to achieve the availability and resiliency they need including for mission critical applications that require almost no downtime Those kinds of applications require that AWS infrastructur e and services are available when customers need them even upon the occurrence of a disruption As discussed below customers are able to use AWS’s services to design applications that meet this standard and provide a level of security and resilience that we consider is greater than what existing on premises IT environments can offer Finally given the importance of operational resilience to our customers this paper explore s the variety of mechanisms AWS offers to customers to demonstrate assurance3 How AWS maintains operational resilience and continuity of service AWS builds to guard against outages and incidents and accounts for them in the design of AWS services —so when disruptions do occur their impact on customers and the continuity of services is as minimal as possible To avoid single points of failure AWS minimizes interconnectedness within our global infrastructure AWS’s global infrastructure is geographically dispersed over five continents It is composed of 20 geographic Regions which 
are composed of 61 Availability Zones (AZs) which in turn are composed of data centers4 The AZs which are physically separated and independent from each other are also bu ilt with highly redundant networking to withstand local disruptions Regions are isolated from each other meaning that a disruption in one Region does not result in contagion in other Regions Compared to global financial institutions’ on premises environ ments today the locational diversity of AWS’s infrastructure greatly reduces geographic concentration risk We are continuously adding new Regions and AZs and you can view our most current global infrastructure map here: https://awsamazoncom/about aws/global infrastructure At AWS we employ compartmentalization throughout our infrastructure and services We have multiple constructs that provide different levels of independent r edundant components Starting at a high level consider our AWS Regions To minimize interconnectedness AWS deploys a dedicated stack of infrastructure and services to each Region Regions are autonomous and isolated from each other even though we allow customers to replicate data and perform other operations across Regions To allow these cross Region capabilities AWS takes enormous care to ensure that the This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 7 dependencies and calling patterns between Regions are asynchronous and ring fenced with safety mec hanisms For example we have designed Amazon Simple Storage Service (Amazon S3) to allow customers to replicate data from one Region ( for example USEAST 1) to another Region (eg US WEST 1) but at the same time we have designed S3 to operate autonom ously within each Region so that an outage of S3 in US EAST does not result in an S3 outage in US WEST5 The vast majority of services operate entirely within single Regions The very few exceptions to this approach involve services that provide global d elivery such as Amazon Route 53 (an authoritative Domain Name System) whose data plane is designed for 100000% availability As discussed below financial institutions and other customers can architect across both multiple Availability Zones and Regions Availability Zones (AZs) which comprise a Region and are composed of multiple data centers demonstrate further compartmentalization Locating AZs within the same Region allows for data replication that provides redundancy without a substantial impact on latency —an important benefit for financial institutions and other customers who need low latency to run applications At the same time we make sure that AZs are independent in order to ensure services remain available in the event of major incidents AZs have independent physical infrastructure and are distant from each other to mitigate the effects of fires floods and other events Many AWS services run autonomously within AZs; this means that if one AZ within a single Region loses power or connectivi ty the other AZs in the Region are unaffected or in the case of a software error the risk of that error propagating is limited AZ independence allows AWS to build Regional services using multiple AZs that in turn provide high availability to and resiliency for our customers In addition AWS leverages another concept known as cell based architecture Cells are multiple instantiations of a 
service that are isolated from each other; these internal service structures are invisible to customers In a cell based architecture resources and requests are partitioned into cells which are capped in size This design minimizes the chance that a disruption in one cell —for example one subset of customers —would disrupt other cells By reducing the blast radius of a given failure within a service based on cells overall availability increases and continuity of service remains A rough analogy is a set of watertight bulkheads on a ship: enough bulkheads appropriately designed can contain water in case the ship’s h ull is breached and will allow the ship to remain afloat This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 8 Incident management Although the likelihood of such incidents is very low AWS is prepared to manage large scale events that affect our infrastructure and services AWS becomes aware of incidents or degradations in service based on continuous monitoring through metrics and alarms high severity tickets customer reports and the 24x7x365 service and technical support hotlines In case of a significant event an on call engineer convenes a call with p roblem resolvers to analyze the event to determine if additional resolvers should be engaged A call leader drives the group of resolvers to find the approximate root cause to mitigate the event The relevant resolvers will perform the necessary actions to address the event After addressing troubleshooting repair procedures and affected components the call leader will assign follow up documentation and actions and end the call engagement The call leader will declare the recovery phase complete after th e relevant fix activities have been addressed The post mortem and deep root cause analysis of the incident will be assigned to the relevant team Post mortems are convened after any significant operational issue regardless of external impact and Correct ion of Errors (COE) documents are composed such that the root cause is captured and preventative actions may be taken for the future Implementation of the preventative measures is tracked during weekly operations meetings Customers can achieve and test resiliency on AWS AWS believes that financial institutions should ensure that they —and the critical economic functions they perform —are resilient to disruption and failure whatever the cause Prolonged outages or outright failures could ca use loss of trust and confidence in affected financial institutions in addition to causing direct financial losses due to failing to meet obligations AWS builds —and encourages its customers to build —for failure to occur at any time Similarly as the Ba nk of England recognizes “We want firms to plan on the assumption that any part of their infrastructure could be impacted whatever the reason” In the design building and testing of their applications on AWS customers are able to achieve their object ives for operational resilience AWS offers the building blocks for any type of customer from financial institutions to oil and gas companies to government agencies to construct applications that can withstand large scale events In this section This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws 
operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 9 we walk through how financial institution customers can build that type of resilient application on the AWS cloud Starting with first principles AWS field teams composed of technical managers solution architects and security experts help financial institutio n customers build their applications according to customers’ design goals security objectives and other internal and regulatory requirements As reflected in our shared responsibility model customers remain responsible for deciding how to protect their data and systems in the AWS Cloud but we offer workbooks guidance documents and on site consulting to assist in the process Before deploying a mission critical application —whether on the AWS cloud or in another environment —significant financial institu tion customers will go through extensive development and testing For a customer who begins building an application on AWS with high availability and resiliency in mind we recommend that they begin by answering some fundamental questions6 including but not limited to: 1 What problems are you trying to solve? 2 What specific aspects of the application require specific levels of availability? 3 What is the amount of cumulative downtime that this workload can realistically accumulate in one year? 4 What is the actual impact of unavailability? Financial institutions and market utilities perform both critical and non critical types of functions in the financial services sector From deposit taking to loan processing trade execution to securities settlement finan cial entities across the world perform services whose continuity and resiliency are necessary to ensure the public’s trust and confidence in the financial system At the industrywide level for systemically important payment clearing settlement and othe r types of applications central banks and market regulators specify a discrete recovery time objective in the Principles for Financial Market Infrastructures (PFMI) standard: “The [business continuity] plan should incorporate the use of a secondary site a nd should be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events The plan should be designed to enable the FMI to complete settlement by the end of the day of the disrupti on even in case of extreme circumstances”7 Beyond the 2 hour RTO financial regulatory agencies expect regulated entities to be able to meet RTOs and recovery point objectives (RPOs) according to the criticality of This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyo nd 10 their applications beginning with “Ti er 1 application” as the most critical For example regulated entities may classify their RTO and RPOs in the following way: Table 1 — How regulated entities classify RTO and RPO Resiliency requirement Tier 1 app Tier 2 app Tier 3 app Recovery Time Objective 2 Hours < 8 Hours 24 Hours Recovery Point Objective < 30 seconds < 4 Hours 24 Hours Although systemically important financial institutions may have upwards of 8000 to 10000 applications they do not classify all applications according to the same criticality For example disruptions in an 
application for processing mortgage loan requests are undesirable but a financial institution operating such an application may decide that it can tolerate an 8 hour RTO Other types of important but not n ecessarily systemically important workloads include post trade market analysis and customer facing chatbots While the majority of financial entities’ applications are non critical from a systemic perspective disruption of some Tier 1 applications would jeopardize not only the safety and soundness of the affected financial institution but also other financial services entities and possibly the broader economy For example a settlement application may be a Tier 1 application and have an associated RTO of 30 minutes and an RPO of < 30 seconds Such applications are the heart of financial markets and disruptions could cause operational liquidity and even credit risks to crystallize For such applications there is little to virtually no time for humans to make an active decision on how to recover from an outage or failover to a backup data center Recovery would need to be automatic and triggered based on metrics and alarms8 AWS provides guidance to customers on best practices for building highly available resilient applications including through our Well Architected Framework9 For example we recommend that the components comprising an application should be independent and isolated to provide redundancy When changing components or configurati ons in an application customers should make sure that they can roll back any changes to the application if it appears that the changes are not working Monitoring and alarming should be used to track latency error rates and availability for each request for all downstream dependencies and for key operations Data gathered through monitoring should allow for efficient diagnosis of problems10 Best practices for distributed systems This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 11 should be implemented to enable automated recovery Recovery paths should be tested frequently —and most frequently for complex or critical recovery paths For financial institutions it can be difficult to practice these principles in traditional on premises environments many of which reflect decades of consolidation with oth er entities and ad hoc changes in their IT infrastructures On the other hand these principles are what drive the design of AWS’s global infrastructure and services and form the basis of our guidance to customers on how to achieve continuity of service11 Financial institutions using AWS services can take advantage of AWS’s services to improve their resiliency regardless of the state of their existing systems From design principles to implementation Customers have to make many decisions: where to place t heir content where to run their applications and how to achieve higher levels of availability and resiliency For example a financial institution can choose to run its mobile banking application in a single AWS Region to take advantage of multiple AZs Figure 1 Example of Multi AZ Design Let’s take the example of a deployment across 2 AZs to illustrate how AZ independence provides resiliency As shown in Figure 1 the customer deploys its mobile banking application so that its architecture is stable and consistent across AZs ; for example 
the workload in each AZ has sufficient capacity as This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 12 well as stable infrastructure configurations and policies that keep both AZs up to date Elastic Load Balancing routes traffic only to healthy instances and data layer replication allows for fast failover in case a database instance fails in one AZ thus minimizing downtime for the financial institution’s mobile banking customers Compared to AWS’s infrastructure and services traditional on premises environ ments present several obstacles for achieving operational resilience For example let’s assume a significant event shuts down a financial institution’s primary on premises data center The financial institution also has a secondary data center in additio n to its primary data center The capacity of the secondary data center is able to handle only a proportion of the overall workload that would otherwise operate at the primary data center ( for example 11000 servers at the secondary center instead of 120 00 servers at the primary center; network capacity increased 300% at the primary center in the last 4 years but only 250% at the secondary center) and errors in replication mean that the secondary center’s data has not been updated in 36 hours Furthermor e macroeconomic factors have driven transaction volume higher at the primary data center by 15% over the past 6 months As a result the financial institution may find that its secondary data center cannot process current transaction volume within a given time period per its internal and regulatory requirements By using AWS services the financial institution would have been able to increase its capacity at frequent intervals to support increasing transaction volumes as well as track and manage changes t o maintain all of its deployments with the same up todate capacity and architecture In addition customers can maintain additional “cold” infrastructure and backups on AWS that can activate if necessary —at much lower cost than procuring their own physic al infrastructure This is not a hypothetical issue —key regulatory requirements highlight the need for regulated entities to account for capacity needs in adverse scenarios12 On AWS customers can also deploy workloads across AZs located in multiple Regio ns (Figure 2) to achieve both AZ redundancy and Region redundancy Customers that have regulatory or other requirements to store data in multiple Regions or to achieve even greater availability can use a multi Region design In a multi Region set up the customer will need to perform additional engineering to minimize data loss and ensure consistent data between Regions A routing component monitors the health of the customer’s application as well as dependencies This routing layer will also handle automat ic failovers changing the destination when a location is unhealthy and temporarily stopping data replication Traffic will go only to healthy Regions This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financi al Sector & Beyond 13 AWS improves operational resilience compared to traditional on premises 
environments not only for failo ver but also for returning to full resiliency For the financial institution with a secondary data center it may have to perform data backup and restoration over several days Many traditional environments do not feature bidirectional replication result ing in current data at the backup site and “outdated” data in the primary site that makes fast failback difficult to achieve On AWS the financial institution is not “stuck” as it would be in a traditional environment —it can fail forward by quickly launch ing its workload in another location The key point is that AWS’s global infrastructure and services offer financial institutions the capacity and performance to meet aggressive resiliency objectives To achieve assurance about the resiliency of their appl ications we recommend that financial institution customers perform continuous performance load and failure testing; extensively use logging metrics and alarms; maintain runbooks for reporting and performance tracking; and validate their architecture t hrough realistic full scale tests known as “game day” exercises Per the regulatory requirements in their jurisdictions financial institutions may provide evidence of such tests runbooks and exercises to their financial regulatory authorities Figure 2 — Example of multiRegion design This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 14 Assurance mechanisms We are prepared to deliver assurance about AWS’s approach to operational resilience and to help customers achieve assurance about the security and resiliency of their workloads Financial institution s and other customers can gain assurance about the security and resiliency of their workloads on AWS through a variety of means including: reports on AWS’s infrastructure and services prepared by independent third party auditors; services and tools to mo nitor assess and test their AWS environments; and direct experience with AWS through our audit engagement offerings Independent thirdparty verification With our standardized offering and millions of active customers across virtually every business segment and in the public sector we provide assurance about our risk and control environment including how we address operational resilience AWS operates thousands of controls that meet the highest standards in the industry To understand these controls and how we operate them customers can access our System and Organization Control (SOC) 2 Type II report reflecting examination by our independent thirdparty auditor which provides an overview of the AWS Resiliency Program Furthermore an ind ependent third party auditor has validated AWS’s alignment with ISO 27001 standard The International Organization for Standardization (ISO) brings together experts to share knowledge and to develop and publish uniform international standards that support innovation and provide solutions to global challenges In addition to ISO 27001 AWS also aligns with the ISO 27017 guidance on information security in the cloud and ISO 27018 code of practice on protection of personal data in the cloud The basis of thes e standards are the development and implementation of a rigorous security program The Information Security Management System (ISMS) required under the ISO 27001 standard defines how AWS manages security in a 
holistic comprehensive manner and includes num erous control objectives (eg A16 and A17) relevant to operational resilience With a non disclosure agreement in place customers can download these reports and others through AWS Artifact — more than 2 600 security controls standards and requirements in all AWS can provide such reports upon request to regulatory agencies AWS also aligns with the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) Developed originally to apply to critical infrastructure entities the foundational set of security disciplines in the CSF can apply to any organization in any s ector and This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 15 regardless of size The US Financial Services Sector Coordinating Council has developed a Financial Services Sector Specific Cybersecurity Profile (available here) that maps the CSF to a variety of international US federal and US state standards and regulations AWS’s alignment with CSF attested by a third party auditor reflects the suitability of AWS services to enhance the security and resiliency of fina ncial sector entities Direct assurance for customers Customers may also achieve continuous assurance about the resilience of their own workloads Through services and tools available from the AWS management console customers have unprecedented visibility monitoring and remediation capabilities to ensure the security and compliance of their own AWS environments Financial institution customers no longer have to rely on periodic snapshots or quarterly and annual assessments to validate their security and compliance Consider just a few examples of the many ways customers achieve direct assurance about the security and compliance of their AWS resources13 First customers can integrate their auditing controls into a notification and workflow system using AW S services For example in such a system a change in the state of a virtual server from pending to running would result in corrective action logging and as needed notify the appropriate personnel Customers can also integrate their notification and w orkflow system with a machine learning driven cybersecurity service offered by AWS that detects unusual API calls potentially unauthorized deployments and other malicious activity Second customers can also translate discrete regulatory requirements in to customizable managed rules and continuously track configuration changes among their resources; for example if a bank has a requirement that developers cannot launch unencrypted storage volumes the bank can predefine a rule for encryption that would flag the volume for non compliance and automatically remove the volume Finally and third another AWS service allows customers to automatically assess the security of their environment targeting their network file system and process activity and collecti ng a wide set of activity and configuration data This data includes details of communication with AWS services use of secure channels details of the running processes network traffic among the running processes and more —resulting in a list of findings and security problems ordered by severity This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws 
operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilienc e in the Financial Sector & Beyond 16 While these and other services correct for non compliant configurations or security vulnerabilities AWS also recommends that customers test their applications for operational resilience Financial institution cu stomers should test for the transient failures of their applications’ dependencies (including external dependencies) component failures and degraded network communications One major customer has developed open source software that can be a basis for this type of testing To address concerns that malicious actors may access critical functions or processes in customers’ environments customers can also conduct penetration testing of their AWS environments14 Finally AWS’s efforts to provide transparency about our risk and control environment do not stop at our third party audit reports or formal audit engagements Our security and compliance personnel security solution architects engineers a nd field teams engage daily with customers to address their questions and concerns Such interaction may be a phone call with the financial institution’s security team an executive meeting with a customer’s Chief Information Security Officer and Chief Information Officer a briefing on AWS’s premises — and countless other ways Customers drive our overall infrastructure and service roadmap and meeting and exceeding their security and resiliency needs is our number one objective Document revisions Date Description April 02 2021 Reviewed for technical accuracy March 2019 First publication Notes 1 US Federal Financial Institution Examination Council (FFIEC) IT Handbook; see https://ithandbookffiecgov 2 Committee on Payments and Market Infrastructures and Board of the International Organization of Securities Commissions (CPMI IOSCO) Guidance on cyber resilience This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 17 for financial market infrastructures (June 2016); see https://wwwbisorg/cpmi/publ/ d146pdf 3 This paper reflects only an overview of our ongoing efforts to ensure our customers can use AWS services safely To complement our concept of shared responsibility we are also dedicated to excee ding customer and regulatory expectations To that end AWS technical teams security architects and compliance experts assist financial institutions customers in meeting regulatory and internal requirements including by actively demonstrating their secu rity and resiliency through continuous monitoring remediation and testing AWS continuously engages with financial regulators around the world to explain how AWS’s infrastructure and services enable all sizes and types of financial institutions —from fintech startups to stock exchanges —to improve their security and resiliency compared to on premises environments We always want to receive feedback from customers and their regulators about AWS’s approach and their experience 4 You ca n take a virtual tour of an AWS data center here: https://awsamazoncom/compliance/data center 5 As evidenced by the Amazon S3 service disruption of February 28 2017 which occurred in the Northern Virginia (US EAST 1) Region but not in other Regions See “Summary of the Amazon S3 Service Disruption in the 
Northern Virginia (US EAST 1) Region” https://awsamazoncom/message/41926/ 6 We recommend that customers review the Cloud Adoption Framework to develop efficient and effectiv e adoption plans See Reliability Pillar AWS Well Architected Framework 7 Key Consideration 176 of PFMI available at https://wwwbisorg/cpmi/publ/d101apdf 8 Customers can enable automatic recovery using a variety of AWS services including Amazon Cl oudWatch metrics Amazon CloudWatch Events and AWS Lambda See also the following AWS re:Invent presentati on “Disaster Recovery and Business Continuity for Financial Institutions ” for additional information on applicable AWS services and example architecture: https://wwwyoutubecom/watch?v=Xa xTwhP 1UU 9 See https://awsamazoncom/architecture/well architected 10 A variety of AWS services support these practices; for examples see pp 26 28 at https://d0awsstaticcom/whitepapers/ architecture/AWS Reliability Pillarpdf This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 18 11 For a comprehensive overview of our guidance to customers see the “Reliability Pillar” whitepaper (September 2018) at https:// d0awsstaticcom/whitepapers/archit ecture/AWS Reliability Pillarpdf 12 See for example US Securities and Exchange Commission (SEC) Regulation Systems Compliance and Integrity 17 CFR § 240 242 & 249; see also adopting release: https://wwwsecgov/rules/final/2014/34 73639pdf See also FFIEC Business Continuity Planning IT Examination Handbook (February 2015) available at https://ithandbookffiecgov/media/274725/ffiec_itbooklet_businesscontinuityplanningp df 13 The AWS services discussed in this section include: Amazon CloudWatch Events AWS Config Amazon GuardDuty AWS Config Rules and Amazon Inspector 14 For example in the United Kingdom the Bank of England has developed the CBEST framework for testing financial firms’ cyber resilience Accredited penetration test companies attempt to access critical assets within the target firm An accredited threat intelligence company provides threat intelligence and provides guidance how the penetration testers can attack the firm Financial institution customers subject to the CBEST framework and planning to have a penetration test conducted on their AWS resources n eed to notify AWS by submitting a request (at https://awsamazoncom/security/penetration testing ) because such activity is indistinguishable from prohibited security violations and netwo rk abuse
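For the Tier 1 applications discussed above, where there is little to no time for a human decision, recovery has to be triggered from metrics and alarms. The following minimal sketch (AWS SDK for JavaScript) shows one common starting point: a CloudWatch alarm on an application latency metric whose action notifies an SNS topic that fronts your failover automation. The namespace, metric, threshold, and topic ARN are illustrative placeholders, not values from this paper.

    // Sketch: alarm on latency for a critical application and notify an SNS
    // topic that triggers the failover runbook. All names and ARNs are placeholders.
    var aws = require('aws-sdk');
    var cloudwatch = new aws.CloudWatch({ region: 'us-east-1' });

    var params = {
      AlarmName: 'settlement-api-latency-high',
      AlarmDescription: 'Trigger automated failover checks when latency degrades',
      Namespace: 'MyBank/SettlementApp',           // custom application namespace
      MetricName: 'RequestLatency',
      Dimensions: [{ Name: 'Environment', Value: 'production' }],
      Statistic: 'Average',
      Period: 60,                                   // evaluate every minute
      EvaluationPeriods: 3,                         // three consecutive breaches
      Threshold: 250,                               // milliseconds
      ComparisonOperator: 'GreaterThanThreshold',
      Unit: 'Milliseconds',
      ActionsEnabled: true,
      AlarmActions: ['arn:aws:sns:us-east-1:111122223333:failover-runbook']
    };

    cloudwatch.putMetricAlarm(params, function (err) {
      if (err) {
        console.error('Failed to create alarm:', err);
      } else {
        console.log('Alarm created:', params.AlarmName);
      }
    });

The same pattern extends to the CloudWatch Events and AWS Lambda services referenced in note 8; for example, an event rule on the alarm state change can invoke a function that performs the failover itself.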
General
Use_AWS_Config_to_Monitor_License_Compliance_on_Amazon_EC2_Dedicated_Hosts
ArchivedUse AWS Config to M onitor License Compliance on Ama zon EC2 Dedicated Hosts April 2016 This paper has been archived For the latest technical guidance about Amazon EC2 see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 2 of 16 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 3 of 16 Contents Abstract 4 Introduction 4 Setting Up AWS Config to Track Dedicated Hosts and EC2 Instances 5 Creating a Custom Rule to Check that Launched Instances Are on a Dedicated Host 7 Addressing Other Bring Your Own License (BYOL) Compliance Requirements with AWS Config Rules 15 Conclusion 15 Contributors 16 Further Reading 16 ArchivedAmazon Web Services – Use AWS Config to Monitor License Compliance on EC2 Dedicated Hosts April 2016 Page 4 of 16 Abstract Amazon Elastic Compute Cloud (EC2) Dedicated Hosts can help enterprises reduce costs by allowing the use of existing serverbound licenses Many customers can also use Dedicated Hosts to address corporate compliance and regulatory requirements Oftentimes customers using Dedicated Hosts want to continuously record and evaluate changes to their infrastructure to stay compliant with license terms and regulatory requirements This paper outlines the ways in which you can leverage AWS Config and AWS Config Rules to monitor license compliance on Amazon EC2 Dedicated Hosts Introduction This paper discusses how you can set up AWS Config to record configuration changes to Amazon EC2 Dedicated Hosts and EC2 instances in order to ascertain your licensing compliance posture Y ou’ll learn how t o create AWS Config Rules to govern the way your serverbound licenses are used on Amazon Web Services (AWS) We’ll create a sample rule that checks whether all instances in an account created from an Amazon Machine Image (AMI) called MyWindowsImage are launched onto a specific Dedicated H ost We’ll also describe other checks that can be employed to monitor compliance with common licensing restrictions and to govern your Dedicated Host resources An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated for your use You get complete visibility into the number of sockets and physical cores that support your instances on a Dedicated Host Dedicated Hosts allow you to place your instances on a specific physical server This level of visibility and control in turn allows you to use your existing per socket percore or pervirtual machine ( VM) software licenses (eg Microsoft Windows Server) to save costs and 
meet compliance and regulatory requirements.
To track the history of instances that are launched, stopped, or terminated on a Dedicated Host, you can use AWS Config. AWS Config pairs this information with host- and instance-level information relevant to software licensing, such as the host ID, AMI IDs, and the number of sockets and physical cores per host. You can then use this data to verify usage against your licensing metrics. You can use AWS Config Rules to choose from a set of prebuilt rules based on common AWS best practices or to define custom rules. You can set up rules that check the validity of changes made to resources tracked by AWS Config against policies and guidelines defined by you. You can set these AWS Config Rules to evaluate each change to the configuration of a resource, or you can execute them at a set frequency. You can also author your own custom rules by creating AWS Lambda functions in any supported language.
Setting Up AWS Config to Track Dedicated Hosts and EC2 Instances
Open the AWS Management Console and go to the EC2 console. On the EC2 Dedicated Hosts page, notice the Edit Config Recording button at the top. The icon in red indicates that AWS Config is not currently set up to record configuration changes to Dedicated Hosts and instances.
Figure 1: Edit Config Recording Button with the Red Icon on the Dedicated Host Console
Getting started with AWS Config is simple. Click the Edit Config Recording button to open the AWS Config settings page. On this page, check Record all resources supported in this region.
Figure 2: Selecting Resource Types to Record on the AWS Config Settings Page
You can choose to enable recording only for Dedicated Hosts and instances by selecting these resources in Specific types. If you are setting up AWS Config for the first time, you must specify an Amazon S3 bucket into which AWS Config can deliver configuration history and snapshot files. Optionally, you can also provide an Amazon Simple Notification Service (SNS) topic to which change and compliance notifications will be delivered. Finally, you'll be asked to grant appropriate permissions to AWS Config and save the settings. For more details on setting up AWS Config using the AWS Management Console or the CLI, see the Getting Started with AWS Config documentation.
After the AWS Config setup is complete, you'll notice that the icon on the EC2 console page for Dedicated Hosts has turned green. This indicates that AWS Config is recording configuration changes to all EC2 instances and Dedicated Hosts.
Figure 3: The Edit Config Recording Button with Green Icon
Creating a Custom Rule to Check that Launched Instances Are on a Dedicated Host
Now that you have set up AWS Config to start recording configuration changes to Dedicated Hosts and EC2 instances, you can start writing rules to evaluate the license compliance state of all instances in the account. To get started, you will write a rule that checks whether all instances launched from the MyWindowsImage AMI are placed onto a specific Dedicated Host. For this sample, assume that MyWindowsImage is the name of an AMI you have imported and is the machine image of a Microsoft Windows Server license you own. Before creating the rule, first inspect the instances and Dedicated Hosts in your account.
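If you prefer to perform this inspection programmatically rather than through the console pages described next, a minimal sketch using the AWS SDK for JavaScript follows. It assumes only that the recorder you configured above is already recording the AWS::EC2::Host and AWS::EC2::Instance resource types; the region and output handling are illustrative.

    // Sketch: list the Dedicated Hosts that AWS Config has discovered and pull
    // the recent configuration history for the first one.
    var aws = require('aws-sdk');
    var config = new aws.ConfigService({ region: 'us-east-1' });

    config.listDiscoveredResources({ resourceType: 'AWS::EC2::Host' }, function (err, data) {
      if (err) { return console.error(err); }
      data.resourceIdentifiers.forEach(function (host) {
        console.log('Dedicated Host tracked by AWS Config:', host.resourceId);
      });

      if (data.resourceIdentifiers.length > 0) {
        var historyParams = {
          resourceType: 'AWS::EC2::Host',
          resourceId: data.resourceIdentifiers[0].resourceId,
          limit: 5                                  // last five configuration items
        };
        config.getResourceConfigHistory(historyParams, function (histErr, history) {
          if (histErr) { return console.error(histErr); }
          history.configurationItems.forEach(function (item) {
            console.log(item.configurationItemCaptureTime, item.configurationItemStatus);
          });
        });
      }
    });

The same two calls work against AWS::EC2::Instance if you want to script the instance-side review as well.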
Creating a Custom Rule to Check that Launched Instances Are on a Dedicated Host
Now that you have set up AWS Config to record configuration changes to Dedicated Hosts and EC2 instances, you can start writing rules to evaluate the license compliance state of all instances in the account. To get started, you will write a rule that checks whether all instances launched from the MyWindowsImage AMI are placed onto a specific Dedicated Host. For this sample, assume that MyWindowsImage is the name of an AMI you have imported and is the machine image for a Microsoft Windows Server license you own.
Before creating the rule, first inspect the instances and Dedicated Hosts in your account: look up the EC2 Instance and EC2 Host resource types. In Figure 4, you can see one Dedicated Host and a number of instances.
Figure 4: Review the Resource Inventory
Click the icon for the Dedicated Host to go to the Config timeline and see the configuration of the Dedicated Host, including the sockets, cores, total vCPUs, and available vCPUs. You can also see all the instances that are currently running on the host. Traversing the timeline provides all historical configurations of the Dedicated Host, including the instances that were launched onto it in the past. You can also look into the Config timeline of each of those instances.
Figure 5: The Config Resource Configuration History Timeline
Next, you will set up the new rule in AWS Config and write the AWS Lambda function for the rule. To do this, click Add rule in the AWS Config console, and then click Create AWS Lambda function to set up the function you want to execute.
Figure 6: AWS Config Rule Creation Page
On the Lambda console, select the config-rule-change-triggered blueprint to get started.
Figure 7: The Lambda Select Blueprint Page
You can annotate compliance states. To do this, first add a global variable called annotation:

var aws = require('aws-sdk');
var config = new aws.ConfigService();
var annotation;

You also need to modify the evaluateCompliance function and the handler invoked by AWS Lambda. The rest of the blueprint code can be left untouched.

function evaluateCompliance(configurationItem, ruleParameters, context) {
    checkDefined(configurationItem, "configurationItem");
    checkDefined(configurationItem.configuration, "configurationItem.configuration");
    checkDefined(ruleParameters, "ruleParameters");
    if ('AWS::EC2::Instance' !== configurationItem.resourceType) {
        return 'NOT_APPLICABLE';
    }
    if (ruleParameters.imageId === configurationItem.configuration.imageId
        && ruleParameters.hostId !== configurationItem.configuration.placement.hostId) {
        annotation = "Instance " + configurationItem.configuration.instanceId +
            " launched from BYOL AMI " + configurationItem.configuration.imageId +
            " has not been placed on dedicated host " + ruleParameters.hostId;
        return 'NON_COMPLIANT';
    } else {
        return 'COMPLIANT';
    }
}

For this example function, imageId and hostId are parameters that are passed to the function by the AWS Config rule that will be created next. The imageId parameter contains the AMI ID of MyWindowsImage; use it to identify instances that are launched from this image. After you detect that an instance was launched from MyWindowsImage, you can then check whether the instance was launched onto the specified Dedicated Host identified by the hostId parameter. The instance is marked noncompliant if it is found not to be running on the host on which all instances launched from MyWindowsImage should be running.
You can annotate the compliance state of a resource with additional information indicating why the resource was marked noncompliant. This sample describes why the instance was marked noncompliant and assigns that text to the annotation global variable.
Finally, changes are made to the handler to pass on the annotation along with the rest of the compliance information:

putEvaluationsRequest.Evaluations = [
    {
        ComplianceResourceType: configurationItem.resourceType,
        ComplianceResourceId: configurationItem.resourceId,
        ComplianceType: compliance,
        OrderingTimestamp: configurationItem.configurationItemCaptureTime,
        Annotation: annotation
    }
];

After the changes are made to the AWS Lambda function, select the appropriate role and save the function. In our example, we also noted the Amazon Resource Name (ARN) of the function. After the function is created, go back to the AWS Config console and enter the ARN of the function that was just created.
Figure 8: Entering the AWS Lambda Function ARN on the AWS Config Rule Creation Page
After specifying the appropriate settings for the rule, save it. The rule is evaluated once immediately after it is created, and thereafter for any changes that are made to EC2 instances. In this example, two instances were launched from MyWindowsImage, of which only one was launched onto the specified Dedicated Host. The AWS Config rule marks the other instance noncompliant.
Figure 9: Instance Marked as Noncompliant
The Compliant or Noncompliant state for each rule is also sent as a notification via the Amazon SNS topic you created when you set up AWS Config. You can configure these notifications to send an email, trigger a corrective action, or log a ticket. The Amazon SNS notification contains details about the change in compliance state, including the annotation that explains the reason for noncompliance:

View the Timeline for this Resource in AWS Config Management Console:
https://console.aws.amazon.com/config/home?region=us-east-1#/timeline/AWS::EC2::Instance/i-a46d7125?time=2016-01-28T02:02:35.606Z

New Compliance Change Record:
{
  "awsAccountId": "434817024337",
  "configRuleName": "restrictedAMI",
  "configRuleARN": "arn:aws:config:us-east-1:434817024337:config-rule/config-rule-hz8yxz",
  "resourceType": "AWS::EC2::Instance",
  "resourceId": "i-a46d7125",
  "awsRegion": "us-east-1",
  "newEvaluationResult": {
    "evaluationResultIdentifier": {
      "evaluationResultQualifier": {
        "configRuleName": "restrictedAMI",
        "resourceType": "AWS::EC2::Instance",
        "resourceId": "i-a46d7125"
      },
      "orderingTimestamp": "2016-01-28T02:02:35.606Z"
    },
    "complianceType": "NON_COMPLIANT",
    "resultRecordedTime": "2016-01-28T02:02:41.417Z",
    "configRuleInvokedTime": "2016-01-28T02:02:40.396Z",
    "annotation": "Instance i-a46d7125 launched from BYOL AMI ami-60b6c60a has not been placed on dedicated host h-086f4a5066fb7b991",
    "resultToken": null
  },
  "oldEvaluationResult": {
    "evaluationResultIdentifier": {
      "evaluationResultQualifier": {
        "configRuleName": "restrictedAMI",
        "resourceType": "AWS::EC2::Instance",
        "resourceId": "i-a46d7125"
      },
      "orderingTimestamp": "2016-01-28T01:44:54.553Z"
    },
    "complianceType": "COMPLIANT",
    "resultRecordedTime": "2016-01-28T01:45:03.438Z",
    "configRuleInvokedTime": "2016-01-28T01:45:01.298Z",
    "annotation": null,
    "resultToken": null
  },
  "notificationCreationTime": "2016-01-28T02:02:42.317Z",
  "messageType": "ComplianceChangeNotification",
  "recordVersion": "1.0"
}
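The rule above was created through the console, but it can also be registered programmatically. The following Node.js sketch is a minimal illustration: the Lambda ARN and account ID are placeholders, the AMI and host IDs reuse the sample values from this walkthrough, and it assumes AWS Config has already been granted permission to invoke the function (for example, via lambda.addPermission with the config.amazonaws.com principal).

// Hedged sketch: register the custom rule shown above with the AWS SDK for JavaScript.
// The function ARN below is a placeholder for illustration only.
const AWS = require('aws-sdk');
const configservice = new AWS.ConfigService();

const params = {
  ConfigRule: {
    ConfigRuleName: 'restrictedAMI',
    Description: 'Checks that instances launched from the BYOL AMI run on the licensed Dedicated Host',
    Scope: {
      ComplianceResourceTypes: ['AWS::EC2::Instance'] // evaluate EC2 instances only
    },
    Source: {
      Owner: 'CUSTOM_LAMBDA',
      SourceIdentifier: 'arn:aws:lambda:us-east-1:111122223333:function:restrictedAMI', // placeholder ARN
      SourceDetails: [
        {
          EventSource: 'aws.config',
          MessageType: 'ConfigurationItemChangeNotification' // run on every configuration change
        }
      ]
    },
    // These values reach the Lambda function as ruleParameters.imageId / ruleParameters.hostId.
    InputParameters: JSON.stringify({
      imageId: 'ami-60b6c60a',        // the MyWindowsImage AMI
      hostId: 'h-086f4a5066fb7b991'   // the licensed Dedicated Host
    })
  }
};

configservice.putConfigRule(params).promise()
  .then(() => console.log('Config rule created'))
  .catch(console.error);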
Addressing Other Bring Your Own License (BYOL) Compliance Requirements with AWS Config Rules
The AWS Config rule created in the example above checks one of several compliance requirements you may have associated with BYOL server-bound licenses. This rule can be further extended to check other license-specific restrictions, such as the following:
• Host affinity of the instances
• Number of sockets or number of cores of the Dedicated Host onto which the instances are launched
• Duration for which an instance needs to remain on a specified Dedicated Host
In addition, you can monitor the utilization of the Dedicated Hosts you own and mark them noncompliant if their usage drops below a threshold. This can help you optimize your fleet of Dedicated Hosts.
Conclusion
In this paper, you learned how to use AWS Config in conjunction with AWS Config Rules to ascertain your license compliance posture on Amazon EC2 Dedicated Hosts. AWS Config can be used more broadly to monitor and govern all your resources. For more information, see Further Reading below.
Contributors
The following individuals and organizations contributed to this document:
• Chayan Biswas, Senior Product Manager, AWS Config
Further Reading
For additional help, please consult the following sources:
• Documentation on what AWS Config supports: Supported Resources, Configuration Items, and Relationships
• Blog post: How to Record and Govern your IAM Resource Configurations Using AWS Config
• AWS Config product page: AWS Config
General
Tagging_Best_Practices_Implement_an_Effective_AWS_Resource_Tagging_Strategy
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlTagging Best Practices Implement an Effective AWS Resource Tagging Strategy December 2018 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtml © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assuranc es from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtml Content s Introduction: Tagging Use Cases 1 Tags for AWS Console Organization and Resource Groups 1 Tags for Cost Allocation 1 Tags for Automation 1 Tags for Operations Support 2 Tags for Access Control 2 Tags f or Security Risk Management 2 Best Practices for Identifying Tag Requirements 2 Employ a Cross Functional Team to Identify Tag Requirements 2 Use Tags Consistently 3 Assign Owners to Define Tag Value Propositions 3 Focus on Required and Conditionally Required Tags 3 Start Small; Less is More 4 Best Practices for Naming Tags and Resources 4 Adopt a Standardized Approach for Tag Names 4 Standardize Names for AWS Resources 5 EC2 Instances 6 Other AWS Resour ce Types 6 Best Practices for Cost Allocation Tags 7 Align Cost Allocation Tags with Financial Reporting Dimensions 7 Use Both Linked Accounts and Cost Allocation Tags 8 Avoid Multi Valued Cost Allocation Tags 9 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtml Tag Everything 9 Best Practices for Tag Governance and Data Management 9 Integrate with Authoritative Data Sources 9 Use Compound Tag Values Judiciously 10 Use Automation to Proactively Tag Resources 12 Constrain Tag Values with AWS Service Catalog 12 Propagate Tag Values Across Related Resources 13 Lock Down Tags Used for Access Control 13 Remediate Untagged Resources 14 Implement a Tag Governance Process 14 Conclusion 15 Contributors 15 References 15 Tagging Use Cases 15 Align Tags with Financial Reporting Dimensions 16 Use Both Linked Accounts and Cost Allocation Tags 16 Tag Everything 16 Integrate with Authoritative Data Sources 16 Use Compound Tag Values Judiciously 16 Use Automation to Proactively Tag Resources 17 Constrain Tag Values with AWS Service Catalog 17 Propagate Tag Values Across Related Resources 17 Lock Down Tags Used for Access Control 17 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest 
practiceshtml Remediate Untagged Resources 17 Document Revisions 18 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtml Abstract Amazon Web Services allows customers to assign metadata to their AWS resources in the form of tags Each tag is a simple label consisting of a customer defined key and an optional value that can make it easier to manage search for and filter resources Although there are no inherent types of tags they enable customers to categorize resources by purpose owner environment or other criteria Without the use of tags it can become diff icult to manage your resources effectively as your utilization of AWS services grows However it is not always evident how to determine what tags to use and for which types of resources The goal of this whitepaper is to help you develop a tagging strategy that enables you to manage your AWS resources more effectively This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 1 Introduction: Tagging Use Cases Amazon Web Services allows customers to assign metadata to their AWS resources in the form of tags Each tag is a simple label consisting of a customer defined key and an optional value that can make it easier to manage se arch for and filter resources by purpose owner environment or other criteria AWS tags can be used for many purposes Tags for AWS Console Organization and Resource Groups Tags are a great way to organize AWS resources in the AWS Management Console You can configure tags to be displayed with resources and can search and filter by tag By default the AWS Management Console is organized by AWS service However the Resource Groups tool allows customers to create a custom console that organizes and consolidates AWS resources based on one or more tags or portions of tags Using this tool customers can c onsolidate and view data for applications that consist of multipl e services and resources in one place Tags for Cost Allocation AWS Cost Explorer and Cost and Usage Report support the ability to break down AWS costs by tag Typically customers use bu siness tags such as cost center business unit or project to associate AWS costs with traditional financial reporting dimensions within their organization However a cost allocation report can include any tag This allows customers to easily associate costs with technical or security dimensions such as specific applications environments or compliance programs Table 1 shows a partial cost allocation report Table 1: Partial cost allocation report Tags for Automation Resource or service specific tags are often used to filter resources during infrastructure automation activities Tags can be used to opt in to or out of automated tasks or to identify This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 2 specific versions of resources to archive update or delete For examp le many customers run automated start/stop scripts that turn off development environments during non business hours to reduce costs In this scenario Amazon Elastic Compute Cloud (Amazon EC2) instance tags are a simple way to identify the specific develo pment inst ances to opt into or out 
of this process Tags for Operations Support Tags can be used to integrate support for AWS resources into day today operations including IT Service Management (ITSM) processes such as Incident Management For example Le vel 1 support teams could use tags to direct workflow and perform business service mapping as part of the triage process when a monitoring system triggers an alarm Many customers also use tags to support processes such as backup/restore and operating syst em patching Tags for Access Control AWS Identity and Access Management ( IAM) policies support tag based conditions enabling customers to constrain permissions based on specific tags and their values For example IAM user or role permissions can include conditions to limit access to specific environments ( for example development test or production) or Amazon Virtual Private Cloud (Amazon VPC) networks based on their tags Tags for Security Risk Management Tags can be assigned to identify resources that require heightened security risk management practices for example Amazon EC2 instance s hosting applications that process sensitive or confidential data This can enable automated compliance checks to ensure that proper access controls are in place patc h compliance is up to date and so on The sections that fol low identify recommended best practices for developing a comprehensive tagging strategy Best Practices for Identifying Tag Requirements Employ a Cross Functional Team t o Identify Tag Requirements As noted in the introduction tags can be u sed for a variet y of purposes In order to develop a comprehensive strategy it’s best to assemble a cross functional team to identify tagging This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 3 requirements Tag stakeholders in an organization typically include IT Finance Information Security application owners cloud a utomation teams middleware and database administration teams and process owners for functions such as patching backup/restore monitoring job scheduling and disaster re covery Rather than meeting with each of these functional areas separately to ident ify their tagging needs conduct tagging requirements workshops with representation from all stakeholder groups so that each can hear the perspectives of the others and integrate their requirements more effectively into the overall strategy Use Tags Cons istently It’s important to employ a consistent approach in tagging your AWS resources If you intend to use tags for specific use cases as illustrated by the examples in the introduction you will need to rely on the consistent use of tags and tag values For example if a significant portion of your AWS resources are missing tags used for cost allocation your cost analysis and reporting process will be more complicated and time consuming and probably less accurate Likewise if resources are missing a t ag that identifies the presence of sensitive data you may have to assume that all such resources contain sensitive data as a precautionary measure A consistent approach is warranted even for tags identified as optional For example if you employ an opt in approach for automatically stopping development environments during non working hours identify a single tag for this purpose rather than allowing different teams or departments to use their own ; resulting in many diffe rent tags all serving the same purpose Assign 
Owners to Define Tag Value Propositions Consider tags from a cost/benefit perspective when deciding on a list of required tags While AWS does not charge a fee for the use of tags there may be indirect costs (for example the labor needed to assign and maintain correct tag values for each relevant AWS resource ) To ensure tags are useful i dentify an owner for each one The tag owner has the responsibility to clearly articulate its value proposition Having tag owners may help avoid unnecessary costs related to maintaining tags that are not used Focus on Required and Conditionally Required Tags Tags can be required conditionally required or optional Conditionally required tags are only mandatory under certai n circumstances (for example if an application processes sensitive data This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 4 you may require a tag to identify the corresponding data classification such as Personally Identifiable Information or Protected Health Information ) When identifying tagging requirements focus on required an d conditionally required tags Allow for optional tags as long as they conform to your tag naming and governance policies t o empower your organization to define new tags for unforeseen or bespoke application requ irements Start Small ; Less is More Tagging decisions are reversible giving you the flexibility to edit or change as needed in the future However there is one exception —cost allocation tags —which are included in AWS monthly cost allocation reports The data for these reports is based on AWS services utilization and captured monthly As a result when you introduce a new cost allocation tag it take s effect starting from that point in time The new tag will not apply to past cost allocation reports Tags help you identify sets of resources Tags can be removed when no longer needed A new tag can be applied to a set of resources in bulk however you need to identify the resources requiring the new tag and the value to assign those resources Start with a smaller set of tags that are known to be need ed and create new tags as the need arise s This approach is recommended over specifying an overabundance of tags that are anticipated to be needed in the future Best Practices for Naming Tags and Resources Adopt a Standardized Approach for Tag Names Keep in mind that names for AWS tags are case sensitive so ensure that they are used consistently For example the tags CostCenter and costcenter are different so one might be configured as a cos t allocation tag for financial analysis and reporting and the other one might not be Similarly the Name tag appears in the AWS Console for many resources but the name tag does not A number of tags are predefined by AWS or created automatically by various AWS services Many AWS defined tags are named using all lowercase with hyphen s separating words in the name and prefixes to identify the source service for the tag For example: This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 5 • aws:ec2spot:fleet request id identifies the Amazon EC2 Spot Instance Request that launched the instance • aws:cloudformation:stack name identifies the AWS CloudFormation stack that created the resource • lambda 
console:blueprint identifies blueprint used as a te mplate for an AWS Lambda function • elasticbeanstalk:environment name identifies the applic ation that created the resource Consider naming your tags using all lowercase with hyphens separating words and a prefix identifying the organization name or abbreviated name For example for a fictitious company named AnyCompany you might define tags such as : • anycompany :cost center to identify the internal Cost Center code • anycompany :environment type to identif y whether the environment is developmen t test or production • anycompany :application id to identify the application the resource was created for The prefix ensure s that tags are clearly identified as having been defined by your organization and not by AWS or a third party tool that you may be u sing Using all lowercase with hyphens for separators avoids confusion about how to capitalize a tag name For example anycompany :project id is simpler to remember than ANYCOMPANY :ProjectID anycompany :projectID or Anycompany :ProjectId Standardize Names for AWS Resources Assigning names to AWS resources is another important dimension of tagging that should be considered This is the value that is assigned to the predefined AWS Name tag (or in some cases by other means) and is mainly used in the AWS Management Console To understand the idea here it’s probably not helpful to have dozens of EC2 instances all named MyWebServer Developing a naming standard for AWS resources will help you keep your resources organized and can be used in AWS Cost and Usage Reports for grouping related resources together (see also Propagate Tag Values Across Related Resources below) This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 6 EC2 Instances Naming for EC2 instances is a good place to start Most organizations have already recognized the need to standardize on server hostnames and have existing practices in effect For example an organization might create hostnames based on several components such as physical location environment type (development test production ) role/ purpose application ID and a unique identifier: First note that the various components of a hostname construction process like this are great candidates for individual AWS tags – if they were important in the past they’ll likely be important in the future Even if the se elements are captured as separate individual tags i t’s still reasonable to continue to use this style of server naming to maintain consistency and substituting a different physical location code to represent AWS or an AWS region However if you’re moving away from treating your virtual instances like pets and more like cattle (which is recommended ) you’ll want to automate the assignment of server names to avoid having to assign them manually As an alternative you could simply use the AWS instance id (which is globally unique) for your server name s In either case if you ’re also creating DNS names for servers it’s a good idea to associate the value used for the Name tag with the Ful ly Qualified Domain Name ( FQDN) for the EC2 instance So if your instance name is phlpwcspweb3 the FQDN for the server could be phlpwcspweb3a nycompany com If you’d rather use the instance id for the Name tag then y ou should use that in your FQDN (for example i06599a3 8675anycompany com) Other AWS Resource Types For other types of 
AWS resources one approach is to adopt a dot notation consisting of the following name components : 1 account name prefix: for example production development shared services audit etc Philadelphia data center productionweb tier Customer Service Portalunique identifier phlpwcspweb3 = phl p w csp web3 hostname:This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 7 2 resource name: freeform field for the logical name of the resource 3 type suffix: for example subnet sg role policy kmskey etc See Table 2 for examples of tag names for other AWS resource types Table 2: Sample tag names for other AWS resource types Resource Type Example AWS Resource Name account name resource name type Subnet prod public az1subnet Production public az1 subnet Subnet services az2subnet Shared Services az2 subnet Security Group prod webserversg Production webserver sg Security Group devwebserversg Development webserver sg Security Group servicesdmzsg Shared Services dmz sg IAM Role prodec2 s3accessrole Production ec2s3 access role IAM Role drec2 s3accessrole Disaster Recovery ec2s3 access role KMS Key proda nycompany kmskey Production AnyCompan y kmskey Some resource types limit the character set that can be used for the name In such cases the dot character s can be replaced with hyphen s Best Practices for Cost Allocation Tags Align Cost Allocation Tags with Financial Reporting Dimensions AWS provides detailed cost reports and data extracts to help you monitor and manage your AWS spend When you designate specific tags as cost allocation tags in the AWS Billing and Cost Management Console billing data for AWS resources will include the m Remember b illing information is point intime data so cost allocation tags appear in your billing data only after This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 8 you have (1) specified them in the Billing and Cost Management Console and (2) tagged resources with them A natural place to identify the cost allocation tags you need is by looking at your current IT financial reporting practices Typically financ ial reporting covers a variety of dimensions such as business unit cost center product geographic area or department Aligning cost allocation tags with these financial reporting dimensions simplif ies and streamline s your AWS cost management Use Both Linked Accounts and Cost Allocation Tags AWS resources are c reated within accounts and billing reports and extracts contain the AWS account number for all billable resources regardless of whether or not the resources have tags You can have multiple accounts so creating different accounts for different financial entities within your organization is a way to clearly segregate costs AWS provides options for consolidated billing by associating payer accounts and linked accounts You can also use AWS Organizations to c reate master accounts with associated member accounts to take advantage of the additional centralized management and governance capabilities Organizations may design their account structure based on a number of factors including fiscal isolation administrative isolation access isolation blast radius isolation engineering and cost considerations ( refer to the References section for links to 
relevant articles on AWS Answers) Examples include: • Creating separate accounts for production and non product ion to segregate communications and access for these environments • Creating a separate account for shared services components and utilities • Creating a separate audit account to captur e log files for security forensics and monitoring • Creating separate accounts for disaster recovery Understand your organization ’s account structure when developing your tagging strategy since alignment of some of the financial reporting dimensions may already be captured by your account structure This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 9 Avoid Multi Valued Cost Allocation Tags For shared resources you may need to allocate costs to several applications projects or departments One appro ach to allocating costs is to create multi valued tags that contain a series of allocation codes possibly with corresponding allocation ratios for example: anycompany :cost center = 1600|02 5|1625|020|1731|050|1744|005 If designated as a cost allocation tag such tag values appear in your billing data However there are two challenges with this approach: (1) the data will have to be post processed to parse the multi valued tag value s and produce more detailed records a nd (2) you will need to establish a process to accurately set and maintain the tag values If possible consider identify ing existing cost sharing or chargeback mechanisms within your organization —or create new ones —and associate shared AWS resources to individual cost allocation codes defined by that mechanism Tag Everything When developing a tagging strategy be wary of focus ing only on the set of tags need ed for your EC2 instances Remember that AWS allows you to tag most types of resources that generat e costs on your billing reports Apply your cost allocation tags across all resource types that support tagging to get the most accurate data for your financial analysis and reporting Best Practices for Tag Governance and Data Management Integrate with Authoritative Data Sources You may decide to include tags on your AWS resources for which data is already available within your organization For example if you are using a Configuration Management Database (CMDB) you may already have a pr ocess in place to store and maintain metadata about your applications databases and environments Configuration Items (CIs) in your CMDB may have attributes including application or server owner technical issue resolver groups cost center or charge cod e data classification etc Rather than redundantly capturing and maintain ing such existing meta data in AWS tags consider integrating your CMDB with AWS The integration can be bi directional meaning that This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 10 data sourced from the CMDB can be copie d to tag s on AWS resources and data that can be sourced from AWS (for example IP addresses instance IDs and instance types) can be stored as attribu tes in your Configuration Items If you integrate your CMDB with AWS in this way extend your AWS tag naming convention to include an additional prefix to identify tags that have externally sourced values for example: • anycompany 
:cmdb:application id – the CMDB Configuration Item ID for th e application that owns the resource • anycompany :cmdb:cost center – the Cost Center code associated with the owning application sourced from the CMDB • anycompany :cmdb:application owner – the indiv idual or group that owns the application associated with this resource sourced from the CMDB This makes it clear that the tags are provided for convenience and that the authoritative source of the data is the CMDB Referencing authoritative data sources rather than redundantly maintaining the same data in mul tiple systems is a general data management best practice Use Compound Tag Values Judiciously Initially AWS limited the number of tags for a given resource to 10 result ing in some organizations combin ing several data elements into a single tag using de limiters to segregate the different attributes as in: EnvironmentType = Developm ent;Webserver;Tomcat 62;Tier 2 In 2016 the number of tags per resource was increased to 50 (with a few exceptions such as S3 objects ) Because of this it’ s generally recommended to follow good da ta management practice by including only one data attribute per tag However there are some situations where it may make sense to combine several related attributes together Some examples include: 1 For contact infor mation as shown in Table 3 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 11 Table 3: Examples of compound and single tag values Compound Tag Values anycompany :business contact = John Smith;johnsmith@a nycompany com ;+12015551212 anycompany :technical contact = Susan Jones ;suejones@a nycompany com ;+12015551213 Single Tag Values anycompany :busi ness contact name = John Smith anycompany :business conta ctemail = johnsmith@a nycompany com anycompany :busines scontact phone = +12015551212 anycompany :techni calcontact name = Susan Jones anycompany :technical cont actemail = suejones@a nycompany com anycompany :technica lcontact phone = +12015551213 2 For multi valued tags where a single attribute can have several homogenous values For example a resource support ing multiple applications might use a pipe delimited list: anycompany :cmdb: application ids = APP012|APP 045|APP320|APP450 However before introducing multi valued tags consider the source of the information and how the information will be used if captured in an AWS tag If there is an authoritative source for the data in question then any processes requiring the information may be better served by re ferencing the authoritative source directly rather than a tag Also as recommended in this paper avoid multivalued cost allocation tags if possible 3 For tags used for automation purposes Such tags typically capture opt in and automation status inform ation For example if you implement an AWS Lambda function to automatically back up EBS volumes by taking snapshots you might use a tag that contains a short JSON document: anycompany :auto snapshot = { “frequency”: “daily” “ lastbackup”: “2018 0419T21:18:00000+0000” } This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ taggingbestpractices/taggingbest practiceshtmlAmazon Web Services – Tagging Best Practices Page 12 There are many automation solutions available at AWS Labs ( https://githubcom/awslabs ) and the AWS Marketplace ( https://awsamazo 
n.com/marketplace) that make use of compound tag values in their implementations.
Use Automation to Proactively Tag Resources
AWS offers a variety of tools to help you implement proactive tag governance practices by ensuring that tags are consistently applied when resources are created.
AWS CloudFormation provides a common language for provisioning all the infrastructure resources in your cloud environment. CloudFormation templates are simple text files that create AWS resources in an automated and secure manner. When you create AWS resources using AWS CloudFormation templates, you can use the CloudFormation Resource Tags property to apply tags to certain resource types upon creation.
AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application environments. AWS Service Catalog enables a self-service capability for users, allowing them to provision the services they need while also helping you maintain consistent governance, including the application of required tags and tag values.
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups and use permissions to allow or deny their access to AWS resources. When you create IAM policies, you can specify resource-level permissions, which include specific permissions for creating and deleting tags. In addition, you can include condition keys such as aws:RequestTag and aws:TagKeys, which will prevent resources from being created if specific tags or tag values are not present.
Constrain Tag Values with AWS Service Catalog
Tags are not useful if they contain missing or invalid data values. If tag values are set by automation, the automation code can be reviewed, tested, and enhanced to ensure that valid tag values are used. When tags are entered manually, there is the opportunity for human error. One way to reduce human error is by using AWS Service Catalog. One of the key features of AWS Service Catalog is TagOption libraries. With TagOption libraries, you can specify required tags as well as their range of allowable values. AWS Service Catalog organizes your approved AWS service offerings, or products, into multiple portfolios. You can use TagOption libraries at the portfolio level, or even at the individual product level, to specify the range of allowable values for each tag.
Propagate Tag Values Across Related Resources
Many AWS resources are related. For example, an EC2 instance may have several Elastic Block Store (EBS) volumes and one or more Elastic Network Interfaces (ENIs), and for each EBS volume many EBS snapshots may be created over time. For consistency, best practice is to propagate tags and tag values across related resources. If resources are created by AWS CloudFormation templates, they are created together in groups called stacks from a common automation script, which can be configured to set tag values across all resources in the stack. For resources not created via AWS CloudFormation, you can still implement automation to propagate tags from related resources. For example, when EBS snapshots are created, you can copy any tags present on the EBS volume to the snapshot. Similarly, you can use CloudWatch Events to trigger a Lambda function to copy tags from an S3 bucket to objects within the bucket any time S3 objects are created.
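To make the EBS example concrete, here is a minimal Lambda sketch that copies tags from a source volume to a newly created snapshot. It is an assumption-laden illustration rather than a production solution: it presumes the function is triggered by the CloudWatch Events EBS Snapshot Notification (createSnapshot) event, that the event's source and snapshot_id fields carry the volume and snapshot ARNs, and that the execution role allows ec2:DescribeTags and ec2:CreateTags.

// Minimal sketch (assumptions noted above): copy tags from an EBS volume to its new snapshot.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();

exports.handler = async (event) => {
  // The createSnapshot event is assumed to carry ARNs such as
  // ".../volume/vol-0123456789abcdef0" and ".../snapshot/snap-0123456789abcdef0".
  const volumeId = event.detail.source.split('/').pop();
  const snapshotId = event.detail.snapshot_id.split('/').pop();

  // Read the tags currently attached to the source volume.
  const tagData = await ec2.describeTags({
    Filters: [{ Name: 'resource-id', Values: [volumeId] }]
  }).promise();

  // Keep customer tags only; keys beginning with "aws:" are reserved and cannot be copied.
  const tags = tagData.Tags
    .filter((t) => !t.Key.startsWith('aws:'))
    .map((t) => ({ Key: t.Key, Value: t.Value }));

  if (tags.length > 0) {
    await ec2.createTags({ Resources: [snapshotId], Tags: tags }).promise();
  }
  return 'Copied ' + tags.length + ' tag(s) from ' + volumeId + ' to ' + snapshotId;
};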
Lock Down Tags Used for Access Control
If you decide to use tags to supplement your access control policies, you will need to ensure that you restrict access to creating, deleting, and modifying those tags. For example, you can create IAM policies that use conditional logic to grant access to EC2 instances (1) for an IAM group created for developers and (2) for EC2 instances tagged as development. This could be further restricted to the developers of a particular application, based on a condition in the IAM policy that identifies the relevant application ID. While the use of tags for this purpose is convenient, it can be easily circumvented if users have the ability to modify tag values in order to gain access that they should not have. Take preventative measures against this by ensuring that your IAM policies include deny rules for actions such as ec2:CreateTags and ec2:DeleteTags. Even with this preventative measure, IAM policies that grant access to resources based on tag values should be used with caution and approved by your Information Security team. You may decide to use this approach for convenience in certain situations. For example, use strict IAM policies (without conditions based on tags) to restrict access to production environments, but for development environments grant access to application-specific resources via tags to help developers avoid inadvertently affecting each other's work.
Remediate Untagged Resources
Automation and proactive tag management are important, but they are not always effective. Many customers also employ reactive tag governance approaches to identify resources that are not properly tagged and correct them. Reactive tag governance approaches include (1) programmatically, using tools such as the Resource Tagging API, AWS Config rules, and custom scripts; or (2) manually, using Tag Editor and detailed billing reports.
Tag Editor is a feature of the AWS Management Console that allows you to search for resources using a variety of search criteria and add, modify, or delete tags in bulk. Search criteria can include resources with or without the presence of a particular tag or value. The AWS Resource Tagging API allows you to perform these same functions programmatically (a sketch using this API appears below).
AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against optimal configurations. With AWS Config, you can create rules to check resources for required tags, and it will continuously monitor your resources against those rules. Any noncompliant resources are identified on the AWS Config Dashboard and via notifications. In cases where resources are initially tagged properly but their tags are subsequently changed or deleted, AWS Config will find them for you. You can use AWS Config with CloudWatch Events to trigger automated responses to missing or incorrect tags. An extreme example would be to automatically stop or quarantine noncompliant EC2 instances.
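As a sketch of the programmatic option, the following Node.js example uses the Resource Groups Tagging API (the same API behind Tag Editor) to list resources missing a required tag key. The key anycompany:cost-center is a placeholder borrowed from the naming examples earlier in this paper. Note that this API returns resources that are or have previously been tagged, so resources that have never carried any tag may need to be found with service-specific describe calls or AWS Config rules.

// Minimal sketch: report resources that lack a required tag key.
const AWS = require('aws-sdk');
const tagging = new AWS.ResourceGroupsTaggingAPI();

const REQUIRED_KEY = 'anycompany:cost-center'; // placeholder; use your own required tag key

async function findResourcesMissingKey() {
  const missing = [];
  let paginationToken = '';

  do {
    const params = { ResourcesPerPage: 100 };
    if (paginationToken) {
      params.PaginationToken = paginationToken;
    }
    const page = await tagging.getResources(params).promise();

    for (const resource of page.ResourceTagMappingList) {
      const hasKey = (resource.Tags || []).some((t) => t.Key === REQUIRED_KEY);
      if (!hasKey) {
        missing.push(resource.ResourceARN);
      }
    }
    paginationToken = page.PaginationToken; // empty string ends the loop
  } while (paginationToken);

  return missing;
}

findResourcesMissingKey()
  .then((arns) => console.log('Resources missing ' + REQUIRED_KEY + ':', arns))
  .catch(console.error);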
The most suitable governance approach for an organization primarily depends on its AWS maturity model, but even experienced organizations use a combination of proactive and reactive governance techniques.
Implement a Tag Governance Process
Keep in mind that once you've settled on a tagging strategy for your organization, you will need to adapt it as you progress through your cloud journey. In particular, it's likely that requests for new tags will surface and need to be addressed. A basic tag governance process should include:
• impact analysis, approval, and implementation for requests to add, change, or deprecate tags;
• application of existing tagging requirements as new AWS services are adopted by your organization;
• monitoring and remediation of missing or incorrect tags; and
• periodic reporting on tagging metrics and key process indicators.
Conclusion
AWS resource tags can be used for a wide variety of purposes, from implementing a cost allocation process to supporting automation or authorizing access to AWS resources. Implementing a tagging strategy can be challenging for some organizations due to the number of stakeholder groups involved and considerations such as data sourcing and tag governance. This whitepaper recommends a way forward based on a set of best practices to get you started quickly with a tagging strategy that you can adapt as your organization's needs evolve over time.
Contributors
The following individuals and organizations contributed to this document:
• Brian Yost, Senior Consultant, AWS Professional Services
References
Tagging Use Cases
• AWS Tagging Strategies
• Tagging Your Amazon EC2 Resources
• Centralized multi-account and multi-Region patching with AWS Systems Manager Automation
Align Tags with Financial Reporting Dimensions
• Monthly Cost Allocation Report
• User-Defined Cost Allocation Tags
• Cost Allocation for EBS Snapshots
• AWS-Generated Cost Allocation Tags
Use Both Linked Accounts and Cost Allocation Tags
• Consolidated Billing for Organizations
• AWS Multiple Account Billing Strategy
• AWS Multiple Account Security Strategy
• What Is AWS Organizations?
Tag Everything
• User-Defined Cost Allocation Tags
Integrate with Authoritative Data Sources
• ITIL Asset and Configuration Management in the Cloud
Use Compound Tag Values Judiciously
• Now Organize Your AWS Resources by Using up to 50 Tags per Resource
Use Automation to Proactively Tag Resources
• How can I use IAM policy tags to restrict how an EC2 instance or EBS volume can be created?
• How to Automatically Tag Amazon EC2 Resources in Response to API Events
• Supported Resource-Level Permissions for Amazon EC2 API Actions: Resource-Level Permissions for Tagging
• Example Policies for Working with the AWS CLI or an AWS SDK: Tagging Resources
• Resource Tag
Constrain Tag Values with AWS Service Catalog
• AWS Service Catalog Announces AutoTags for Automatic Tagging of Provisioned Resources
• AWS Service Catalog TagOption Library
Propagate Tag Values Across Related Resources
• CloudWatch Events for EBS Snapshots
Lock Down Tags Used for Access Control
• AWS Services That Work with IAM
• How do I create an IAM policy to control access to Amazon EC2 resources using tags?
• Controlling Access to Amazon VPC Resources
Remediate Untagged Resources
• Resource Groups and Tagging for AWS
• AWS Resource Tagging API
Document Revisions
Date: December 2018. Description: First Publication.
General
Installing_JD_Edwards_EnterpriseOne_on_Amazon_RDS_for_Oracle
Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle First published December 20 16 Updated March 25 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement betw een AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Why JD Edwards EnterpriseOne on Amazon RDS? 1 Licensing 2 Performance management 3 Instance sizing 3 Disk I/O management —provisioned IOPS 4 High availability 4 High availability features of Amazon RDS 5 Oracle security in Amazon RDS 6 Installing JD Edwards EnterpriseOne on an Amazon RDS for Oracle DB instance 7 Prerequisites 7 Preparation 8 Key installation tasks 8 Creating your Oracle DB instance 8 Configure SQL Developer 13 Installing the platform pack 14 Modifying the default scripts 16 Advanced configuration 23 Running the installer 27 Logging into JD Edwards EnterpriseOne on the deployment server 28 Validation and testing 29 Running on Amazon RDS for Oracle Enterprise Ed ition 30 Conclusion 31 Appendix: Dumping deployment service to RDS 31 Contributors 33 Document revisions 33 Abstract Amazon Relational Database Service (Amazon RD S) is a flexible costeffective easy touse service fo r running relational database s in the cloud In thi s whitepaper you will learn how to deplo y Oracle’ s JD Edward s EnterpriseOne (version 92 ) using Amazon RD S for Oracle Because thi s whitepape r focuse s on the database component s of the installation process ite ms such a s JD Edwards EnterpriseOne application serve rs and application serve r node scaling will not be covered This whitepaper is aimed at IT directors JD Edwards EnterpriseOne architects CNC administrators DevOps engineers and Oracle Database Administrators Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 1 Introduction There are two ways to de ploy the Oracle database backend for a JD Edwards EnterpriseOne installation on Amazon Web Services (AWS): by using a database managed by the Amazon Relational Database Service (Amazon RDS) or by deploying and managing a database on Amazon Elastic Compute Cloud (Amazon EC2) infrastructure This whitepaper focuses on the deployment of JD Edwards EnterpriseOne in an AWS environment using Amazon RDS for Oracle Why JD Edwards EnterpriseOne on Amazon RDS? 
Simplicity scalability and stability are all important reasons to install the JD Edwards Enter priseOne applications suite on Amazon RDS Integrated high availability features provide seamless recoverability between AWS Availability Zones (AZs) without the complications of log shipping and Oracle Data Guard Using RDS you can quickly back up and restore your database to a chosen point in time and change the size of the server or speed of the disks all within the AWS Management Console Management advantages are at your fingertips with the AWS Console Mobile Application All this coupled with intelligent monitoring and management tools provid es a complete solution for implementing Oracle Database in Amazon RDS for use with JD Edwards EnterpriseOne When designing your JD Edwards EnterpriseOne footprint consider the entire lifecycle of JD Edwards EnterpriseOne on AWS which includes complete disaster recovery Disaster recovery is not an afterthought it’s encapsulated in the design fundamentals When your installation is complete you can take backups refresh subsid iary environments and manage and monitor all critical aspects of your environment from the AWS Management Console You can enable monitoring to ensure that everything is sized correctly and performing well Using Amazon RDS for Oracle you can have enterp risegrade high availability in the database layer implementing Amazon RDS Multi AZ configuration You can use this high availability feature even with Oracle Standard Edition to reduce the to tal cost of ownership (TCO) for running the JD Edwards application in the cloud AWS gives you the ability to disable hyperthreading and the numb er of vCPUs in use in your Amazon Elastic Compute Cloud (Amazon EC2) instances and your RDS for Oracle instances to reduce licensing cost and TCO In JD Edwards EnterpriseOne the application processing is CPU intensive and the CPU frequency and number of cores available to the enterprise server plays a large part affecting the performance and throughput of the system AWS provides a wide range of instance classes including z1d Instances delivering a sustained all core frequency of up to 40 gigah ertz (GHz) the fastest of any cloud instance Using such Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 2 high clock frequency instances for the application tier can help reduce the number of cores needed to run the same workload This means you can get the same performance using a smaller instance clas s This makes AWS a highly suitable public cloud environment for running JD Edwards applications with high performance and throughput requirement AWS Support provides a mix of tools and technology p eople and programs designed to proactively help you optimize performance lower costs and innovate faster With core technological capabilities for running high performance JD Edwards deployments combined with a strong support framework AWS provides a g reat experience for customers as a preferred choice for hosting their JD Edwards implementations Amazon RDS for Oracle is a great fit for JD Edwards EnterpriseOne JD Edwards EnterpriseOne also provides for heterogeneous database support which means that there is a loose coupling between enterprise resource planning (ERP) and the database allowing i nstallation of Microsoft SQL Server for example as an alternative to Oracle Licensing Purchase of JD Edwards EnterpriseOne includes the Oracle Technology Foundation component The Oracle Techno logy Foundation for JD Edwards EnterpriseOne provides all the 
software components you need to run Oracle's JD Edwards EnterpriseOne applications. Designed to help reduce integration and support costs, it's a complete package of the following integrated, open-standards software products that enable you to easily implement and maintain your JD Edwards EnterpriseOne applications:
• Oracle Database
• Oracle Fusion Middleware
• JD Edwards EnterpriseOne Tools
If you have these licenses, you can take advantage of the Amazon RDS for Oracle Bring-Your-Own-License (BYOL) option. See the Oracle Cloud Licensing Policy for details.
Note: With the BYOL option, you may need to acquire additional licenses for standby database instances when running Multi-AZ deployments. See the JD Edwards EnterpriseOne Licensing Information User Manual for a detailed description of the restricted-use licenses provided in the Oracle Technology Foundation for the JD Edwards EnterpriseOne product.
Some historical JD Edwards EnterpriseOne licensing agreements do not include Oracle Technology Foundation. If that is the case for you, you can choose the Amazon RDS "License Included" option, which includes licensing costs in the hourly price of the service. If you have questions about any of your licensing obligations, contact your JD Edwards EnterpriseOne licensing representative. For details about licensing Oracle Database on AWS, see the Oracle Cloud Licensing Policy.
Performance management
Instance sizing
Increasing the performance of a database (DB) instance requires an understanding of which server resource is causing the performance constraint. If database performance is limited by CPU, memory, or network throughput, you can scale up by choosing a larger instance type. In an Amazon RDS environment, this type of scaling is simple. Amazon RDS supports several DB instance types. At the time of this writing, instance types that support the Standard Edition 2 (SE2) socket requirements range from:
• The burstable "small" (db.t3.small)
• The latest-generation general purpose db.m5.4xlarge, which features 16 vCPUs, 64 gigabytes (GB) of memory, and up to 10 gigabits per second (Gbps) of network performance
• The latest-generation memory-optimized db.r5.4xlarge, with 16 vCPUs, 128 GB of memory, and up to 10 Gbps of network performance
• The latest-generation memory-optimized DB instance class db.z1d.3xlarge, with a sustained all-core frequency of up to 4.0 GHz, 12 vCPUs, 96 GB of memory, and up to 10 Gbps of network performance
• The latest-generation memory-optimized DB instance class db.x1e.4xlarge, with a very high memory-to-vCPU ratio, 16 vCPUs, 488 GB of memory, and up to 10 Gbps of network performance
For currently available instance classes and options, see DB instance class support for Oracle. The first time you start your Amazon RDS DB instance, choose the instance type that seems most relevant in terms of the number of cores and amount of memory you are using. With that as the starting point, you can then monitor performance to determine whether it's a good fit or whether you need to pick a larger or smaller instance type.
You can modify the instance class for your Amazon RDS DB instance by using the AWS Management Console or the AWS Command Line Interface (AWS CLI), or by making application programming interface (API) calls in applications written with an AWS Software Development Kit (SDK). Modifying the instance class causes a restart of your DB instance, which you can set to occur right away or during the next weekly maintenance window that you specify when creating the instance. (Note that the weekly maintenance window setting can also be changed.)
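As an illustration of the API route, the following Node.js sketch requests an instance class change with the AWS SDK. The instance identifier and target class are placeholder values, and ApplyImmediately is set to false so the change waits for the next maintenance window.

// Hedged sketch: change the instance class of an RDS for Oracle DB instance.
// Identifier and class below are placeholders; adjust for your environment.
const AWS = require('aws-sdk');
const rds = new AWS.RDS();

const params = {
  DBInstanceIdentifier: 'jde-prod-oracle',   // placeholder instance name
  DBInstanceClass: 'db.r5.4xlarge',          // target instance class
  ApplyImmediately: false                    // false = wait for the next maintenance window
};

rds.modifyDBInstance(params).promise()
  .then((data) => console.log('Requested class change:', data.DBInstance.PendingModifiedValues))
  .catch(console.error);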
set to occur right away or during the next weekly maintenance window that you specify when creating the instance (Note that the weekly maintenance window setting can als o be changed) Increasing instance storage size Amazon RDS enables you to scale up your storage without restarting the instance or interrupting active processes The main reason to increase the Amazon RDS storage size is to accommodate database growth but you can also do this to improve input/output (I/O) For an existing DB instance with gp2 EBS volumes you might observe some I/O capacity improvement if you scale up your storage Scaling storage capacity can be done manually or you can set up autoscalin g for storage For details on RDS storage management see Working with Storage for Amazon RDS DB Instances Disk I/O management —provisioned IOPS Provisioned I/O operations per second (IOPS) is a storage option that gives you control over your database storage performance by enabling you to specify your IOPS rate Provisioned IOPS is desig ned to deliver fast predictable and consistent I/O performance At the time of this writing you can provision up to 80000 maximum IOPS per instance for EBSoptimized instance classes The maximum storage size supported in an instance is 64 tebibytes (TiB) Here are some important points about Provisioned IOPS in Amazon RDS: • The maximum ratio of Provisioned IOPS to requested volume size (in GiB) is 50:1 For example a 100 GiB volume can be provisioned with up to 5000 IOPS • If you are using Provisioned IOPS storage AWS recommend s that you use DB instance types that are optimized for Provisioned IOPS You can also convert a DB instance that uses standard storage to use Provisioned IOPS storage • The actual amount of your I/O throughput can vary depending on your workload High availability The Oracle database provides a variety of features to enhance the availability of your databases You can use the following Oracle Flashback technology f eatures in both Amazon RDS and in Amazon EC2 which support multiple types of data recovery: Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 5 • Flashback Transaction Query enables you to see all the changes made by a specific transaction • Flashback Query enables you to query any data at some point in time in the past In addition to these features design a database architecture that protects you against hardware failures data center problems and disasters You can do this by using replication technologies and the high availability features of Amazon RDS described in the following section High availability features of Amazon RDS Amazon RDS makes it simple to create a high availability architecture First in the event of a hardware failure Amazon RDS automatically replaces the compute instance powering y our deployment Second Amazon RDS supports Multi AZ deployments where a secondary (or standby) Oracle DB instance is provisioned in a different Availability Zone (location) within the same region This architecture allows the database to survive a failur e of the primary DB instance network and storage or even of the Availability Zone The replication between the two Oracle DB instances is synchronous helping to ensure that all data written to disk on the primary instance is replicated to the standby instance This feature is available for all editions of Oracle including the ones that do not include Oracle Data Guard providing you with out ofthebox high availability at a very competitive cost For details about high availability features in 
RDS fo r Oracle see Amazon RDS Multi AZ Deployments The following figure shows an example of a high availability architecture in Amazon RDS High availability architecture in Amazon RDS Amazon Web Services Installing JD Edwards EnterpriseOne on Ama zon RDS for Oracle 6 You should also deploy the rest of the application stack including application and web servers in at least two Availability Zones to ensure that your applications continue to operate in the event of an Availability Zone failure In the design of your high availabi lity implementation you can also use Elastic Load Balancing which automatically distributes the load across application servers in multiple Availability Zones A failover to the standby DB instan ce typically takes between one and three minutes and will occur in any of the following events: • Loss of availability in the primary Availability Zone • Loss of network connectivity to the primary DB instance • Compute unit failure on the primary DB instance • Storage failure on the primary DB instance • Scaling of the compute class of your DB instance either up or down • System maintenance such as hardware replacement or operating system upgrades Running Amazon RDS in multiple Availability Zones has additional bene fits: • The Amazon RDS daily backups are taken from the standby DB instance which means that there is usually no I/O impact to your primary DB instance • When you need to patch the operating system or replace the compute instance updates are applied to the standby DB instance first When complete the standby DB instance is promoted as the new primary DB instance The availability impact is limited to the failover time resulting in a shorter maintenance window Oracle security in Amazon RDS Amazon RDS enables you to control network access to your DB instances using security groups By default network access is limited to other hosts in the Amazon Virtual Private Cloud (Amazon VPC) where your instance is deployed Using AWS Identity and Access Management (AWS IAM) you can manage access to your Amazon RDS DB instances For example you can authorize (or deny) administrative users under your AWS Account to creat e describe modify or delete an Amazon RDS DB instance You can also enforce multi factor authentication (MFA) For more information about using IAM to manage administrative access to Amazon RDS see Identity and access management in Amazon RDS Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 7 Amazon RDS offers optional storage encryption that uses AES 256 encryption and automatically encrypts any snapshots and snapshot restores You can control who can decrypt your data by using AWS Key Management Service (AWS KMS) In addition Amazon RDS supports several Oracle Database security features: • Amazon RDS can protect data in motion using Secure Sockets Layer (SSL) or native network encryption that protects data in motion using Oracle Net Services You can choose between AES Triple DES and RC4 encryption • You can also store database credentials using AWS Secrets Manager Installing JD Edwards EnterpriseOne on an Amazon RDS for Oracle DB instance Installing JD Edwards EnterpriseOne is often seen as a complex task that involves setting up a server manager and the JD Edwards EnterpriseOne deployment server followed by installing the platform pack In this section you will learn an alternative process for installing the platform pack which is tailored to ensure a successful installation of JD Edwards EnterpriseOne on an Amazon RDS for Oracle 
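As a minimal illustration of the security group model described above, the inbound rule that permits the Oracle listener port from your application subnets can also be added with the AWS CLI. The sketch below is not part of the original installation steps; the security group ID and CIDR range are placeholders that you would replace with your own values.

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 1521 --cidr 10.0.0.0/16

The same kind of rule is added through the console later in this guide, when the security group for the Oracle DB instance is updated.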
database instance (referred to from this point on as an Oracle DB instance) Prerequisites To install J D Edwards EnterpriseOne on Amazon RDS for Oracle: • You should be familiar with the JD Edwards EnterpriseOne installation process and have an understanding of the fundamentals of AWS architecture • You should have a functional AWS account with appropriate IAM permissions • You should have created an Amazon VPC with associated Subnet Groups and Security Groups and it is ready for use by the Amazon RDS for Oracle service • You should have a local database on your deployment server that you can connect to with Oracle SQL Developer Note: The deployment server will have two separate sets of Oracle binaries: a 32 bit client and a 64 bit server engine (named e1local ) Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Orac le 8 Preparation The proc ess described in this whitepaper is based on the standard JD Edwards EnterpriseOne installation processes which are described in the JD Edwards EnterpriseOne Applications Installation Guide Prior to continuing follow the instructions in the JD Edwards EnterpriseOne Applications Installation Guide until section 45 (“Understanding the Oracle Installation” ) When you have completed the steps leading up to section 45 follow the rest o f the instructions in this whitepaper to successfully install JD Edwards EnterpriseOne on an Oracle DB instance Key installation tasks The key elements of installing JD Edwards EnterpriseOne on an Oracle DB instance include: • Creating the instance • Configur ing the SQL *Plus Instant Client • Installing the platform pack • Modifying the original installation scripts that are provided Creating your Oracle DB instance Using the AWS Management Console follow these steps 1 From the top menu bar choose Services 2 Choose Database > RDS This opens the Amazon RDS dashboard where you will create your Oracle DB instance 3 Choose Create data base 4 To create an Oracle SE2 environment from the Create database screen do the following: a Under database creation method choose Standard Create b Under Engine options choose Oracle 5 Under Edition choose Oracle Standard Edition Two 6 Under Version choose the latest quarterly release of Oracle Database 19c (which is 19000ru 2020 04rur 2020 04r1 at the time of this publication) Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 9 7 Under License choose bring your ownlicense Oraclese2 must be used in compliance with the latest Oracle l icensing Contact Oracle should further information be required 8 Under Templates choose Production (AWS Management Console recommends using the default values for a production ready environment or a development environment For the purposes of this white paper you will use a production environment) 9 Under Settings enter the configuration details for the database instance and credentials For this example use the following information: • DB Instance Identifier — jde92poc • Master Username — jde92pocMaster • Master Password — jde92pocMasterPassword 10 Under DB instance size choose Memory Optimized classes (includes r and x classes) 11 From the dropdown menu choose db r5xlarge 12 Under Storage : a For Storage type choose General Purpose (SSD) b For Allocated storag e choose 150 GiB c Select (check) Enable storage autoscaling d For Maximum storage threshold select of 500 GiB For the purposes of this example use the settings mentioned above in step 5 steps 8 and 9 and step 10 to choose the Oracle version instance 
class and storage respectively These settings can be tailored to meet your specific requirements AWS encourage s you to consult with a JD Edwards EnterpriseOne supplier to ensure these settings are appropriate for your specific use case 13 Under Availability & durability choose Create a standby instance (recommended for production usage) 14 Under Connectivity use the preconfigured VPC (JDE92) and the settings shown in the following figure If you have appropriately configured Subnet Groups and VPC Security Groups you can use them here Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 10 Configure network and security settings Note: The rest of this procedur e assumes that you have already created a VPC to accommodate the Amazon RDS for Oracle installation and that the VPC name used is JDE92 If you need help see VPC documentation 15 Under Database authentication options choose Password authentication 16 Expand the Additional configuration section for Database options enter the following settings: • Initial database name — jde92poc • DB par ameter group — defaultoracle se219 • Option group — defaultoracle se219 • Character set — WE8MSWIN1252 17 For the Backup Encryption and Performance Insights sections use the default settings for this example However because these settings do not impact the ability to install JD Edwards EnterpriseOne AWS encourage s you to experiment with and test these settings in your actual implementation 18 Under Monitori ng choose Enable Enhanced monitoring a For Granularity choose 15 seconds b For Monitoring Role select default c Under Log exports choose Alert log Listener log and Trace log d For Maintenance and Deletion Protection select the defaults Because these settings do not impact the ability to install JD Edwards EnterpriseOne you should experiment with and test these settings 19 Click Create database to create the RDS Oracle instance Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 11 Creation of the Oracle DB instance begins This can take some time to comp lete Search for your instance to view the progress Click the refresh icon to watch the progress of the Oracle DB instance creation Refreshing the progress view When the Oracle DB instance is available for use the Status changes to available Connecting to your Oracle DB instance When Amazon RDS creates the Oracle DB instance it also creates an endpoint Using this endpoint you can construct the connection string required to connect directly with your Oracle DB instance To allow network requests to your running Oracle DB instan ce you will need to authorize access For a detailed explanation of how to construct your connection string and get started see the Amazon RDS User Guide Endpoint for the Oracle DB instance The endpoint is allocated a Domain Name System (DNS) entry which you can use for connecting However to facilitate a better instal lation experience for JD Edwards EnterpriseOne a CNAME record is created so the endpoint can be more human readable The CNAME should be created in the Amazon Route 53 local internal zone and should point t o the new Oracle DB instance Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 12 Note: Creating an Amazon Route 53 record set is beyond the scope of this document For more assistance see the Amazo n Route53 User Guide As shown in the following figure you are creating a simple record called jde2poc You provide the RDS instance's endpoint in the Value/Route traffic to section 
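Creating the record in the console is shown in the figure; for reference, an equivalent record could also be created or updated with the AWS CLI. This is a hedged sketch only: the hosted zone ID, record name, and RDS endpoint below are hypothetical placeholders, and the private hosted zone is assumed to already exist in Route 53.

aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"jde92poc.jde92.local","Type":"CNAME","TTL":300,"ResourceRecords":[{"Value":"jde92poc.abcdefghijkl.us-east-1.rds.amazonaws.com"}]}}]}'

UPSERT creates the CNAME if it does not exist and updates it if it does, which is convenient if the instance endpoint ever changes.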
CNAME record set To ensure that connectivity is permitted from the internal subnets in both Availability Zones, you will need to edit the security group for the Oracle DB instance. As shown in the following figure, an oraclerds inbound rule has been added that allows connectivity from our internal IP (source) to the RDS instance. Updating the security group Configure SQL Developer Oracle SQL Developer is used to validate that the appropriate connectivity and permissions are in place and that the Oracle DB instance is accessible. SQL Developer is installed by default with your Oracle client. Optionally, however, see SQL Developer 19.2.1 Downloads to download a standalone version of SQL Developer. The configuration information used to create the Oracle DB instance will be used as the SQL Developer configuration parameters that are required to connect to the Oracle DB instance. 1 In the New/Select Database Connection dialog box, choose Test to perform a test connection to the Oracle DB instance. A status of Success indicates that the test connection has run and successfully connected to the Oracle DB instance. At this point, connectivity to both e1local and jde92poc has been proven using the default 64-bit drivers supplied with SQL Developer. Note: The 64-bit driver is selected by default based on the order of the Oracle client entries in the server's Path environment variable. 2 To check the deployment server path variables in File Explorer (assuming Microsoft Windows 10), right-click This PC and choose Properties. 3 On the Advanced tab, choose Environment Variables. 4 Locate the Path environment system variable in the list. Path system variable This enables the observation of the Path environment system variable. The following example shows the 64-bit binaries listed before the 32-bit binaries for Oracle: C:\JDEdwards\E920_1\PLANNER\bin32;C:\JDEdwards\E920_1\system\bin32;C:\Oracle64db\E1Local\bin;C:\app\e1dbuser\product\12.1.0\client_1\bin;C:\ProgramData\Oracle\Java\javapath;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;C:\Program Files\Amazon\cfn-bootstrap\;C:\Program Files\Amazon\AWSCLI\ 5 To ensure that the remainder of the installation process works, it is critical that SQL*Plus works correctly; specifically, name resolution with tnsnames.ora. From the deployment server EC2 instance, open a command window and enter the following command: tnsping e1local The file used for tnsping is located in the C:\Oracle64db\E1Local\network\admin folder. In this directory you'll make changes to the tnsnames.ora file; specifically, the configuration of the e1local database (64-bit installation). 6 This step relates to the 64-bit libraries, not to the libraries that the JD Edwards EnterpriseOne deployment server code uses. The JD Edwards EnterpriseOne deployment server code uses 32-bit executables and the tnsnames.ora file on the client side to connect to databases (which are 64-bit). For this example, these files are located in C:\app\e1dbuser\product\12.1.0\client_1\network\admin. Ensure that the Oracle DB instance is in the tnsnames.ora file in both locations (32-bit and 64-bit). To proceed, you must be able to log into SQL*Plus to the Oracle DB instance using tnsnames.ora. Installing the platform pack The platform pack is run from the deployment server connecting to a remote database. To proceed, you need the Oracle
Platform Pack for Windows You can obtain it from https://edeliveryoraclecom with the appropriate MOS (My Oracle Support) login Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 15 In this section the installation directory is C:\software\windowsPlatformPack \install To in stall the platform pack: 1 To run the Java based installation program for the Oracle Platform Pack for Windows run setupexe from within the installation directory 2 Choose Next 3 Under Select Installation Type choose Database and then choose Next 4 Under Specify Home Destination > Destination Leave the Name field as the default Under Path choose where to locate the installer files based on the installation preferences This is a temporary location and you can remove these files after the database is populated After you enter the file path choose Next 5 Under Would you like to Install or Upgrade EnterpriseOne choose Install and then choose Next 6 Under Database Options enter database information: a Database type — Oracle b Database server — The database server name is not important and you can use the name of the deployment server (in this case jde92dep ) c Enter and confirm your password d Choose Next 7 Under Administration and End User Roles use the defaults and choose Next 8 A warning appears Ignore it and choose Next Ignore the Database Server name warning Configuration for the Oracle DB instance and a username and password are supplied on the form Unique string identifiers are provided for the tablespace directory (c:\tablespace001 ) and the Index tablespace directory (c:\indexspace001 ) These will be replaced at a later stage of the installation process 9 Choose Run Scripts Manually to defer the execution of the installation scripts Important : Should the installation s cripts run at this stage the installation will fail Choose Next The installation process will attempt to connect to jde92poc using the information you provided This connection must succeed for the installation to proceed The following figure indicates that the installation process was able to connect to the Oracle DB instance specified ( jde92poc ) Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 16 Installation process connected to the Oracle DB instance 10 Choose Install The installation process starts and creates a set of specific database installati on scripts for the options selected throughout the platform pack installation wizard When installation is complete instead of the default scripts the custom values you provided are configured Because you selected Run Scripts Manually the database is not loaded but scripts are created specifically for the current input parameters As the installation process proceeds you can view logging at C:\JDEdwardsPPack \E920_1 Modifying the default scripts After modifying the default scripts the post installati on wizard installation scripts are created; however it is assumed that they will run on the database server itself As a result you need to modify these scripts to ensure a seamless installation on the Oracle DB instance When you view the specified inst allation directory ( C:\JDEdwardsPPack \E920_1 ) you will see that a folder structure was created You will make the required modifications within this directory Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 17 Folder structure for the installation directory The modifications required to achieve a seamless installation are summarized as follows : • Change the 
dpump_dir1 entry in all scripts to DATA_PUMP_DIR. The Data Pump files need to be moved from the various directories on the deployment server install media to the DATA_PUMP_DIR directory on the RDS DB instance using DBMS_FILE_TRANSFER.PUT_FILE. You can also use the Amazon S3 integration feature now available with RDS for Oracle to move the dump files. For details, see Integrating Amazon RDS for Oracle with Amazon S3 using S3 integration. • Change the syntax of the CREATE TABLESPACE statements. Amazon RDS supports Oracle Managed Files (OMF) only for data files, log files, and control files. When creating data files and log files, you cannot specify physical file names. See Changing the Syntax of the CREATE TABLESPACE Statements in this document for additional details. • Rename the pristine data dump file and the import data script. Change the name of the pristine data dump file and also the import data script for the TEST environment and pristine environment. (The standard scripts change the import DIR and you are going to change the filename.) • Change the database grants. Change the database grants to remove "create any directory" as this is not a grant that works on Amazon RDS. See Changing the Database Grants in this document for additional details. Throughout this process, the updated scripts are located in the ORCL directory. You can run these scripts at any time by executing the following command. However, this is the master script for the database installation and you should NOT run it at this stage. cmd> InstallOracleDatabase.BAT If throughout this process you make any mistakes or encounter failures, run the following command. This command completely unloads and drops any database components that were created by the installation script. cmd> drop_db.bat You should back up all the scripts in the ORCL directory. If required, you can run the installer again to generate a set of new pristine scripts. Create the JDE Installer's standard data pump directories From SQL Developer connected to the Amazon RDS for Oracle database instance, perform the following steps. The Windows global search-and-replace commands were completed using Notepad++; however, you can use any text editor. Changing dpump_dir1 Use global search and replace for *.sql and *.bat files in the c:\JDEdwardsPPack\E920_1\ORCL directory: • Replace dpump_dir1 with DATA_PUMP_DIR • Replace log_dir1 with DATA_PUMP_DIR Find and replace the *.sql and *.bat files Now create the data pump directories 'log_dir1' and 'dpump_dir1' as shown: Sqldeveloper> exec rdsadmin.rdsadmin_util.create_directory('log_dir1'); Sqldeveloper> exec rdsadmin.rdsadmin_util.create_directory('dpump_dir1'); • Confirmation messages such as anonymous block completed are displayed; you can safely ignore them • You can confirm that the directories were created by running the following SQL statement: SELECT directory_name, directory_path FROM dba_directories; After replacing the .sql and .bat files, the code output changes. For example, this code: impdp %SYSADMIN_USER%/%SYSADMIN_PSSWD%@%CONNECT_STRING% DIRECTORY=dpump_dir1 DUMPFILE=RDBSPEC01.DMP,RDBSPEC02.DMP,RDBSPEC03.DMP,RDBSPEC04.DMP LOGFILE=log_dir1:Import_%USER%.log TABLE_EXISTS_ACTION=TRUNCATE EXCLUDE=USER Becomes this code: impdp %SYSADMIN_USER%/%SYSADMIN_PSSWD%@%CONNECT_STRING% DIRECTORY=DATA_PUMP_DIR DUMPFILE=RDBSPEC01.DMP,RDBSPEC02.DMP,RDBSPEC03.DMP,RDBSPEC04.DMP
LOGFILE=DATA_PUMP_DIR:Import_%USER%log TABLE_EXISTS_ACTION=TRUNCATE EXC LUDE=USER Changing the syntax of the CREATE TABLESPACE statements By default pristine create tablespace statements found in the files such as crtabsp_cont crtabsp_shnt and crtabsp_envnt look like the following example CREATE TABLESPACE &&PATH&&RELEASEt Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 20 logging datafile '&&TABLE_PATH \&&PATH&&RELEASEt01dbf' size 1500M '&&TABLE_PATH \&&PATH&&RELEASEt02dbf' size 1500M autoextend on next 60M maxsize 5000M extent management local autoallocate segment space management auto online; These statements must be modified to reflect the following example CREATE bigfile TABLESPACE &&PATH&&RELEASEt logging Datafile SIZE 1500M AUTOEXTEND ON MAXSIZE 5G; Note: The next step of applying updates is either a manual or a scripted task due to differences in many of the tablespaces The following updates must be applied crtabsp_cont create bigfile tablespace &&PATH&&RELEASEt logging datafile size 1500M AUTOEXTEND ON MAXSIZE 5G ; create bigfile tablespace &&PATH&&RELEASEi logging datafile size 1500M AUTOEXTEND ON MAXSIZE 5G ; crtabsp_shnt create bigfile tablespace sy&&RELEASEt logging datafile size 250M AUTOEXTEND ON MAXSIZE 750M; create bigfile tablespace sy&&RELEASEi logging datafile size 100M AUT OEXTEND ON MAXSIZE 750M; create bigfile tablespace svm&&RELEASEt logging datafile size 10M AUTOEXTEND ON MAXSIZE 150M; Amazon Web Services Installing JD Edwards EnterpriseOn e on Amazon RDS for Oracle 21 create bigfile tablespace svm&&RELEASEi logging datafile size 10M AUTOEXTEND ON MAXSIZE 150M; create bigfile tablespace ol&&RELEASEt logging datafile size 250M AUTOEXTEND ON MAXSIZE 350M; create bigfile tablespace ol&&RELEASEi logging datafile size 100M AUTOEXTEND ON MAXSIZE 150M; create bigfile tablespace dd&&RELEASEt logging datafile size 350M AUTOEXTEND ON MAXSIZE 450M; create bigfile tablespace dd&&RELEASEi logging datafile size 125M AUTOEXTEND ON MAXSIZE 750M; crtabsp_envnt create bigfile tablespace &&ENV_OWNERctli logging datafile size 1000M AUTOEXTEND ON MAXSIZE 1500M; create bigfile tablespace &&ENV_OWNERctlt logging datafile size 1000M AUTOEXTEND ON MAXSIZE 1500M; create bigfile tablespace &&ENV_OWNERdtai logging datafile size 1000M AUTOEXTEND ON MAXSIZE 4500M; create bigfile tablespace &&ENV_OWNERdtat logging datafile size 100 0M AUTOEXTEND ON MAXSIZE 4500M; Renaming the pristine data dump file and the Import data script These changes are made to ORCL\InstallOracleDatabaseBAT You are changing DTA to DDTA to load the DEMO data as opposed to the empty tables Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 22 approx line 363 PRISTINE @REM @set USER=%PS_DTA_USER% @set PSSWD=%PS_DTA_PSWD% @set FROMUSER=%PS_DTA_FROMUSER% @set LOAD_TYPE=DDTA @set JDE_DTA=%DATABASE_INSTALL_PATH% \demodta @echo ************************************************************ @echo create and load %USER% Business Data Tables @echo @echo "Calling Load for %PS_DTA_USER% load type DTA" >> logs\OracleStatustxt @echo "InstallOracleDatabase:#6 call load %PS_DTA_USER% DTA T STDTA @callLoadbat @if ERRORLEVEL 4 ( @goto abend approx line 554 – TESTDTA @rem @if "%RUN_MODE"=="INSTALL"( @set user=%DV_DTA_USER% @set PSSWD=%DV_DTA_PSWD% @set FROMUSER=%PS_DTA_FROMUSER% @set LOAD_TYPE=DDTA @set JDE_DTA=%DATABASE_INSTALL_PATH% \demodta @echo ************************************************************ @echo create and load %DV_DTA_USER% Business Data Tables @echo @echo 
"Calling Load for %DV_DTA_USER%load type DTA" >>logs\OracleStatustxt @call Loadbat @if ERRORLEVEL 4( @goto abend Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 23 Changing the database grants Create_dirsql has the following statement that you need to change Amazon RDS for Oracle does not support creating directories on the RDS instance so you must remove this statement Before grant create session create table create view create any directory select any dictionary to jde_role; After grant create session create table create view select any dictionary to jde_role; Advanced configuration Start an SQL Developer session to the RDS DB instance and log in as the administrative user ( jde92pocmaster ) Run the following SQL command SELECT directory_name directory_path FROM dba_directories ; This is the result: DIRECTORY_NAME DIRECTORY_PATH BDUMP /rdsdbdata/log/trace ADUMP /rdsdbdata/log/audit OPATCH_LOG_DIR /rdsdbbin/oracle/QOpatch OPATCH_SCRIPT_DIR /rdsdbbin/oracle/QOpatch DATA_PUMP_DIR /rdsdbdata /datapump OPATCH_INST_DIR /rdsdbbin/oracle/Opatch LOG_DIR1 /rdsdbdata/userdirs/01 DPUMP_DIR1 /rdsdbdata/userdirs/02 To see files in DATA_PUMP_DIR1 directory run the following Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 24 SELECT * FROM TABLE (RDSADMINRDS_FILE_UTILLISTDIR (‘DATA_PUMP_DIR 1’))ORDER BY mtime; SELECT * FROM TABLE (RDSADMINRDS_FILE_UTIL LISTDIR(‘LOG_DIR1’ )) ORDER BY mtime; The following command deletes a single file named Import_TESTCTL_CTLlog from the LOG_DIR1 directory stored on the Oracle DB instance exec utl_fileremove(‘LOG_DIR1’’Import_TESTCTL_CTLlog’); exec utl_filefremove('DATA_PUMP_DIR''Import_TESTCTL_CTLlog'); The DATA_PUMP_DIR is used in the following SQL command to generate deletes for all log files in LOG_DIR1DATA_PUMP_DIR SELECT ’exec utl_filefremove (‘DATA_PUMP_DIR ’’’’’|| filename|| ‘’’);’ FROM TABLE (RDSMDMINRDS_FILE_UTILLISTDIR (‘LOG_DIR1 ’)) WHERE filename LIKE ‘%log’ ORDER BY mtime; Moving DMP files When connected to e1local on the deployment server using SQL Developer run the following commands DROP DATABASE LINK jde92poc; CREATE DATABASE LINK jde92poc CONNECT TO jde92pocmaster IDENTIFIED BY "aws_Poc_Password" USING'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=jde92pocjde92 loca l)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=jde92po c)))'; SELECT directory_name directory_path FROM dba_directories; 'C:\Oracle64db \admin\e1local\dpdump'; These commands create the following: • A new database directory to read the dump files from the deployment server Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 25 • A database link to the Amazon RDS for Oracle DB instance to be a conduit to move the dump files from the deployment server to the Oracle DB instance Copying DMP files from an ORCL directory to a specified DATA_PUMP directory Locate *dmp files in the ORCL directory and copy them to C:\Oracle64db \admin\e1local\dpdump as defined in the previous e1local database directory ( DATA_PUMP_SRM ) You'll see that there are two DUMP_DTADMP files in the find results The one in demodta must be renamed DUMP_DDTADMP It’s important to name it exactly as specified because there are associated changes in the import scripts DUMP_DTADMP comes from ORCL\proddta The reason for this renaming is that one of the dump files (the larger one ) is for DEMO data which is imported into TESTDTA and PRISTINE while the smaller file (DUMP_DTADMP ) does not contain any data – just table and index 
structures. Now all of the *.dmp files that must be copied into the Oracle DB instance are in an e1local directory named DATA_PUMP_SRM. It's time to move these files to the RDS DB instance directory named DPUMP_DIR1 that you created. The following figure shows how this directory looks on the deployment server. DPUMP_DIR1 directory on deployment server In the Appendix you will find a script you can use to copy the .dmp files from the deployment server to the RDS DB instance via a database link. Run this script from SQL Developer connected to the e1local database. When these commands finish successfully, you can run the following command against the Oracle DB instance (jde92poc) to ensure that the files have arrived: SELECT substr(filename,1,30), type, filesize, mtime FROM TABLE (RDSADMIN.RDS_FILE_UTIL.LISTDIR('DPUMP_DIR1')) ORDER BY mtime; The following output indicates that the files were transferred correctly. Confirming files are transferred
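As an alternative to the database link approach above, the Amazon S3 integration feature mentioned earlier in this guide can be used to move the dump files. The following is a minimal sketch, not part of the original procedure: it assumes the S3_INTEGRATION option has been added to the instance's option group, that an IAM role granting access to the bucket is associated with the DB instance, and that the bucket name and prefix shown are hypothetical.

-- Download every object under the given prefix into the DPUMP_DIR1 directory on the RDS instance
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
         p_bucket_name    => 'jde92-dump-files',
         p_s3_prefix      => 'orcl/',
         p_directory_name => 'DPUMP_DIR1') AS task_id
FROM dual;

The call returns a task ID; progress can be checked in the task log written to the BDUMP directory, and the same LISTDIR query shown above confirms when the files have arrived.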
launch icon to start JD Edwards EnterpriseOne The JD Edwards EnterpriseOne login screen is displayed 2 Enter your UserID and password 3 For Environment enter DV920 4 For Role enter *ALL Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 29 Logging in to DV920 for testing 5 Log out and then log back in to the jdeplan environment and continue with the standard installation Because there are no further deviations from a standard installation beyond this point you can proceed to create an installation plan and run the installation workbench Follow the instructions in section 5 of the JD Edwards EnterpriseOne installation process “ Working with Installation Planner for an Install ” Validation and testing The s uccessful completion of the installation workbench will give you confidence that the Amazon RDS Oracle database installation is working Proceeding to install web servers and enterprise servers and connecting them to the Amazon RDS for Oracle DB instance a re some of the remaining installation steps Remember to delete the dmp files on the Amazon RDS instance to ensure that they do not contribute to the amount of storage you are using on the Amazon RDS instance Any files stored in database directories con tribute to the space you are using in the Amazon RDS instance Use the following statement to build the commands you need to run to delete the dmp files Run this statement only when you know that your installation succeeded SELECT 'exec utl_filefremove(''DPUMP_DIR1'''''||filename|| ''');' FROM table(RDSADMINRDS_FILE_UTILLISTDIR('DPUMP_DIR1')) Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 30 WHERE filename LIKE '%DMP' ORDER BY mtime; Running on Amazon RDS for Oracle Enterprise Edition This paper walks through the implementation of J D Edwards on Amazon RDS for Oracle standard edition only However if you are running or plan to run on Amazon RDS for Oracle Enterprise Edition there are some additional features you can leverage in the areas of high availability and security • Flashback Table recovers tables to a specific point in time This can be helpful when a logical corruption is limited to one table or a set of tables instead o f to the entire database At the time of this publication the Flashback Database feature is available only on self managed Oracle databased on Amazon EC2 and not in Amazon RDS for Ora cle • Transparent Data Encryption (TDE) protects data at rest for customers who have purchased the Oracle Advanced Security option TDE provides transparent encryption of stored data to support your privacy and compliance efforts Applications do not have to be modified and will continue to work as before Data is automatically encrypted before it is written to disk and autom atically decrypted when reading from storage Key management is built in which eliminates the task of creating managing and securing encryption keys You can choose to encrypt tablespaces or specific table columns using industry standard encryption algorithms including Advanced Encryption Standard (AES) and Data Encryption Standard (Triple DES) • Oracle Virtual Private Database (VPD) enables you to create security polici es to control database access at the row and column level Essentially Oracle VPD adds a dynamic WHERE clause to an SQL statement that is issued against the table view or synonym to which an Oracle VPD security policy was applied Oracle VPD enforces se curity to a fine level of granularity directly on database tables views or synonyms Because 
you attach security policies directly to these database objects and the policies are automatically applied whenever a user accesses data there is no way to bypa ss security • Fine Grained Auditing (FGA) can be understood as policy based auditing It enables you to specify the conditions necessary to generate an audit record FGA p olicies are programmatically bound to a table or view They allow you to audit an event only when conditions that you define are true; for example only if a specific column has been selected or updated Because every access to a table is not always record ed this creates more meaningful audit trails This can be critical given the often commercially sensitive nature of the data retained in the JD Edwards EnterpriseOne backend databases Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 31 As dbz1d instances class delivers a sustained all core frequency of u p to 40 GHz the fastest of any cloud instance; this can also reduce the costs for customers using core based licensing cost while running enterprise edition since they will need to have fewer cores now Conclusion This whitepaper described many of the ca pabilities and advantages of using AWS and Amazon RDS as the foundation for installing the JD Edwards EnterpriseOne application Specifically this whitepaper focused on a way of configuring Amazon RDS for Oracle as the underlying database for the JD Edwar ds EnterpriseOne application The whitepaper articulated all the steps for installing the JD Edwards EnterpriseOne application and the steps required to set up an Amazon RDS Oracle DB instance Having JD Edwards EnterpriseOne and Amazon RDS for Oracle run ning in the AWS Cloud enables you to enjoy the advantages of simple deployment high availability security scalability and many additional services supported by Amazon RDS and AWS Appendix: Dumping deployment service to RDS The following code snippet shows example usage of DBMS_FILE_TRANSFER package to transfer the datapump dumpfile for deployment service to RDS Oracle Begin DBMS_FILE_TRANSFERPUT_FILE( source_directory_object=> 'DATA_PUMP_SRM' source_file_name=> 'DUMP_CTLDMP' destination_directory_object=> 'DPUMP_DIR1' destination_file_name=> 'DUMP_CTLDMP' destination_database=> 'jde92poc' ); DBMS_FILE_TRANSFERPUT_FILE( source_directory_object=> 'DATA_PUMP_SRM' source_file_name=> 'RDBSPEC01DMP' destination_directory_ob ject=> 'DPUMP_DIR1' destination_file_name=> 'RDBSPEC01DMP' destination_database=> 'jde92poc' ); DBMS_FILE_TRANSFERPUT_FILE( source_directory_object=> 'DATA_PUMP_SRM' source_file_name=> 'RDBSPEC02DMP' destination_directory_object=> 'DPUMP_DIR1' destination_file_name=> 'RDBSPEC02DMP' Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 32 destination_database=> 'jde92poc' ); DBMS_FILE_TRANSFERPUT_FILE( source_directory_object=> 'DATA_PUMP_SRM' source_file_name=> 'RDBSPEC03DMP' destination_directory_object=> 'DPUMP_DIR1' destination_file_name=> ' RDBSPEC03DMP' destination_database=> 'jde92poc' ); DBMS_FILE_TRANSFERPUT_FILE( source_directory_object=> 'DATA_PUMP_SRM' source_file_name=> 'RDBSPEC04DMP' destination_directory_object=> 'DPUMP_DIR1' destination_file_name=> 'RDBSPEC04DMP' destination_database=> 'jde92poc' ); DBMS_FILE_TRANSFERPUT_FILE( source_directory_object=> 'DATA_PUMP_SRM' source_file_name=> 'DUMP_DTADMP' destination_directory_object=> 'DPUMP_DIR1' destination_file_name=> 'DUMP_DTADMP' destination_database=> 'jde92poc' ); DBMS_FILE_TRANSFERPUT_FILE( source_directory_object=> 
'DATA_PUMP_SRM' source_file_name=> 'DUMP_DDDMP' destination_directory_object=> 'DPUMP_DIR1' destination_file_name=> 'DUMP_DDDMP' destination_database=> 'jde92poc' ); DBMS_FILE_TRANSFERPUT_FILE( source_directory_object=> 'DATA_PUMP_SRM' source_file_name=> 'DUMP_OLDMP' destination_directory_object=> 'DPUMP_DIR1' destination_file_name=> 'DUMP_OLDMP' destination_database=> 'jde 92poc' ); DBMS_FILE_TRANSFERPUT_FILE( source_directory_object=> 'DATA_PUMP_SRM' source_file_name=> 'DUMP_SYDMP' destination_directory_object=> 'DPUMP_DIR1' destination_file_name=> 'DUMP_SYDMP' destination_database=> 'jde92poc' ); Amazon Web Services Installing JD Edwards EnterpriseOne on Amazon RDS for Oracle 33 DBMS_FILE_TRANSFERPUT_FILE( source_directory_object=> 'DATA_PUMP_SRM' source_file_name=> 'DUMP_DDTADMP' destination_directory_object=> 'DPUMP_DIR1' destination_file_name=> 'DUMP_DDTADMP' destination_database=> 'jde92poc' ); END; Contributors Contributors to this document include: •Marc Teichtahl AWS Solutions Architect •Shannon Moir Lead Engineer at Myriad IT •Saikat Banerjee Database Solutions Architect AWS Document revisions Date Description March 24 2021 Document review and addition of various new RDS Oracle capabilities Dec 2016 First publication
General
Security_at_Scale_Governance_in_AWS
ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 1 of 16 Security at Scale: Governance in AWS Analysis of AWS features that can alleviate onpremise challenges October 2015 This paper has been archived For the most recent security content see Best Practices for Security Identity and Compliance at https://awsamazoncom/architecture/securityidentitycomplianceArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 2 of 16 Table of C ontents Abstract 3 Introduction 3 Manage IT resources 4 Manage IT assets 4 Control IT costs 5 Manage IT security 6 Control physical access to IT resources 6 Control logical access to IT resources 7 Secure IT resources 8 Manage logging around IT resources 10 Manage IT performance 11 Monitor and respond to events 11 Achieve resiliency 12 ServiceGovernance Feature Index 13 Conclusion 15 References and Further Reading 16 ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 3 of 16 Abstract You can run nearly anything on AWS that you would run on onpremise: websites applications databases mobile apps email campaigns distributed data analysis media storage and private networks The services AWS provides are designed to work together so that you can build complete solutions An often overlooked benefit of migrating workloads to AWS is the ability to achieve a higher level of security at scale by utilizing the many governanceenabling features offered For the same reasons that delivering infrastructure in the cloud has benefits over onpremise delivery cloudbased governance offers a lower cost of entry easier operations and improved agility by providing more oversight security control and central automation This paper describes how you can achieve a high level of governance of your IT resources using AWS In conjunction with the AWS Risk and Compliance whitepaper and the Auditing Security Checklist whitepaper this paper can help you understand the security and governance features built in to AWS services so you can incorporate security benefits and best practices in building your integrated environment with AWS Introduction Industry and regulatory bodies have created a complex array of new and legacy laws and regulation s mandating a wide range of security and organizational governance measures As such research firms estimate that many companies are spending as much as 75% of their IT dollars to manage infrastructure and spending only 25% of their IT dollars on IT aspects that are directly related to the business their companies are providing One of the key ways to improve this metric is to more efficiently address the backend IT governance requirements An easy and effective way to do that is by leveraging AWS’s out ofthebox governance features While AWS offers a variety of IT governanceenabling features it can be hard to decide how to start and what to implement This paper looks at the common IT governance domains by providing the use case ( or the on premise challenge) the AWS enabling features and the associated governance value propositions of using those features This document is designed to help you achieve the objectives of each IT governance domain1 This paper follows the approach of the major domains of comm onlyimplemented IT governance frameworks (eg CoBIT ITIL COSO CMMI etc) ; however the IT governance domains through which the paper is organized are generic to allow any customer to use it to evaluate the governance features of using AWS versus what can be done with your 
onpremise resources and tools The following IT governance domains are discussed through a “usecase ” approach : I want to better 1 While this paper features a robust list of the governanceenabling features because new features are consistently being developed it is not inclusive of all the features available Additional tutorials developer tools documentation can be found at http://awsamazoncom/resources/ Manage my IT resources Manage my IT assets Control my IT costsManage my IT security Control logical access Control physical access Secure IT resources Log IT activitiesManage my IT performance Monitor IT events Achieve IT resiliencyArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 4 of 16 Manage IT resources Manage IT assets Identifying and managing your IT assets is the first step in effective IT governance IT assets can range from the high end routers switches servers hosts and firewalls to the applications services operating systems and other software assets deployed in your network An updated inventory of hardware and software assets is vital for decisions on upgrades and purchases tracking warranty status or for troubleshooting and security reasons It is becoming a business imperative to have an accurate asset inventory listing to provide on demand views and comprehensive reports Moreover comprehensive a sset inventories are specifically required for certain compliance regulations For example FISMA SOX PCI DSS and HIPAA all mandate accurate asset inventories as a part of their requirements However the nature of pieced together onpremise resources ca n make maintaining this listing arduous at best and impossible at worst Often organizations have to employ third party solutions to enable automation of the asset inventory listing and even then it is not always possible to see a detailed inventory of every type of asset on a single console Using AWS there are multiple features available for you to quickly and easily obtain an accurate inventory of your AWS IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Account Activity page Provides a sum marized listing of IT resources by detailing usage of each service by region Learn more Amazon Glacier vault inventory Provides Glacier data inventory by showing all IT resources in Glacier Learn more AWS CloudHSM Provides virtual and physical control over encryption keys by providing customer dedicated HSMs for key storage Learn more AWS Data Pipeline Task Runner Provides automated processing of tasks by polling the AWS Data Pipeline for tasks and then performing and reporting status on those tasks Learn more AWS Management Console Provides a real time inventory of assets and data by showing all IT resources running in AWS by service Learn more AWS Storage Gateway APIs Provide the capability to programmatically inventory assets and data by programming interfaces tools and scripts to manage reso urces Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 5 of 16 Control IT c osts You can better control your IT costs by obtaining resources in the most cost effective way by understand ing the costs of your IT services However managing and tracking the costs and ROI associated with IT resource spend onpremise can be difficult and inaccurate because the calculations are so complex; capacity planning predictions of use purchasing costs depreciation 
cost of capital and facilities costs are just a few aspects that make total cost of ownership difficult to calculate Using AWS there are multiple features available for you to easily and accurately understand and control your IT resource costs U sing AWS you can achieve cost savings of up to 80% compared to the equ ivalent on premises deployments2 Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Account Activity page Provides an anytime view of spending on IT resources by showing resources used by service Learn more Amazon EC2 i dempotency instance launch Helps p revent erroneous launch of resources and incurrence of additional costs by preventing timeouts or connection errors from launching additional instances Learn more Amazon EC2 r esource tagging Provides association between resource expenditures and business units by applying custom searchable labels to compute resources Learn more AWS Account Billing Provides easy touse billing features that help you monitor and pay your bill by detailing resources used and associated actual compute costs incurred Learn more AWS Management Console Provides a one stop shop view for cost drivers by showing all IT resources running in AWS by service including actual costs and run rate Learn more AWS service pricing Provides definitive awareness of AWS IT resource rates by providing pricing for each AWS product and specific pricing characteristics Learn more AWS Trusted Advisor Helps o ptimize cost of IT resources by identifying unused and idle resources Learn more Billing Al arms Provides proactive alerts on IT resource spend by sending notifications of spending activity Learn more Consolidated billing Provides centralized cost control and cross account cost visibility by combining multiple AWS accounts into one bill Learn more 2 See the Total Cost of Ownership Whitepaper for more information on overall cost savings using AWS ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 6 of 16 Payasyougo pricing Provides computing resources and services that you can use to build applications within minutes at pay asyougo pricing with no up front purchase costs or ongoing maintenance costs by automatically scaling into multiple servers when demand for your application increases Learn more Manage IT security Control p hysical access to IT resources Physical access management is a key component of IT governance programs In addition to the locks security alarms access controls and surveillance videos that define the traditional components of physical security the electronic controls over physical access are also paramount to effective physical security The traditional physical security industry is in rapid transition and areas of specialization are surfacing making physical security vastly more complex As the onpremise physical security considerations and controls have become more complex there is an increased need for uniquely qualified and specialized IT security professionals to manage the significant effort required to achieve effective physical control around access credentials for cards/card readers controllers and system servers for hosting data around physical security Using AWS you can easily and effectively outsource controls related to physical security of your AWS infrastructure to AWS specialists with the skillsets and resources needed to secure the physical environment AWS has multiple different 
independent auditors validate the data center physical security throughout the year attesting to the design and detailed testing of the effectiveness of our physical security controls Learn more about the AWS audit programs and associated physical security controls below: AWS governance enabling feature How you get security at scale AWS SOC 1 physical access controls Provides transparency into the controls in place that prevent unauthorized access to data centers Controls are properly designed tested and audited by an independent audit firm Learn more AWS SOC 2 Security physical access controls Provides transparency into the controls in place that p revent unauthorized access to data centers Controls are properly designed tested and audited by an independent audit firm Learn more AWS PCI DSS physical access controls Provides transparency into the controls in place that prevent unauthorized access to data centers relevant to the Payment Card Industry Data Security Standard Controls are properly designed tested and audited by an independent audit firm Learn more AWS ISO 27001 physical access controls Provides transparency into the controls and processes in place that prevent unauthorized access to data centers relevant to the ISO 27002 security best practice s tandard Controls are properly designed tested and audited by an independent audit firm Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 7 of 16 AWS FedRAMP physical access controls Provides transparency into the controls and processes in place that prevent unauthorized access to data centers relevant to the NIST 800 53 best practice standard Controls are properly des igned tested and audited by a government accredited independent a udit firm Learn more Control logical a ccess to IT resources One of the primary objectives of IT governance is to effectively manage logical access to computer systems and data However many organizations are struggling to scale their onpremise solutions to meet the growing and continuously changing number of considerations and complexities around logical access including the ability to establish a rule of least privilege manage permissions to resources address changes in roles and information needs and the growth of sensitive data Major persistent challenges for managing logical access in an onpremise environment are providing users with access based on:  Role (ie internal users contractors outsiders partners etc)  Data classification (ie confidential internal use only private public etc)  Data type (ie credentials personal data contact information workrelated data digital certificates cognitive passwords etc) There are multiple control features AWS offers you effectively manage your logical access based on a matrix of use cases anchored in least privilege Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon S3 Access Control Lists (ACLs) Provides central permissions and conditions by adding specific conditions to control how a user can use AWS such as time of day their originating IP address whether they are using SSL or whether they have authenticated with a Multi Factor Authentication device Learn more here and here Amazon S3 Bucket Policies Provides the ability to create conditional rules for managing access to their buckets and objects by allowing you to restrict access based on account as well as request based attributes such as HTTP 
referrer and IP address Learn more Amazon S3 Query String Authentication Provides the ability to give HTTP or browser access to resources that would normally require authentication by using the signature in the query string to secure the request Learn more AWS CloudTrail Provides logging of API or console actions (eg log if someone changes a bucket policy stops and instance etc) allowing advanced monitor ing capabilities Learn more AWS IAM Multi Fact or Authentication (MFA) Provides enforcement of MFA across all resources by requiring a token to sign in and access resources Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 8 of 16 AWS IAM password policy Provides the ability to manage the quality and controls around your users’ passwords by allowing you to set a password policy for the passwords used by IAM users that specifies that passwords must be of a certain length must include a selection of charact ers etc Learn more AWS IAM Permissions Provides the ability to easily manage permissions by letting you specify who has access to AWS resources and wha t actions they can perform on those resources Learn more AWS IAM Policies Enables you to achieve detailed least privilege access management by allowing you to create multiple users within your AWS account assign them security credentials and manage their permissions Learn more AWS IAM Roles Provides the ability to temporarily delegate access to users or services that normally don't have access to your AWS resources by defining a set of permissions to access the resources that a user or service needs Learn more AWS Trusted Advisor Provides automated security management assessment by identifying and escalating possible security and permission issues Learn more Secure IT resources Securing IT resources is the cornerstone of IT governance programs However for onpremise environments there is a litany of security steps that must be taken when a new server is brought online For example firewall and access control policies must be updated the newly created server image must be verified to be in compliance with security policy and all software packages have to be up to date Unless these security tasks are automated and delivered in a way that can keep up with the highly dynamic needs of the business organizations working solely with traditional governance approaches will either cause users to work around the security controls or will cause costly delays for the business AWS provides multiple security features that enable you to easily and effectively secure your IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon Linux AMIs Provides the ability to c onsistently deploy a " gold" (hardened) image by developing a private image to be used in all instance deployments Learn more Amazon EC2 Dedicated Instances Provides a private isolated virtual network and ensures that your Amazon EC2 compute instances are be isolated at the hardware level and launching these instances into a VPC Learn more Amazon EC2 instance launch wizard Enables consi stent launch process by providing restrictions around machine images available when launching instances Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 9 of 16 Amazon EC2 security groups Provides granular control over inbound and outbound traffic by acting as a firewall that controls the traffic 
for one or more instances Learn more Amazon Glacier archives Provides inexpensive long term storage service for securing and durably storage for data archiving and backup using AES 256 bit encryption by default Learn more Amazon S3 Client Side Encryption Provides th e ability to encrypt your data before sending it to Amazon S3 by building your own library that encrypts your objects data on the client side before uploading it to Amazon S3 The AWS SDK for Java can also automatically encrypt your data before uploading i t to Amazon S3 Learn more Amazon S3 Server Side Encryption Provides encryption of objects at rest and keys managed by AWS by using AES 256 bit encryption for Amazon S3 data Learn more Amazon VPC Provides a virtual network closely resembling a traditional network that is operated on premise but with benefits of usi ng the scalable infrastructure of AWS Allows you to create logically isolated section s of AWS where you can launch AWS resources in a virtual network that you define Learn more Amazon VPC logical isolation Provides virtual isolation of resources by allowing machine images to be isolated from other networked resources Lear n more Amazon VPC network ACLs Provides ‘firewall type’ isolation for associated subnets by controlling inbound and outbound traffic at the subnet level Learn more Amazon VPC private IP address es Helps p rotect private IP addresses from internet exposure by routing their traffic through a Network Address Translation (NAT) instance in a public subnet Learn more Amazon VPC security groups Provides ‘firewall type’ isolation for associated Amazon EC2 instances by controlling inbound and outbound traffic at the instance level Learn more AWS CloudFormation templates Provides the ability to c onsistently deploy a specific machine image along with other resources and conf igurations by provisioning infrastructure with scripts Learn more AWS Direct Connect Removes need for a publi c Internet connection to AWS by establishing a dedicated network connection from your premises to AWS ’ datacenter Learn more Onpremise hardware/software VPN connections Provides granular control over network security by allowing secure connectio ns from existing network to AWS Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 10 of 16 Virtual private gateways Provides granular control over network security by providing a way to create a Hardware VPN Connection to your VPC Learn more Manage logging around IT resources A major enabler of securing IT is the logging around IT resources Logging is critically important to IT governance for a variety of use cases including but not limited to: detecting/tracking suspicious behavior supporting forensic analysis meeting compliance requirements supporting IT/networking maintenance and operations managing/reducing IT security costs monitoring service levels and supporting internal business processes Organizations are increasingly dependent on effective log management to support core governance functions including cost management service level and line ofbusiness application monitoring and other IT security and compliance focused activities The SANS Log Management Survey consistently shows that organizations are continuously seeking more uses from their logs but are encountering friction in their ability to achieve that use cases using onpremise resources to collect and analyze those logs With more log types to collect and analyze from different IT resources organizations are challenged by the 
manual overhead associated with normalizing log data that is in widely different formats as well as with the searching correlating and reporting functionalities Log management is a key capability for security monitoring compliance and effective decisionmaking for the tens or hundreds ofthousands of activities each day Using AWS there are multiple logging features that enable you to effectively log and track the use of your IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon CloudFront access log s Provides log files with information about end user access to your objects Logs can be distributed directly to a specific Amazon S3 bucket Learn more Amazon RDS database logs Provides a way to monitor a number of log files generated by your Amazon RDS DB Instances Used to diagnose trouble shoot and fix database configuration or performance issues Learn more Amazon S3 Object Expiration Provides automated log expiration by schedul ing removal of objects after a defined time period Learn more Amazon S3 server access logs Provides logs of access requests with details about th e request such as the request type the resource with which the request was made and the time and date that the request was processed Learn more AWS CloudTrail Provides log s of security actions done via the AWS Management Console or APIs Learn more ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 11 of 16 Manage IT performance Monitor and respond to event s IT performance management and monitoring has become a strategically important part of any IT governance program IT monitoring is an essential element of governance that allows you to prevent detect and correct IT issues that may impact performance and/or security The key governance challenge in onpremise environments around IT performance management is that you are faced with multiple monitoring systems to manage every layer of your IT resources and the mix of proprietary management tools and IT processes results in a systemic complexity that can at best slow response times and at worst impact the effectiveness of your IT performance monitoring and management Moreover the increasing complexity and sophistication of security threats mean that event monitoring and response capabilities need to continuously and rapidly evolve to address emerging threats As such onpremise performance management is continuously faced with growing challenges around infrastructure procurement scalability ability to simulate test conditions across multiple geographies etc Using AWS there are multiple monitoring features that enable you to easily and effectively monitor and manage your IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon Cloud Watch Provides statistical data you can use to view analyze and set alarm s on the operational behavior of your instances These metrics include CPU utilization network traffic I/O and latency Learn more Amazon Cloud Watch alarms Provides consistent alarming for critical events by providing custom metrics alarms and notifications for event s Learn more Amazon EC2 i nstance status Provides instance status checks that summarize results of automated tests and provides information about c ertain acti vities that are scheduled for your instances Uses automated checks 
to detect whether specific issues are affecting your instances Learn more Amazon Incident Management Team Provides continuous incident detection monitoring and management with 24 7365 staff operators to support detection diagnostics and resolution of certain security events Learn more Amazon S3 TCP selective acknowledgement Provides the ability to improve recovery time after a large number of packet losses Learn more Amazon Simple Notification Service Provides consistent alarming for critical events by managing the delivery of messages to subscribing endpoints or clients Learn more AWS Elastic Beanstalk Provides ability to monitor application deployment details of capacity provisioning load balancing auto scaling and application health monitoring Learn more Elastic Load Balancing Provides the ability to automatically distribute your incoming application traffic across multiple Amazon EC2 instances by detecting ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 12 of 16 over utilized instances and rerouting traffic to underutilized instances Learn more Achieve resiliency Data protection and disaster recovery planning should be a priority component of IT governance for all organizations Arguably the value of DR is not in question; every organization is concerned about its ability to get back up and running after an event or disaster But implementing governance around IT resource resiliency can be expensive and complex as well as tedious and timeconsuming Organizations are faced with a growing number of events that can cause unplanned downtime and operational blockers These events can be caused by technical problems (eg viruses data corruption human error etc) or natural phenomena (eg fires floods power failures weatherrelated outages etc) As such organizations are faced with increasing costs and complexity in planning testing and operating onpremise failover sites because of continual data growth In the face of these challenges cloud computing’s server virtualization enables the quality resiliency programs to be feasible and costeffective Using AWS there are multiple features that enable you to easily and effectively achieve resiliency for your IT resources Those features associated ‘how to’ guidance and links to learn more about the feature are provided below: AWS governance enabling feature How you get security at scale Amazon EBS snapshots Provides highly available highly reliable predictable storage volumes with incremental point in time backup control of server data Learn more Amazon RDS Multi AZ Depl oyments Provides the ability to safeguard your data in the event with automated availability controls homogenous resilient architecture Learn more AWS Import/Export Provides the ability to move massive amounts of data locally by creating import and export jobs quickly using Amazon’s high speed internal network Learn more AWS Storage Gateway Provides seamless and secure integration between your on premises IT environment and AWS's storage infrastructure by scheduling snapshots that the gateway stores in Amazon S3 in the form of Amazon EBS snapshots Learn more AWS Trusted Advisor Provides automated performance management and availability control by identifying options to increase the availability and redundancy of your AWS application Learn more Extensive 3rd Party Solutions Provides secure data storage and automated availability control by easily connecting you with a market of applications of tools Learn more Managed AWS No SQL/SQL Database Services 
Provides secure and durable data storage automatically replicating data items across multiple Availability Zones in a Region to provide built in high av ailability and data durability Learn more:  Amazon D ynamo DB ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 13 of 16  Amazon RDS Multi region deployment Provides geo diversity in compute locations power grids fault lines etc providing a variety of locations Learn more Route 53 health checks and DNS failover Monitors availability of stored backup data by allowing you to configure DNS failover in active active active passive and mixed configurations to improve the availability of your application Learn more Service Governance Feature Index The information above is presented by governance domain For your reference a summary of governance feature by major AWS services is described in the table below: AWS Service Governance Feature Amazon EC2 Amazon EC2 idempotency instance launch Amazon EC2 resource tagging Amazon Linux AMIs Amazon EC2 Dedicated Instances Amazon EC2 instance launch wizard Amazon EC2 security groups Elastic Load Balancing Elastic Load Balancing traffic distribution Amazon VPC Amazon VPC Amazon VPC logical isolation Amazon VPC network ACLs Amazon VPC private IP addresses Amazon VPC security groups Onpremise hardware/software VPN connections Amazon Route 53 Amazon Route 53 latency resource record sets Route 53 health Checks and DNS failover AWS Direct Connect AWS Direct Connect Amazon S3 Amazon S3 Access Control Lists (ACLs) ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 14 of 16 Amazon S3 Bucket Policies Amazon S3 Query String Authentication Amazon S3 Client Side Encryption Amazon S3 Server Side Encryption Amazon S3 Object Expiration Amazon S3 server access logs Amazon S3 TCP selective acknowledgement Amazon S3 TCP window scaling Amazon Glacier Amazon Glacier vault inventory Amazon Glacier archives Amazon EBS Amazon EBS snapshots AWS Import/Export AWS Import/Export bulk datano… AWS Storage Gateway AWS Storage Gateway integration AWS Storage Gateway APIs Amazon CloudFront Amazon CloudFront Amazon CloudFront access logs Amazon RDS Amazon RDS database logs Amazon RDS Multi AZ Deployments Managed AWS No SQL/SQL Database Services Amazon Dynamo DB Managed AWS No SQL/SQL Database Services AWS Management Console Account Activity page AWS Account Billing AWS service pricing AWS Trusted Advisor Billing Alarms Consolidated billing Payasyougo pricing ArchivedAmazon Web Services – Security at Scale: Governance in AWS October 2015 Page 15 of 16 AWS CloudTrail Amazon Incident Management Team Amazon Simple Notification Service Multi region deployment AWS Identity and Access Management (IAM) AWS IAM Multi Factor Authentication (MFA) AWS IAM password policy AWS IAM Permissions AWS IAM Policies AWS IAM Roles Amazon CloudWatch AWS CloudWatch Dashboard Amazon CloudWatch alarms AWS Elastic Beanstalk AWS Elastic Beanstalk monitoring AWS CloudFormation AWS CloudFormation templates AWS Data Pipeline AWS Data Pipeline Task Runner AWS CloudHSM CloudHSM key storage AWS Marketplace Extensive 3rd Party Solutions Data Centers AWS SOC 1 physical access controls AWS SOC 2 Security physical access controls AWS PCI DSS physical access controls AWS ISO 27001 physical access controls AWS FedRAMP physical access controls Conclusion The primary focus of IT Governance is around managing resources security and performance in order to deliver value in strategic alignment with the goals of the 
business. Given the rate of growth and increasing complexity in technology, it is increasingly challenging for on-premises environments to scale to provide the granular controls and features needed to deliver quality IT governance in a cost-efficient manner. For the same reasons that delivering infrastructure in the cloud has benefits over on-premises delivery, cloud-based governance offers a lower cost of entry, easier operations, and improved agility by providing more oversight and automation that enables organizations to focus on their business.

References and Further Reading

What can I do with AWS? http://aws.amazon.com/solutions/aws-solutions/

How can I get started with AWS? http://docs.aws.amazon.com/gettingstarted/latest/awsgsg-intro/gsg-aws-intro.html
AWS Storage Services Overview

A Look at Storage Services Offered by AWS

December 2016

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

• Abstract
• Introduction
• Amazon S3 – Usage Patterns; Performance; Durability and Availability; Scalability and Elasticity; Security; Interfaces; Cost Model
• Amazon Glacier – Usage Patterns; Performance; Durability and Availability; Scalability and Elasticity; Security; Interfaces; Cost Model
• Amazon EFS – Usage Patterns; Performance; Durability and Availability; Scalability and Elasticity; Security; Interfaces; Cost Model
• Amazon EBS – Usage Patterns; Performance; Durability and Availability; Scalability and Elasticity; Security; Interfaces; Cost Model
• Amazon EC2 Instance Storage – Usage Patterns; Performance; Durability and Availability; Scalability and Elasticity; Security; Interfaces; Cost Model
• AWS Storage Gateway – Usage Patterns; Performance; Durability and Availability; Scalability and Elasticity; Security; Interfaces; Cost Model
• AWS Snowball – Usage Patterns; Performance; Durability and Availability; Scalability and Elasticity; Security; Interfaces; Cost Model
• Amazon CloudFront – Usage Patterns; Performance; Durability and Availability; Scalability and Elasticity; Security; Interfaces; Cost Model
• Conclusion
• Contributors
• References and Further Reading – AWS Storage Services; Other Resources

Abstract

Amazon Web Services (AWS) is a flexible, cost-effective, easy-to-use cloud computing platform. This whitepaper is designed to help architects and developers understand the different storage services and features available in the AWS Cloud. We provide an overview of each storage service or feature and describe usage patterns, performance, durability and availability, scalability and elasticity, security, interfaces, and the cost model.

Introduction

Amazon Web Services (AWS) provides low-cost data storage with high durability and availability. AWS offers storage choices for backup, archiving, and disaster recovery use cases and provides block, file, and object storage. In this whitepaper, we examine the following AWS Cloud storage services and features.

• Amazon Simple Storage Service (Amazon S3) – A service that provides scalable and highly durable object storage in the cloud
• Amazon Glacier – A service that provides low-cost, highly durable archive storage in the cloud
• Amazon Elastic File System (Amazon EFS) – A service that provides scalable network file storage for Amazon EC2 instances
• Amazon Elastic Block Store (Amazon EBS) – A service that provides block storage volumes for Amazon EC2 instances
• Amazon EC2 Instance Storage – Temporary block storage volumes for Amazon EC2 instances
• AWS Storage Gateway – An on-premises storage appliance that integrates with cloud storage
• AWS Snowball – A service that transports large amounts of data to and from the cloud
• Amazon CloudFront – A service that provides a global content delivery network (CDN)

Amazon S3

Amazon Simple Storage Service (Amazon S3) provides developers and IT teams secure, durable, highly scalable object storage at a very low cost.1 You can store and retrieve any amount of data, at any time, from anywhere on the web, through a simple web service interface. You can write, read, and delete objects containing from zero to 5 TB of data. Amazon S3 is highly scalable, allowing concurrent read or write access to data by many separate clients or application threads.
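To make the object interface concrete, the following minimal sketch uses the AWS SDK for Python (Boto3) to write, read, and delete a small object. The bucket name and object key are placeholders, and the bucket is assumed to already exist in your account.

```python
# Minimal sketch of the S3 object interface: write, read, and delete an object.
# The bucket name and key below are placeholders; the bucket must already exist.
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"
key = "examples/hello.txt"

# Write (PUT) an object.
s3.put_object(Bucket=bucket, Key=key, Body=b"Hello, Amazon S3")

# Read (GET) the object back and print its contents.
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
print(body.decode("utf-8"))

# Delete the object.
s3.delete_object(Bucket=bucket, Key=key)
```

Because the same calls work whether a bucket holds a handful of objects or billions, this is also the pattern that multiple threads, applications, or clients can use concurrently to drive aggregate throughput against a single bucket.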
Amazon S3 offers a range of storage classes designed for different use cases, including the following:

• Amazon S3 Standard, for general-purpose storage of frequently accessed data
• Amazon S3 Standard – Infrequent Access (Standard-IA), for long-lived but less frequently accessed data
• Amazon Glacier, for low-cost archival data

Usage Patterns

There are four common usage patterns for Amazon S3. First, Amazon S3 is used to store and distribute static web content and media. This content can be delivered directly from Amazon S3, because each object in Amazon S3 has a unique HTTP URL. Alternatively, Amazon S3 can serve as an origin store for a content delivery network (CDN), such as Amazon CloudFront. The elasticity of Amazon S3 makes it particularly well suited for hosting web content that requires bandwidth for addressing extreme demand spikes. Also, because no storage provisioning is required, Amazon S3 works well for fast-growing websites hosting data-intensive, user-generated content, such as video and photo sharing sites.

Second, Amazon S3 is used to host entire static websites. Amazon S3 provides a low-cost, highly available, and highly scalable solution, including storage for static HTML files, images, videos, and client-side scripts in formats such as JavaScript.

Third, Amazon S3 is used as a data store for computation and large-scale analytics, such as financial transaction analysis, clickstream analytics, and media transcoding. Because of the horizontal scalability of Amazon S3, you can access your data from multiple computing nodes concurrently without being constrained by a single connection.

Finally, Amazon S3 is often used as a highly durable, scalable, and secure solution for backup and archiving of critical data. You can easily move cold data to Amazon Glacier using lifecycle management rules on data stored in Amazon S3. You can also use Amazon S3 cross-region replication to automatically copy objects across S3 buckets in different AWS Regions asynchronously, providing disaster recovery solutions for business continuity.2

Amazon S3 doesn't suit all storage situations. The following list presents some storage needs for which you should consider other AWS storage options.

File system – Amazon S3 uses a flat namespace and isn't meant to serve as a standalone, POSIX-compliant file system. Instead, consider using Amazon EFS as a file system. (Amazon EFS)

Structured data with
query Amazon S3 doesn’t offer query capabilities to retrieve specific objects When you use Amazon S3 you need to know the exact bucket name and key for the files you want to retrieve from the service Amazon S3 can ’t be used as a database or search engine by it self Instead you can pair Amazon S3 with Amazon DynamoDB Amazon CloudSearch or Amazon Relational Data base Service (Amazon RDS) to index and query metadata about Amazon S3 buckets and objects Amazon Dynam oDB Amazon RDS Amazon CloudSearch Rapidly changing data Data that must be updated very frequently might be better served by storage solutions that take into acco unt read and write latencies such as Amazon EBS volumes Amazon RDS Amazon DynamoDB Amazon EFS or relational databases running on Amazon EC2 Amazon EBS Amazon EFS Amazon DynamoDB Amazon RDS Archival data Data that requires encrypted archival storage with infrequent read access with a long recovery time objective (RTO) can be stored in Amazon Glacier more costeffectively Amazon Glacier Dynamic website hosting Although Amazon S3 is ideal for static content websites dynamic websites that depend on database interaction or use serv erside scripting should be hosted on Amazon EC2 or Amazon EFS Amazon EC2 Amazon EFS Performance In scenarios where you use Amazon S3 from within Amazon EC2 in the same Region access to Amazon S3 from Amazon EC2 is designed to be fast Amazon S3 is also designed so that server side latencies are insignificant relative to Internet latencies In additi on Amazon S3 is built to scale storage requests and numbers of users to support an extremely large number of web scale applications If you access Amazon S3 using multiple threads multiple applications or multiple clients concurrently total Amazon S3 aggregate throughput typically scales to rates that far exceed what any single server can generate or consume ArchivedAmazon Web Services – AWS Storage Services Overview Page 4 To improve the upload performance of large objects (typically over 100 MB) Amazon S3 offers a multipart upload command to upload a single object as a set of parts 3 After all parts of your object are uploaded Amazon S3 assembles these parts and creates the object Using multipart upload you can get improved throughput and quick recovery from any network issues Another benefit of using multipart upload is that you can upload multiple parts of a single object in parallel and restart the upload of smaller parts instead of restarting the upload of the entire larg e object To speed up access to relevant data many developers pair Amazon S3 with a search engine such as Amazon CloudSearch or a database such as Amazon DynamoDB or Amazon RDS In these scenarios Amazon S3 stores the actual information and the search e ngine or database serves as the repository for associated metadata (for example the object name size keywords and so on) Metadata in the database can easily be indexed and queried making it very efficient to locate an object’s reference by using a se arch engine or a database query This result can be used to pinpoint and retrieve the object itself from Amazon S3 Amazon S3 Transfer Acceleration enables fast easy and secure transfer of files over long distances between your client and your Amazon S3 bucket It leverages Amazon CloudFront globally distributed edge locations to route traffic to your Amazon S3 bucket over an Amazon optimized network path To get started with Amazon S3 Transfer Acceleration you first must enable it on an Amazon S3 bucket Then modify your Amazon S3 PUT and GET 
requests to use the s3 accelerate endpoint domain name (<bucketname>s3 accelerateamazonawscom) The Amazon S3 bucket can still be accessed using the regular endpoint Some customers have measured performance impro vements in excess of 500 percent when performing intercontinental uploads Durability and Availability Amazon S3 Standard storage and Standard IA storage provide high level s of data durability and availability by automatically and synchronously storing your data across both multiple devices and multiple facilities within your selected geographical region Error correction is built in and there are no single points of failure Amazon S3 is designed to sustain the concurrent loss of data in two facilities making it very well suited to serve as the primary data storage for ArchivedAmazon Web Services – AWS Storage Services Overview Page 5 mission critical data In fact Amazon S3 is designed for 99999999999 percent (11 nines) durability per o bject and 9999 percent availability over a one year period Additionally you have a choice of enabling cross region replication on each Amazon S3 bucket Once enabled cross region replication automatically copies objects across buckets in different AWS Regions asynchronously providing 11 nines of durability and 4 nines of availability on both the source and destination Amazon S3 objects Scalability and Elasticity Amazon S3 has been designed to offer a very high level of automatic scalability and elasti city Unlike a typical file system that encounters issues when storing a large number of files in a directory Amazon S3 supports a virtually unlimited number of files in any bucket Also unlike a disk drive that has a limit on the total amount of data th at can be stored before you must partition the data across drives and/or servers an Amazon S3 bucket can store a virtually unlimited number of bytes You can store any number of objects (files) in a single bucket and Amazon S3 will automatically manage s caling and distributing redundant copies of your information to other servers in other locations in the same Region all using Amazon’s high performance infrastructure Security Amazon S3 is highly secure It provides multiple mechanisms for fine grained control of access to Amazon S3 resources and it supports encryption You can manage access to Amazon S3 by granting other AWS accounts and users permission to perform the resource operations by writing an access policy 4 You can protect Amazon S3 data at rest by using serve rside encryption 5 in which you request Amazon S3 to encrypt your object before it’s written to disks in data centers and decrypt it when you download the object or by using client side encryption 6 in which you encrypt your data on the client side and upload the encrypted data to Amazon S3 You can protect the data in transit by using Secure Sockets Layer (SSL) or client side encryption ArchivedAmazon Web Services – AWS Storage Services Overview Page 6 You can use versioning to preserve retrieve and restore every version of every object stored in your Amazon S3 bucket With versioning you can easily recover from both unintended user actions and application failures Additionally you can add an optional layer of security by enabling Multi Factor Authentication (MFA) Delete for a bucket 7 With this option enabled for a bucket two forms of authentication are re quired to change the versioning state of the bucket or to permanently delete an object version: valid AWS account credentials plus a six digit code (a single use time based password) from 
a physical or virtual token device To track requests for access t o your bucket you can enable access logging 8 Each access log record provides details about a single access request such as the requester bucket name request time request action response status and error code if any Access log information can be useful in security and access audits It can al so help you learn about your customer base and understand your Amazon S3 bill Interfaces Amazon S3 provides standards based REST web service application program interfaces (APIs) for both management and data operations These APIs allow Amazon S3 objects to be stored in uniquely named buckets (top level folders) Each object must have a unique object key (file name) that serves as an identifier for the object within that bucket Although Amazon S3 is a web based object store with a flat naming structure ra ther than a traditional file system you can easily emulate a file system hierarchy (folder1/folder2/file) in Amazon S3 by creating object key names that correspond to the full path name of each file Most developers building applications on Amazon S3 use a higher level toolkit or software development kit (SDK) that wraps the underlying REST API AWS SDKs are available for Android Browser iOS Java NET Nodejs PHP Python Ruby and Go The integrated AWS Command Line Interface (AWS CLI) also provides a set of high level Linux like Amazon S3 file commands for common operations such as ls cp mv sync and so on Using the AWS CLI for Amazon S3 you can perform recursive uploads and downloads using a single folder level Amazon S3 command and also per form parallel transfers You can also use the AWS CLI for command line access to the low level Amazon S3 API Using the AWS Management Console you can easily create and manage Amazon S3 buckets ArchivedAmazon Web Services – AWS Storage Services Overview Page 7 upload and download objects and browse the contents of your S3 buckets using a simple web based user interface Additionally you can use the Amazon S3 notification feature to receive notifications when certain events happen in your bucket Currently Amazon S3 can publish events when an object is uploaded or when an object is deleted Notifications can be issued to Amazon Simple Notification Service (SNS) topics 9 Amazon Simple Queue Service (SQS) queues 10 and AWS Lambda functions 11 Cost Model With Amazon S3 you pay only for the storage you actually use There is no minimum fee and no setup cost Amazon S3 Standard has three pricing components: storage (per GB per month) data tran sfer in or out (per GB per month) and requests (per thousand requests per month) For new customers AWS provides the AWS Free Tier which includes up to 5 GB of Amazon S3 storage 20000 get requests 2000 put requests and 15 GB of data transfer out each month for one year for free 12 You can find pricing information at the Amazon S3 pricing page 13 There are Data Transfer IN and OUT fees if you enable Amazon S3 Transfer Acceleration on a bucket and the transfer performance is faster than regular Amazon S3 transfer If we determine that Transfer Acceleration is not likely to be faster than a regular Amazon S3 transfer of the same object to the same destination we will not charge for that use of Transfer Acceleration for that transfer and may bypass the Transfer Acceleration system for that upload Amazon Glacier Amazon Glacier is an extremely low cost storage service that provides highly secure durable and flexible storage for data archiving and online backup 14 With Amazon Glacier you 
can reliably store your data for as little as $0007 per gigabyte per month Amazon Glacie r enables you to offload the administrative burdens of operating and scaling storage to AWS so that you don’t have to worry about capacity planning hardware provisioning data replication hardware failure detection and repair or time consuming hardware migrations You store data in Amazon Glacier as archives An archive can represent a single file or you can combine several files to be uploaded as a single archive ArchivedAmazon Web Services – AWS Storage Services Overview Page 8 Retrieving archives from Amazon Glacier requires the initiation of a job You organize yo ur archives in vaults Amazon Glacier is designed for use with other Amazon web services You can seamlessly move data between Amazon Glacier and Amazon S3 using S3 data lifecycle policies Usage Patterns Organizations are using Amazon Glacier to support a number of use cases These use cases include archiving offsite enterprise information media assets and research and scientific data and also performing digital preservation and magnetic tape replacement Amazon Glacier doesn’t suit all storage situatio ns The following table presents a few storage needs for which you should consider other AWS storage options Storage Need Solution AWS Services Rapidly changing data Data that must be updated very frequently might be better served by a storage solution w ith lower read/write latencies such as Amazon EBS Amazon RDS Amazon EFS Amazon DynamoDB or relational databases running on Amazon EC2 Amazon EBS Amazon RDS Amazon EFS Amazon DynamoDB Amazon EC2 Immediate access Data stored in Amazon Glacier is not available immediately Retrieval jobs typically require 3 –5 hours to complete so if you need immediate access to your object data Amazon S3 is a better choice Amazon S3 Performance Ama zon Glacier is a low cost storage service designed to store data that is infrequently accessed and long lived Amazon Glacier retrieval jobs typically complete in 3 to 5 hours You can improve the upload experience for larger archives by using multipart upload for archives up to about 40 TB (the single archive limit) 15 You can upload separate parts of a large archive independently in any order and in parallel t o improve the upload experience for larger archives You can even perform range retrievals on archives stored in Amazon Glacie r by specifying a range or portion ArchivedAmazon Web Services – AWS Storage Services Overview Page 9 of the archive 16 Specifying a range of bytes for a retrieval can help control bandwidth costs manage your data downloads and retrieve a targeted part of a large archive Durability and Availability Amazon Glacier is designed to provide average annual durability of 9999 9999999 percent (11 nines) for an archive The service redundantly stores data in multiple facilities and on multiple devices within each facility To increase durability Amazon Glacier synchronously stores your data across multiple facilities before retu rning SUCCESS on uploading an archive Unlike traditional systems which can require laborious data verification and manual repair Amazon Glacier performs regular systematic data integrity checks and is built to be automatically self healing Scalability and Elasticity Amazon Glacier scales to meet growing and often unpredictable storage requirements A single archive is limited to 40 TB in size but there is no limit to the total amount of data you can store in the service Whether you’re storing petabyt es or gigabytes Amazon Glacier 
automatically scales your storage up or down as needed Security By default only you can access your Amazon Glacier data If other people need to access your data you can set up data access control in Amazon Glacier by usi ng the AWS Identity and Access Management (IAM) service 17 To do so simply create an IAM policy that specifies which account users have rights to operations on a given vault Amazon Glacier uses server side encr yption to encrypt all data at rest Amazon Glacier handles key management and key protection for you by using one of the strongest block ciphers available 256 bit Advanced Encryption Standard (AES 256) Customers who want to manage their own keys can enc rypt data prior to uploading it ArchivedAmazon Web Services – AWS Storage Services Overview Page 10 Amazon Glacier allows you to lock vaults where long term records retention is mandated by regulations or compliance rules You can set compliance controls on individual Amazon Glacier vaults and enforce these by using locka ble policies For example you might specify controls such as “undeletable records” or “time based data retention” in a Vault Lock policy and then lock the policy from future edits After it’s locked the policy becomes immutable and Amazon Glacier enforces the prescribed controls to help achieve your compliance objectives To help monitor data access Amazon Glacier is integrated with AWS CloudTrail allowing any API calls made to Amazon Glac ier in your AWS account to be captured and stored in log files that are delivered to an Amazon S3 bucket that you specify 18 Interfaces There are two ways to use Amazon Glacier each with its own interfaces The Amazon Glacier API provides both management and data operations First Amazon Glacier provides a native standards based REST web services interface This interface can be accessed using the Java SDK or the NET SDK You can use the AWS Management Console or Amazon Glacier API actions to create vau lts to organize the archives in Amazon Glacier You can then use the Amazon Glacier API actions to upload and retrieve archives to monitor the status of your jobs and also to configure your vault to send you a notification through Amazon SNS when a job is complete Second Amazon Glacier can be used as a storage class in Amazon S3 by using object lifecycle management that provides automatic policy driven archiving from Amazon S3 to Amazon Glacier You simply se t one or more lifecycle rules for an Amazon S3 bucket defining what objects should be transitioned to Amazon Glacier and when You can specify an absolute or relative time period (including 0 days) after which the specified Amazon S3 objects should be transitioned to Amazon Glacier The Amazon S3 API includes a RESTORE operation The retrieval process from Amazon Glacier using RESTORE takes three to five hours the same as other Amazon Glacier retrievals Retrieval puts a copy of the retrieved object in Am azon S3 Reduced Redundancy Storage (RRS) for a specified retention period The original archived object ArchivedAmazon Web Services – AWS Storage Services Overview Page 11 remains stored in Amazon Glacier For more information on how to use Amazon Glacier from Amazon S3 see the Object Lifecycle Management section of the Amazon S3 Developer Guide 19 Note that when using Amazon Glacier as a storage class in Amazon S3 you use the Amazon S3 API and when using “native” Amazon Glacier you use the Amazon Glacier API For example objects archived to Amazon Glacier using Amazon S3 lifecycle policies can only be listed and retrieved by 
using the Amazon S3 API or the Amazon S3 console You can ’t see them as archives in an Amazon Glacier vault Cost Model With Amazon Glacier you pay only for what you use and there is no minimum fee In normal use Amazon Glacier has three pricing components: storage (per GB per month) data transfer out (per GB per month) and requests (per thousand UPLOAD and R ETRIEVAL requests per month) Note that Amazon Glacier is designed with the expectation that retrievals are infrequent and unusual and data will be stored for extended periods of time You can retrieve up to 5 percent of your average monthly storage (pror ated daily) for free each month If you retrieve more than this amount of data in a month you are charged an additional (per GB) retrieval fee A prorated charge (per GB) also applies for items deleted prior to 90 days’ passage You can find pricing infor mation at the Amazon Glacier pricing page 20 Amazon EFS Amazon Elastic File System (Amazon EFS) delivers a simple scalable elastic highly available and highly durable network file system as a service to EC2 instances 21 It supports Network File System versions 4 (NFSv4) and 41 (NFSv41) which makes it easy to migrate enterprise applications to AWS or build new ones We recommend clients run NFSv41 to take advantage of the many performance benefits found in the latest version including scalability and parallelism You can create and configure file systems quickly and easily through a simple web services interface You don’t need to provision storag e in advance and there is no minimum fee or setup cost —you simply pay for what you use Amazon EFS is designed to provide a highly scalable network file system that can grow to petabytes which allows massively parallel access from EC2 instances to ArchivedAmazon Web Services – AWS Storage Services Overview Page 12 your da ta within a Region It is also highly available and highly durable because it stores data and metadata across multiple Availability Zones in a Region To understand Amazon EFS it is best to examine the different components that allow EC2 instances access to EFS file systems You can create one or more EFS file systems within an AWS Region Each file system is accessed by EC2 instances via mount targets which are created pe r Availability Zone You create one mount target per Availability Zone in the VPC you create using Amazon Virtual Private Cloud Traffic flow between Amazon EFS and EC2 instances is controlled using security groups associated with the EC2 instance and the EFS mount targets Access to EFS file system objects (files and directories) is controlled using standard Unix style read/write/execute permissions based on user and group IDs You can find more information about how EFS works in the Amazon EFS User Guide 22 Usage Patterns Amazon EFS is designed to meet the needs of multi threaded applications and applications that concurrently access data from multiple EC2 instances and that require substantial levels of aggregate throughput and input/output operations per second (IOPS) Its distributed design enables high levels of availability durability and scalability which results in a small latency overhead for each file operation Because o f this per operation overhead overall throughput generally increases as the average input/output (I/O) size increases since the overhead is amortized over a larger amount of data This makes Amazon EFS ideal for growing datasets consisting of larger files that need both high performance and multi client access Amazon EFS supports highly 
parallelized workloads and is designed to meet the performance needs of big data and analytics, media processing, content management, web serving, and home directories.

Amazon EFS doesn't suit all storage situations. The following list presents some storage needs for which you should consider other AWS storage options.

Archival data – Data that requires encrypted archival storage with infrequent read access and a long recovery time objective (RTO) can be stored in Amazon Glacier more cost-effectively. (Amazon Glacier)

Relational database storage – In most cases, relational databases require storage that is mounted, accessed, and locked by a single node (EC2 instance, etc.). When running relational databases on AWS, look at leveraging Amazon RDS or Amazon EC2 with Amazon EBS PIOPS volumes. (Amazon RDS, Amazon EC2, Amazon EBS)

Temporary storage – Consider using local instance store volumes for needs such as scratch disks, buffers, queues, and caches. (Amazon EC2 Local Instance Store)

Performance

Amazon EFS file systems are distributed across an unconstrained number of storage servers, enabling file systems to grow elastically to petabyte scale and allowing massively parallel access from EC2 instances within a Region. This distributed data storage design means that multi-threaded applications and applications that concurrently access data from multiple EC2 instances can drive substantial levels of aggregate throughput and IOPS.

There are two different performance modes available for Amazon EFS: General Purpose and Max I/O. General Purpose performance mode is the default mode and is appropriate for most file systems. However, if your overall Amazon EFS workload will exceed 7,000 file operations per second per file system, we recommend the file system use Max I/O performance mode. Max I/O performance mode is optimized for applications where tens, hundreds, or thousands of EC2 instances are accessing the file system. With this mode, file systems scale to higher levels of aggregate throughput and operations per second, with a tradeoff of slightly higher latencies for file operations.

Due to the spiky nature of file-based workloads, Amazon EFS is optimized to burst at high throughput levels for short periods of time, while delivering low levels of throughput the rest of the time. A credit system determines when an Amazon EFS file system can burst. Over time, each file system earns burst credits at a baseline rate determined by the size of the file system, and uses these credits whenever it reads or writes data. A file system can drive throughput continuously at its baseline rate. It accumulates credits during periods of inactivity or when throughput is below its baseline rate. These accumulated burst credits allow a file system to drive throughput above its baseline rate. The file system can continue to drive throughput above its baseline rate as long as it has a positive burst credit balance. You can see the burst credit balance for a file system by viewing the BurstCreditBalance metric in Amazon CloudWatch.23 (A short example follows below.) Newly created file systems start with a credit balance of 2.1 TiB, with a baseline rate of 50 MiB/s per TiB of storage and a burst rate of 100 MiB/s.
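As a concrete illustration, the following sketch uses the AWS SDK for Python (Boto3) to read the BurstCreditBalance metric from the AWS/EFS CloudWatch namespace for one file system. The Region, file system ID, and the one-hour window with five-minute periods are illustrative assumptions, not values from this paper.

```python
# Hypothetical example: read the average BurstCreditBalance for an EFS file
# system over the last hour using CloudWatch (AWS/EFS namespace).
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # Region is an assumption
file_system_id = "fs-12345678"  # placeholder file system ID

now = datetime.datetime.utcnow()
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EFS",
    MetricName="BurstCreditBalance",
    Dimensions=[{"Name": "FileSystemId", "Value": file_system_id}],
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    Period=300,              # one data point every 5 minutes
    Statistics=["Average"],
)

# Print the credit balance (reported in bytes) for each data point.
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```

Watching this balance over time shows whether a workload is consuming credits faster than the file system's baseline rate replenishes them.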
The following list describes some examples of bursting behaviors for file systems of different sizes, giving the baseline aggregate throughput, burst aggregate throughput, maximum burst duration, and percentage of time the file system can burst.

• 10 GiB file system – 0.5 MiB/s baseline, 100 MiB/s burst, 6.0 hours maximum burst duration, can burst 0.5% of the time
• 256 GiB – 12.5 MiB/s baseline, 100 MiB/s burst, 6.9 hours maximum burst duration, 12.5% of the time
• 512 GiB – 25.0 MiB/s baseline, 100 MiB/s burst, 8.0 hours maximum burst duration, 25.0% of the time
• 1,024 GiB – 50.0 MiB/s baseline, 100 MiB/s burst, 12.0 hours maximum burst duration, 50.0% of the time
• 1,536 GiB – 75.0 MiB/s baseline, 150 MiB/s burst, 12.0 hours maximum burst duration, 50.0% of the time
• 2,048 GiB – 100.0 MiB/s baseline, 200 MiB/s burst, 12.0 hours maximum burst duration, 50.0% of the time
• 3,072 GiB – 150.0 MiB/s baseline, 300 MiB/s burst, 12.0 hours maximum burst duration, 50.0% of the time
• 4,096 GiB – 200.0 MiB/s baseline, 400 MiB/s burst, 12.0 hours maximum burst duration, 50.0% of the time

Here are a few recommendations to get the most performance out of your Amazon EFS file system.

• Because of the distributed architecture of Amazon EFS, larger I/O workloads generally experience higher throughput.
• EFS file systems can be mounted by thousands of EC2 instances concurrently. If your application is parallelizable across multiple instances, you can drive higher throughput levels on your file system in aggregate across instances.
• If your application can handle asynchronous writes to your file system, and you're able to trade off consistency for speed, enabling asynchronous writes may improve performance.
• We recommend Linux kernel version 4 or later and NFSv4.1 for all clients accessing EFS file systems. When mounting EFS file systems, use the mount options recommended in the Mounting File Systems and Additional Mounting Considerations sections of the Amazon EFS User Guide.24 25

Durability and Availability

Amazon EFS is designed to be highly durable and highly available. Each Amazon EFS file system object (such as a directory, file, or link) is redundantly stored across multiple Availability Zones within a Region. Amazon EFS is designed to be as highly durable and available as Amazon S3.

Scalability and Elasticity

Amazon EFS automatically scales your file system storage capacity up or down as you add or remove files, without disrupting your applications, giving you just the storage you need, when you need it, while eliminating time-consuming administration tasks associated with traditional storage management (such as planning, buying, provisioning, and monitoring). Your EFS file system can grow from an empty file system to multiple petabytes automatically, and there is no provisioning, allocating, or administration.

Security

There are three levels of access control to consider when planning your EFS file system security: IAM permissions for API calls; security groups for EC2 instances and mount targets; and Network File System-level users, groups, and permissions.

IAM enables access control for administering EFS file systems, allowing you to specify an IAM identity (either an IAM user or IAM role) so you can create, delete, and describe EFS file system resources. The primary resource in Amazon EFS is a file system. All other EFS resources, such as mount targets and tags, are referred to as subresources. Identity-based policies, like IAM policies, are used to assign permissions to IAM identities to manage the EFS resources and subresources.

Security groups play a critical role in establishing network connectivity between EC2 instances and EFS file systems. You associate one security group with an EC2 instance and another security group with an EFS mount target associated with the file system. These security groups act as firewalls and enforce rules that define the traffic flow between EC2 instances and EFS file systems.

EFS file system objects work in a Unix-style mode, which defines the permissions needed to perform actions on objects. Users and groups are mapped to numeric
numeric IDs to check permissions when a user attempts to access a file system object For more information about Amazon EFS security see the Amazon EFS User Guide 26 Interfaces Amazon offers a network protocol based HTTP (RFC 2616) API for managing Amazon EFS as well as support ing for EFS operations within the AWS SDKs and the AWS CLI The API actions and EFS operations are used to create delete and describe file systems; crea te delete and describe mount targets; create delete and describe tags; and describe and modify mount target security groups If you prefer to work with a graphical user interface the AWS Management Console gives you all the capabilities of the API in a browser interface EFS file systems use Network File System version 4 (NFSv4) and version 41 (NFSv41) for data access We recommend using NFSv41 to take advantage of the performance benefits in the latest version including scalability and parallelis m Cost Model Amazon EFS provides the capacity you need when you need it without having to provision storage in advance It is also designed to be highly available and highly durable as each file system object ( such as a directory file or link) is redu ndantly stored across multiple Availability Zones This highly durable highly available architecture is built into the pricing model and you only pay for the amount of storage you put into your file system As files are added your EFS file system dynami cally grows and you only pay for the amount of storage you use As files are removed your EFS file system dynamically shrinks and you stop paying for the data you deleted There are no charges for bandwidth or requests and there are no minimum commitme nts or up front fees You can find pricing information for Amazon EFS at the Amazon EF S pricing page 27 ArchivedAmazon Web Services – AWS Storage Services Overview Page 17 Amazon EBS Amazon Elastic Block Store (Amazon EBS) volumes provide durable block level storage for use with EC2 instances 28 Amazon EBS volumes are network attached storage that persists independently from the running life of a single EC2 instance After an EBS volume is attached to an EC2 instance you can use t he EBS volume like a physical hard drive typically by formatting it with the file system of your choice and using the file I/O interface provided by the instance operating system Most Amazon Machine Images (AMIs) are backed by Amazon EBS and use an EBS volume to boot EC2 instance s You can also attach multiple EBS volumes to a single EC2 instance Note however that any single EBS volume can be attached to only one EC2 instance at any time EBS also provides the ability to create point intime snapshots of volumes which are stored in Amazon S3 These snapshots can be used as the starting point for new EBS volumes and to protect data for long term durability To learn more about Amazon EBS durability see the EBS Durability and Availability section of this whitepaper The same snapshot can be used to instantiate as many volumes as you want These snapshots can be copied across AWS Regions making it easier to leverage multiple AWS Regions for geographical expansion data center migration and disaster recovery Sizes for EBS volumes range from 1 GiB to 16 TiB depending on the volume type and are allocated in 1 GiB increments You can find information about Amazon EBS previous generation Magne tic volumes at the Amazon EBS Previous Generation Volumes page 29 Usage Patterns Amazon EBS is meant for data that changes relatively frequently and needs to persist beyond the life of 
Amazon EBS is well suited for use as the primary storage for a database or file system, or for any application or instance (operating system) that requires direct access to raw block-level storage. Amazon EBS provides a range of options that allow you to optimize storage performance and cost for your workload. These options are divided into two major categories: solid state drive (SSD)-backed storage for transactional workloads, such as databases and boot volumes (performance depends primarily on IOPS), and hard disk drive (HDD)-backed storage for throughput-intensive workloads, such as big data, data warehouse, and log processing (performance depends primarily on MiB/s).

Amazon EBS doesn't suit all storage situations. The following list presents some storage needs for which you should consider other AWS storage options:

• Temporary storage: Consider using local instance store volumes for needs such as scratch disks, buffers, queues, and caches. (Amazon EC2 local instance store)
• Multi-instance storage: Amazon EBS volumes can only be attached to one EC2 instance at a time. If you need multiple EC2 instances accessing volume data at the same time, consider using Amazon EFS as a file system. (Amazon EFS)
• Highly durable storage: If you need very highly durable storage, use Amazon S3 or Amazon EFS. Amazon S3 Standard storage is designed for 99.999999999 percent (11 nines) annual durability per object. You can also take snapshots of your EBS volumes; such snapshots are stored in Amazon S3, giving you the durability of Amazon S3 (for more information on EBS durability, see the Durability and Availability section). EFS is designed for high durability and high availability, with data stored in multiple Availability Zones within an AWS Region. (Amazon S3, Amazon EFS)
• Static data or web content: If your data doesn't change often, Amazon S3 might represent a more cost-effective and scalable solution for storing this fixed information. Also, web content served out of Amazon EBS requires a web server running on Amazon EC2; in contrast, you can deliver web content directly out of Amazon S3, or from multiple EC2 instances using Amazon EFS. (Amazon S3, Amazon EFS)

Performance

As described previously, Amazon EBS provides a range of volume types that are divided into two major categories: SSD-backed storage volumes and HDD-backed storage volumes. SSD-backed storage volumes offer great price/performance characteristics for random, small-block workloads, such as transactional applications, whereas HDD-backed storage volumes offer the best price/performance characteristics for large-block, sequential workloads. You can attach and stripe data across multiple volumes of any type to increase the I/O performance available to your Amazon EC2 applications. The storage characteristics of the current-generation volume types are:

• SSD-backed Provisioned IOPS (io1): I/O-intensive NoSQL and relational databases. Volume size: 4 GiB to 16 TiB. Max IOPS per volume: 20,000. Max throughput per volume: 320 MiB/s. Dominant performance attribute: IOPS.
• SSD-backed General Purpose (gp2, the default volume type): boot volumes, low-latency interactive apps, dev and test. Volume size: 1 GiB to 16 TiB. Max IOPS per volume: 10,000. Max throughput per volume: 160 MiB/s. Dominant performance attribute: IOPS.
• HDD-backed Throughput Optimized (st1): big data, data warehouse, log processing. Volume size: 500 GiB to 16 TiB. Max IOPS per volume: 500. Max throughput per volume: 500 MiB/s. Dominant performance attribute: MiB/s.
• HDD-backed Cold (sc1): colder data requiring fewer scans per day. Volume size: 500 GiB to 16 TiB. Max IOPS per volume: 250. Max throughput per volume: 250 MiB/s. Dominant performance attribute: MiB/s.

All four volume types support up to 65,000 IOPS and 1,250 MiB/s per instance. IOPS figures for io1 and gp2 are based on 16 KiB I/O; figures for st1 and sc1 are based on 1 MiB I/O.
General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit-millisecond latencies, the ability to burst to 3,000 IOPS for extended periods of time, and a baseline performance of 3 IOPS/GiB, up to a maximum of 10,000 IOPS (at 3,334 GiB). The gp2 volumes can range in size from 1 GiB to 16 TiB. These volumes have a throughput limit of 128 MiB/second for volumes less than or equal to 170 GiB; for volumes over 170 GiB, this limit increases at the rate of 768 KiB/second per GiB to a maximum of 160 MiB/second (at 214 GiB and larger). You can see the percentage of I/O credits remaining in the burst bucket for gp2 volumes by viewing the Burst Balance metric in Amazon CloudWatch. [30]

Provisioned IOPS SSD (io1) volumes are designed to deliver predictable, high performance for I/O-intensive workloads with small I/O sizes where the dominant performance attribute is IOPS, such as database workloads that are sensitive to storage performance and consistency in random-access I/O throughput. You specify an IOPS rate when creating a volume, and Amazon EBS then delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year, when attached to an EBS-optimized instance. The io1 volumes can range in size from 4 GiB to 16 TiB, and you can provision up to 20,000 IOPS per volume. The ratio of IOPS provisioned to the volume size requested can be at most 50:1; for example, a volume with 5,000 IOPS must be at least 100 GiB in size.

Throughput Optimized HDD (st1) volumes are ideal for frequently accessed, throughput-intensive workloads with large datasets and large I/O sizes where the dominant performance attribute is throughput (MiB/s), such as streaming workloads, big data, data warehouse, log processing, and ETL workloads. These volumes deliver performance in terms of throughput, measured in MiB/s, and include the ability to burst up to 250 MiB/s per TiB, with a baseline throughput of 40 MiB/s per TiB and a maximum throughput of 500 MiB/s per volume. The st1 volumes are designed to deliver the expected throughput performance 99 percent of the time and have enough I/O credits to support a full-volume scan at the burst rate. The st1 volumes can't be used as boot volumes. You can see the throughput credits remaining in the burst bucket for st1 volumes by viewing the Burst Balance metric in Amazon CloudWatch. [31]

Cold HDD (sc1) volumes provide the lowest cost per GiB of all EBS volume types. They are ideal for infrequently accessed workloads with large, cold datasets and large I/O sizes where the dominant performance attribute is throughput (MiB/s). Similarly to st1, sc1 volumes provide a burst model and can burst up to 80 MiB/s per TiB, with a baseline throughput of 12 MiB/s per TiB and a maximum throughput of 250 MiB/s per volume. The sc1 volumes are designed to deliver the expected throughput performance 99 percent of the time and have enough I/O credits to support a full-volume scan at the burst rate. The sc1 volumes can't be used as boot volumes. You can see the throughput credits remaining in the burst bucket for sc1 volumes by viewing the Burst Balance metric in CloudWatch. [32]
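To make the volume-type choices concrete, here is a hedged boto3 (Python) sketch that creates an encrypted Provisioned IOPS (io1) volume and attaches it to an instance. The Availability Zone, instance ID, and device name are placeholder assumptions; the size and IOPS simply respect the 50:1 ratio described above.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 100 GiB io1 volume with 5,000 provisioned IOPS (50:1 IOPS-to-size ratio),
# encrypted with the default AWS-managed key.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",      # placeholder AZ
    VolumeType="io1",
    Size=100,                           # GiB
    Iops=5000,
    Encrypted=True,
)
volume_id = volume["VolumeId"]

# Wait until the volume is available, then attach it to an instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    Device="/dev/sdf",
)
```

After attaching, the operating system on the instance still has to create a file system on the device before using it, just as with a newly installed physical disk.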
Because all EBS volumes are network-attached devices, other network I/O performed by an EC2 instance, as well as the total load on the shared network, can affect the performance of individual EBS volumes. To enable your EC2 instances to maximize the performance of EBS volumes, you can launch selected EC2 instance types as EBS-optimized instances. Most of the latest-generation EC2 instances (m4, c4, x1, and p2) are EBS-optimized by default. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with speeds between 500 Mbps and 10,000 Mbps depending on the instance type. When attached to EBS-optimized instances, Provisioned IOPS volumes are designed to deliver within 10 percent of the provisioned IOPS performance 99.9 percent of the time within a given year.

Newly created EBS volumes receive their maximum performance the moment they are available and don't require initialization (formerly known as prewarming). However, you must initialize the storage blocks on volumes that were restored from snapshots before you can access those blocks. [33]

Using Amazon EC2 with Amazon EBS, you can take advantage of many of the same disk performance optimization techniques that you use with on-premises servers and storage. For example, by attaching multiple EBS volumes to a single EC2 instance, you can partition the total application I/O load by allocating one volume for database log data, one or more volumes for database file storage, and other volumes for file system data. Each separate EBS volume can be configured as EBS General Purpose (SSD), Provisioned IOPS (SSD), Throughput Optimized (HDD), or Cold (HDD) as needed. Some of the best price/performance-balanced workloads take advantage of different volume types on a single EC2 instance: for example, Cassandra using General Purpose (SSD) volumes for data but Throughput Optimized (HDD) volumes for logs, or Hadoop using General Purpose (SSD) volumes for both data and logs. Alternatively, you can stripe your data across multiple similarly provisioned EBS volumes using RAID 0 (disk striping) or logical volume manager software, thus aggregating available IOPS, total volume throughput, and total volume size.

Durability and Availability

Amazon EBS volumes are designed to be highly available and reliable. EBS volume data is replicated across multiple servers in a single Availability Zone to prevent the loss of data from the failure of any single component. Taking snapshots of your EBS volumes increases the durability of the data stored on them; EBS snapshots are incremental, point-in-time backups containing only the data blocks changed since the last snapshot.

EBS volumes are designed for an annual failure rate (AFR) of between 0.1 and 0.2 percent, where failure refers to a complete or partial loss of the volume, depending on the size and performance of the volume. This means that if you have 1,000 EBS volumes over the course of a year, you can expect unrecoverable failures of 1 or 2 of your volumes. This AFR makes EBS volumes 20 times more reliable than typical commodity disk drives, which fail with an AFR of around 4 percent. Despite these very low EBS AFR numbers, we still recommend that you create snapshots of your EBS volumes to improve the durability of your data. The Amazon EBS snapshot feature makes it easy to take application-consistent backups of your data. For more information on EBS durability, see the Amazon EBS Availability and Durability section of the Amazon EBS Product Details page. [34]
To maximize both durability and availability of Amazon EBS data, you should create snapshots of your EBS volumes frequently. (For application-consistent backups, we recommend briefly pausing any write operations to the volume, or unmounting the volume, while you issue the snapshot command; you can then safely continue to use the volume while the snapshot is pending completion.) All EBS volume types offer durable snapshot capabilities and are designed for 99.999 percent availability.

If your EBS volume does fail, all snapshots of that volume remain intact, and you can recreate your volume from the last snapshot point. Because an EBS volume is created in a particular Availability Zone, the volume will be unavailable if the Availability Zone itself is unavailable. A snapshot of a volume, however, is available across all of the Availability Zones within a Region, and you can use a snapshot to create one or more new EBS volumes in any Availability Zone in the Region. EBS snapshots can also be copied from one Region to another and can easily be shared with other user accounts. Thus, EBS snapshots provide an easy-to-use disk clone or disk image mechanism for backup, sharing, and disaster recovery.

Scalability and Elasticity

Using the AWS Management Console or the Amazon EBS API, you can easily and rapidly provision and release EBS volumes to scale in and out with your total storage demands. The simplest approach is to create and attach a new EBS volume and begin using it together with your existing ones. However, if you need to expand the size of a single EBS volume, you can effectively resize a volume using a snapshot:

1. Detach the original EBS volume.
2. Create a snapshot of the original EBS volume's data in Amazon S3.
3. Create a new EBS volume from the snapshot, but specify a larger size than the original volume.
4. Attach the new, larger volume to your EC2 instance in place of the original. (In many cases, an OS-level utility must also be used to expand the file system.)
5. Delete the original EBS volume.
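The following boto3 (Python) sketch follows the snapshot-based resize procedure above. It assumes the original volume has already been detached, and the volume ID, Availability Zone, instance ID, and device name are placeholders; it is an illustration of the steps, not a production-ready script.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
old_volume_id = "vol-0123456789abcdef0"   # placeholder; assumed already detached

# Step 2: snapshot the original volume.
snapshot = ec2.create_snapshot(VolumeId=old_volume_id, Description="resize snapshot")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Step 3: create a larger volume from the snapshot in the same Availability Zone.
new_volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1a",        # placeholder; must match the instance's AZ
    Size=500,                             # GiB, larger than the original
    VolumeType="gp2",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_volume["VolumeId"]])

# Step 4: attach the new volume where the old one was; grow the file system from the OS.
ec2.attach_volume(
    VolumeId=new_volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",     # placeholder instance ID
    Device="/dev/sdf",
)

# Step 5: delete the original volume once the new one has been verified.
ec2.delete_volume(VolumeId=old_volume_id)
```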
Security

IAM enables access control for your EBS volumes, allowing you to specify who can access which EBS volumes.

EBS encryption enables data-at-rest and data-in-motion security. It offers seamless encryption of both EBS boot volumes and data volumes, as well as snapshots, eliminating the need to build and manage a secure key-management infrastructure. The encryption keys are Amazon-managed, or keys that you create and manage using the AWS Key Management Service (AWS KMS). [35] Data-in-motion security occurs on the servers that host EC2 instances, providing encryption of data as it moves between EC2 instances and EBS volumes. Access control plus encryption offers a strong defense-in-depth security strategy for your data. For more information, see Amazon EBS Encryption in the Amazon EBS User Guide. [36]

Interfaces

Amazon offers a REST management API for Amazon EBS, as well as support for Amazon EBS operations within both the AWS SDKs and the AWS CLI. The API actions and EBS operations are used to create, delete, describe, attach, and detach EBS volumes for your EC2 instances; to create, delete, and describe snapshots from Amazon EBS to Amazon S3; and to copy snapshots from one Region to another. If you prefer to work with a graphical user interface, the AWS Management Console gives you all the capabilities of the API in a browser interface. Regardless of how you create your EBS volume, note that all storage is allocated at the time of volume creation and that you are charged for this allocated storage even if you don't write data to it.

Amazon EBS doesn't provide a data API. Instead, Amazon EBS presents a block device interface to the EC2 instance; that is, to the EC2 instance, an EBS volume appears just like a local disk drive. To write data to and read data from EBS volumes, you use the native file system I/O interfaces of your chosen operating system.

Cost Model

As with other AWS services, with Amazon EBS you pay only for what you provision, in increments down to 1 GB. In contrast, hard disks come in fixed sizes, and you pay for the entire size of the disk regardless of the amount you use or allocate. Amazon EBS pricing has three components: provisioned storage, I/O requests, and snapshot storage. Amazon EBS General Purpose (SSD), Throughput Optimized (HDD), and Cold (HDD) volumes are charged per GB-month of provisioned storage. Amazon EBS Provisioned IOPS (SSD) volumes are charged per GB-month of provisioned storage and per provisioned IOPS-month. For all volume types, Amazon EBS snapshots are charged per GB-month of data stored. An Amazon EBS snapshot copy is charged for the data transferred between Regions and for the standard Amazon EBS snapshot charges in the destination Region.

It's important to remember that for EBS volumes you are charged for provisioned (allocated) storage, whether or not you actually use it. For Amazon EBS snapshots, you are charged only for storage actually used (consumed). Note that Amazon EBS snapshots are incremental, so the storage used in any snapshot is generally much less than the storage consumed by an EBS volume.

Note that there is no charge for transferring information among the various AWS storage offerings (that is, an EC2 instance transferring information with Amazon EBS, Amazon S3, Amazon RDS, and so on), as long as the storage offerings are within the same AWS Region. You can find pricing information for Amazon EBS on the Amazon EBS pricing page. [37]

Amazon EC2 Instance Storage

Amazon EC2 instance store volumes (also called ephemeral drives) provide temporary block-level storage for many EC2 instance types. [38] This storage consists of a preconfigured and pre-attached block of disk storage on the same physical server that hosts the EC2 instance for which the block provides storage. The amount of disk storage provided varies by EC2 instance type. In the EC2 instance families that provide instance storage, larger instances tend to provide both more and larger instance store volumes.

Note that some instance types, such as the micro instances (t1, t2) and the compute-optimized c4 instances, use EBS storage only, with no instance storage provided. Note also that instances using Amazon EBS for the root device (in other words, that boot from Amazon EBS) don't expose the instance store volumes by default. You can choose to expose the instance store volumes at instance launch time by specifying a block device mapping. For more information, see Block Device Mapping in the Amazon EC2 User Guide. [39]

AWS offers two EC2 instance families that are purpose-built for storage-centric workloads. Performance specifications of the storage-optimized (i2) and dense-storage (d2) instance families are outlined below.
• SSD-backed Storage Optimized (i2): NoSQL databases like Cassandra and MongoDB, scale-out transactional databases, data warehousing, Hadoop, and cluster file systems. Read performance: 365,000 random IOPS. Write performance: 315,000 random IOPS. Instance store max capacity: 6.4 TiB SSD. Optimized for very high random IOPS.
• HDD-backed Dense Storage (d2): Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing, distributed file systems, network file systems, and log or data processing applications. Read performance: 3.5 GiB/s (2 MiB block size). Write performance: 3.1 GiB/s (2 MiB block size). Instance store max capacity: 48 TiB HDD. Optimized for high disk throughput.

Usage Patterns

In general, EC2 local instance store volumes are ideal for temporary storage of information that is continually changing, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. EC2 instance storage is well suited for this purpose. It consists of the virtual machine's boot device (for instance store AMIs only) plus one or more additional volumes that are dedicated to the EC2 instance (for both Amazon EBS AMIs and instance store AMIs). This storage can only be used from a single EC2 instance during that instance's lifetime. Note that, unlike EBS volumes, instance store volumes cannot be detached or attached to another instance.

For high I/O and high storage needs, use EC2 instance storage targeted to these use cases. High I/O instances (the i2 family) provide instance store volumes backed by SSD and are ideally suited for many high-performance database workloads; example applications include NoSQL databases like Cassandra and MongoDB, clustered databases, and online transaction processing (OLTP) systems. High storage instances (the d2 family) support much higher storage density per EC2 instance and are ideally suited for applications that benefit from high sequential I/O performance across very large datasets; example applications include data warehouses, Hadoop/MapReduce storage nodes, and parallel file systems. Note that applications using instance storage for persistent data generally provide data durability through replication or by periodically copying data to durable storage.

EC2 instance store volumes don't suit all storage situations. The following list presents some storage needs for which you should consider other AWS storage options:

• Persistent storage: If you need persistent virtual disk storage, similar to a physical disk drive, for files or other data that must persist longer than the lifetime of a single EC2 instance, EBS volumes, Amazon EFS file systems, or Amazon S3 are more appropriate. (Amazon EC2, Amazon EBS, Amazon EFS, Amazon S3)
• Relational database storage: In most cases, relational databases require storage that persists beyond the lifetime of a single EC2 instance, making EBS volumes the natural choice. (Amazon EC2, Amazon EBS)
• Shared storage: Instance store volumes are dedicated to a single EC2 instance and can't be shared with other systems or users. If you need storage that can be detached from one instance and attached to a different instance, or if you need the ability to share data easily, Amazon EFS, Amazon S3, or Amazon EBS are better choices. (Amazon EFS, Amazon S3, Amazon EBS)
• Snapshots: If you need the convenience, long-term durability, availability, and ability to share point-in-time disk snapshots, EBS volumes are a better choice. (Amazon EBS)
Performance

The instance store volumes that are not SSD-based in most EC2 instance families have performance characteristics similar to standard EBS volumes. Because the EC2 instance virtual machine and the local instance store volumes are located on the same physical server, interaction with this storage is very fast, particularly for sequential access. To increase aggregate IOPS, or to improve sequential disk throughput, multiple instance store volumes can be grouped together using RAID 0 (disk striping) software. Because the bandwidth of the disks is not limited by the network, aggregate sequential throughput for multiple instance volumes can be higher than for the same number of EBS volumes.

Because of the way that EC2 virtualizes disks, the first write operation to any location on an instance store volume performs more slowly than subsequent writes. For most applications, amortizing this cost over the lifetime of the instance is acceptable. However, if you require high disk performance, we recommend that you prewarm your drives by writing once to every drive location before production use. The i2, r3, and hi1 instance types use direct-attached SSD backing that provides maximum performance at launch time, without prewarming.

Additionally, r3 and i2 instance store-backed volumes support the TRIM command on Linux instances. For these volumes, you can use TRIM to notify the SSD controller whenever you no longer need data that you've written. This notification lets the controller free space, which can reduce write amplification and increase performance.

The SSD instance store volumes in EC2 high I/O instances provide from tens of thousands to hundreds of thousands of low-latency, random 4 KB IOPS. Because of the I/O characteristics of SSD devices, write performance can be variable. For more information, see High I/O Instances in the Amazon EC2 User Guide. [40] The instance store volumes in EC2 high storage instances provide very high storage density and high sequential read and write performance. For more information, see High Storage Instances in the Amazon EC2 User Guide. [41]

Durability and Availability

Amazon EC2 local instance store volumes are not intended to be used as durable disk storage. Unlike Amazon EBS volume data, data on instance store volumes persists only during the life of the associated EC2 instance. This means that data on instance store volumes is persistent across orderly instance reboots, but if the EC2 instance is stopped and restarted, terminates, or fails, all data on the instance store volumes is lost. For more information on the lifecycle of an EC2 instance, see Instance Lifecycle in the Amazon EC2 User Guide. [42]

You should not use local instance store volumes for any data that must persist over time, such as permanent file or database storage, without providing data persistence by replicating data or periodically copying data to durable storage such as Amazon EBS or Amazon S3. Note that this recommendation also applies to the special-purpose SSD and high-density instance store volumes in the high I/O and high storage instance types.

Scalability and Elasticity

The number and storage capacity of Amazon EC2 local instance store volumes are fixed and defined by the instance type. Although you can't increase or decrease the number of instance store volumes on a single EC2 instance, this storage is still scalable and elastic; you can scale the total amount of instance store up or down by increasing or decreasing the number of running EC2 instances. To achieve full storage elasticity, include one of the other suitable storage options, such as Amazon S3, Amazon EFS, or Amazon EBS, in your Amazon EC2 storage strategy.
Security

IAM helps you securely control which users can perform operations, such as launch and termination of EC2 instances in your account, and instance store volumes can only be mounted and accessed by the EC2 instances they belong to. Also, when you stop or terminate an instance, the applications and data in its instance store are erased, so no other instance can have access to the instance store in the future.

Access to an EC2 instance is controlled by the guest operating system. If you are concerned about the privacy of sensitive data stored in an instance store volume, we recommend encrypting your data for extra protection. You can do so by using your own encryption tools or by using third-party encryption tools available on the AWS Marketplace. [43]

Interfaces

There is no separate management API for EC2 instance store volumes. Instead, instance store volumes are specified using the block device mapping feature of the Amazon EC2 API and the AWS Management Console. You cannot create or destroy instance store volumes, but you can control whether or not they are exposed to the EC2 instance and what device name each volume is mapped to.

There is also no separate data API for instance store volumes. Just like EBS volumes, instance store volumes present a block device interface to the EC2 instance; to the EC2 instance, an instance store volume appears just like a local disk drive. To write data to and read data from instance store volumes, you use the native file system I/O interfaces of your chosen operating system. Note that in some cases a local instance store volume device is attached to an EC2 instance upon launch but must be formatted with an appropriate file system and mounted before use. Also, keep careful track of your block device mappings: there is no simple way for an application running on an EC2 instance to determine which block device is an instance store (ephemeral) volume and which is an EBS (persistent) volume.
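As an illustration of the block device mapping mechanism described above, the following boto3 (Python) sketch launches an instance and explicitly exposes two instance store volumes. The AMI ID is a placeholder assumption, and the instance type must be one that actually provides instance store volumes.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an instance and expose two instance store (ephemeral) volumes through
# the block device mapping. The AMI ID below is a placeholder.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # placeholder AMI
    InstanceType="i2.2xlarge",                 # storage-optimized, provides local SSDs
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
        {"DeviceName": "/dev/sdc", "VirtualName": "ephemeral1"},
    ],
)
print(response["Instances"][0]["InstanceId"])
```

On the instance, the exposed ephemeral devices still need to be formatted and mounted by the operating system before use.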
Cost Model

The cost of an EC2 instance includes any local instance store volumes, if the instance type provides them. Although there is no additional charge for data storage on local instance store volumes, note that data transferred to and from Amazon EC2 instance store volumes from other Availability Zones, or from outside of an Amazon EC2 Region, can incur data transfer charges; additional charges apply for use of any persistent storage, such as Amazon S3, Amazon Glacier, Amazon EBS volumes, and Amazon EBS snapshots. You can find pricing information for Amazon EC2, Amazon EBS, and data transfer on the Amazon EC2 Pricing page. [44]

AWS Storage Gateway

AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless and secure storage integration between an organization's on-premises IT environment and the AWS storage infrastructure. [45] The service enables you to securely store data in the AWS Cloud for scalable and cost-effective storage. AWS Storage Gateway supports industry-standard storage protocols that work with your existing applications. It provides low-latency performance by maintaining frequently accessed data on-premises while securely storing all of your data encrypted in Amazon S3 or Amazon Glacier. For disaster recovery scenarios, AWS Storage Gateway, together with Amazon EC2, can serve as a cloud-hosted solution that mirrors your entire production environment.

You can download the AWS Storage Gateway software appliance as a virtual machine (VM) image that you install on a host in your data center, or run it as an EC2 instance. Once you've installed your gateway and associated it with your AWS account through the AWS activation process, you can use the AWS Management Console to create gateway-cached volumes, gateway-stored volumes, or a gateway virtual tape library (VTL), each of which can be mounted as an iSCSI device by your on-premises applications.

With gateway-cached volumes, you can use Amazon S3 to hold your primary data, while retaining some portion of it locally in a cache for frequently accessed data. Gateway-cached volumes minimize the need to scale your on-premises storage infrastructure while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and mount them as iSCSI devices from your on-premises application servers. Each gateway configured for gateway-cached volumes can support up to 20 volumes and total volume storage of 150 TiB. Data written to these volumes is stored in Amazon S3, with only a cache of recently written and recently read data stored locally on your on-premises storage hardware.

Gateway-stored volumes store your primary data locally, while asynchronously backing up that data to AWS. These volumes provide your on-premises applications with low-latency access to their entire datasets, while providing durable, off-site backups. You can create storage volumes up to 1 TiB in size and mount them as iSCSI devices from your on-premises application servers. Each gateway configured for gateway-stored volumes can support up to 12 volumes and total volume storage of 12 TiB. Data written to your gateway-stored volumes is stored on your on-premises storage hardware and asynchronously backed up to Amazon S3 in the form of Amazon EBS snapshots.

A gateway VTL allows you to perform offline data archiving by presenting your existing backup application with an iSCSI-based virtual tape library consisting of a virtual media changer and virtual tape drives. You can create virtual tapes in your VTL by using the AWS Management Console, and you can size each virtual tape from 100 GiB to 2.5 TiB. A VTL can hold up to 1,500 virtual tapes, with a maximum aggregate capacity of 150 TiB. Once the virtual tapes are created, your backup application can discover them by using its standard media inventory procedure. Once created, tapes are available for immediate access and are stored in Amazon S3. Virtual tapes that you need to access frequently should be stored in a VTL. Data that you don't need to retrieve frequently can be archived to your virtual tape shelf (VTS), which is stored in Amazon Glacier, further reducing your storage costs.

Usage Patterns

Organizations are using AWS Storage Gateway to support a number of use cases. These include corporate file sharing, enabling existing on-premises backup applications to store primary backups on Amazon S3, disaster recovery, and mirroring data to cloud-based compute resources and later archiving it to Amazon Glacier.
Performance

Because the AWS Storage Gateway VM sits between your application, Amazon S3, and the underlying on-premises storage, the performance you experience depends upon a number of factors. These factors include the speed and configuration of your underlying local disks, the network bandwidth between your iSCSI initiator and the gateway VM, the amount of local storage allocated to the gateway VM, and the bandwidth between the gateway VM and Amazon S3. For gateway-cached volumes, to provide low-latency read access to your on-premises applications, it's important that you provide enough local cache storage to hold your recently accessed data. The AWS Storage Gateway documentation provides guidance on how to optimize your environment setup for best performance, including how to properly size your local storage. [46]

AWS Storage Gateway efficiently uses your Internet bandwidth to speed up the upload of your on-premises application data to AWS. AWS Storage Gateway only uploads data that has changed, which minimizes the amount of data sent over the Internet. To further increase throughput and reduce your network costs, you can also use AWS Direct Connect to establish a dedicated network connection between your on-premises gateway and AWS. [47]

Durability and Availability

AWS Storage Gateway durably stores your on-premises application data by uploading it to Amazon S3 or Amazon Glacier. Both of these AWS services store data in multiple facilities and on multiple devices within each facility, and are designed to provide an average annual durability of 99.999999999 percent (11 nines). They also perform regular, systematic data integrity checks and are built to be automatically self-healing.

Scalability and Elasticity

In both gateway-cached and gateway-stored volume configurations, AWS Storage Gateway stores data in Amazon S3, which has been designed to offer a very high level of scalability and elasticity automatically. Unlike a typical file system, which can encounter issues when storing a large number of files in a directory, Amazon S3 supports a virtually unlimited number of files in any bucket. Also, unlike a disk drive, which has a limit on the total amount of data that can be stored before you must partition the data across drives or servers, an Amazon S3 bucket can store a virtually unlimited number of bytes. You are able to store any number of objects, and Amazon S3 will manage scaling and distributing redundant copies of your information onto other servers in other locations in the same Region, all using Amazon's high-performance infrastructure.

In a gateway VTL configuration, AWS Storage Gateway stores data in Amazon S3 or Amazon Glacier, providing a virtual tape infrastructure that scales seamlessly with your business needs and eliminates the operational burden of provisioning, scaling, and maintaining a physical tape infrastructure.

Security

IAM helps you provide security in controlling access to AWS Storage Gateway. With IAM, you can create multiple IAM users under your AWS account, and the AWS Storage Gateway API defines the list of actions each IAM user can perform on AWS Storage Gateway. [48] AWS Storage Gateway encrypts all data in transit to and from AWS by using SSL. All volume and snapshot data stored in AWS using gateway-stored or gateway-cached volumes, and all virtual tape data stored in AWS using a gateway VTL, is encrypted at rest using AES-256, a secure symmetric-key encryption standard using 256-bit encryption keys. Storage Gateway supports authentication between your gateway and iSCSI initiators by using the Challenge-Handshake Authentication Protocol (CHAP).
Interfaces

The AWS Management Console can be used to download the AWS Storage Gateway VM on-premises or onto an EC2 instance (an AMI that contains the gateway VM image). You can then select between a gateway-cached, gateway-stored, or gateway-VTL configuration and activate your storage gateway by associating your gateway's IP address with your AWS account. All the detailed deployment steps can be found in Getting Started in the AWS Storage Gateway User Guide. [49]

The integrated AWS CLI also provides a set of high-level, Linux-like commands for common operations of the AWS Storage Gateway service. You can also use the AWS SDKs to develop applications that interact with AWS Storage Gateway. The AWS SDKs for Java, .NET, JavaScript, Node.js, Ruby, PHP, and Go wrap the underlying AWS Storage Gateway API to simplify your programming tasks.
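For example, the following boto3 (Python) sketch uses the management API to list the gateways activated in a Region and the volumes behind each one. It assumes at least one gateway has already been activated and is intended only to illustrate the programmatic interface.

```python
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

# Enumerate activated gateways and the iSCSI volumes each one exposes.
for gateway in sgw.list_gateways()["Gateways"]:
    gateway_arn = gateway["GatewayARN"]
    print(gateway_arn, gateway.get("GatewayType"))
    for volume in sgw.list_volumes(GatewayARN=gateway_arn).get("VolumeInfos", []):
        print("  ", volume["VolumeARN"], volume.get("VolumeType"))
```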
Cost Model

With AWS Storage Gateway, you pay only for what you use. AWS Storage Gateway has the following pricing components: gateway usage (per gateway per month), snapshot storage usage (per GB per month), volume storage usage (per GB per month), virtual tape shelf storage (per GB per month), virtual tape library storage (per GB per month), retrieval from the virtual tape shelf (per GB), and data transfer out (per GB per month). You can find pricing information on the AWS Storage Gateway pricing page. [50]

AWS Snowball

AWS Snowball accelerates moving large amounts of data into and out of AWS using secure Snowball appliances. [51] The Snowball appliance is purpose-built for efficient data storage and transfer. All AWS Regions have 80 TB Snowballs, while US Regions have both 50 TB and 80 TB models. The Snowball appliance is rugged enough to withstand an 8.5 G jolt. At less than 50 pounds, the appliance is light enough for one person to carry. It is entirely self-contained, with a power cord, one RJ45 1 GigE and two SFP+ 10 GigE network connections on the back, and an E Ink display and control panel on the front. Each Snowball appliance is water-resistant and dustproof and serves as its own rugged shipping container.

AWS transfers your data directly onto and off of Snowball storage devices using Amazon's high-speed internal network, bypassing the Internet. For datasets of significant size, Snowball is often faster than Internet transfer and more cost-effective than upgrading your connectivity. AWS Snowball supports importing data into and exporting data from Amazon S3 buckets. From there, the data can be copied or moved to other AWS services, such as Amazon EBS and Amazon Glacier, as desired.

Usage Patterns

Snowball is ideal for transferring anywhere from terabytes to many petabytes of data in and out of the AWS Cloud securely. This is especially beneficial in cases where you don't want to make expensive upgrades to your network infrastructure, or in areas where high-speed Internet connections are not available or are cost-prohibitive. In general, if loading your data over the Internet would take a week or more, you should consider using Snowball.

Common use cases include cloud migration, disaster recovery, data center decommission, and content distribution. When you decommission a data center, many steps are involved to make sure valuable data is not lost, and Snowball can help ensure that data is securely and cost-effectively transferred to AWS. In a content distribution scenario, you might use Snowball appliances if you regularly receive, or need to share, large amounts of data with clients, customers, or business associates. Snowball appliances can be sent directly from AWS to client or customer locations. Snowball might not be the ideal solution if your data can be transferred over the Internet in less than one week.

Performance

The Snowball appliance is purpose-built for efficient data storage and transfer, including a high-speed, 10 Gbps network connection designed to minimize data transfer times, allowing you to transfer up to 80 TB of data from your data source to the appliance in 2.5 days, plus shipping time. In this case, the end-to-end time to transfer the data into AWS is approximately a week, including default shipping and handling time to AWS data centers. Copying 160 TB of data can be completed in the same amount of time by using two 80 TB Snowballs in parallel. You can use the Snowball client to estimate the time it takes to transfer your data (refer to the AWS Import/Export User Guide for more details). [52]

In general, you can improve your transfer speed from your data source to the Snowball appliance by reducing local network use, eliminating unnecessary hops between the Snowball appliance and the workstation, using a powerful computer as your workstation, and combining smaller objects. Parallelization can also help achieve maximum performance of your data transfer. This could involve one or more of the following parallelization types: using multiple instances of the Snowball client on a single workstation with a single Snowball appliance; using multiple instances of the Snowball client on multiple workstations with a single Snowball appliance; and/or using multiple instances of the Snowball client on multiple workstations with multiple Snowball appliances.

Durability and Availability

Once the data is imported to AWS, the durability and availability characteristics of the target storage apply. Amazon S3 is designed for 99.999999999 percent (11 nines) durability and 99.99 percent availability.

Scalability and Elasticity

Each AWS Snowball appliance is capable of storing 50 TB or 80 TB of data. If you want to transfer more data than that, you can use multiple appliances. For Amazon S3, individual files are loaded as objects and can range up to 5 TB in size, but you can load any number of objects in Amazon S3. The aggregate total amount of data that can be imported is virtually unlimited.

Security

You can integrate Snowball with IAM to control which actions a user can perform. [53] You can give the IAM users on your AWS account access to all Snowball actions or to a subset of them. Similarly, an IAM user that creates a Snowball job must have permissions to access the Amazon S3 buckets that will be used for the import operations. For Snowball, AWS KMS protects the encryption keys used to protect data on each Snowball appliance. All data loaded onto a Snowball appliance is encrypted using 256-bit encryption. Snowball is physically secured by using an industry-standard Trusted Platform Module (TPM), which uses a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software. Snowball is included in the AWS HIPAA compliance program, so you can use Snowball to transfer large amounts of Protected Health Information (PHI) data into and out of AWS. [54]

Interfaces

There are two ways to get started with Snowball. You can create an import or export job using the AWS Snowball Management Console, or you can use the Snowball Job Management API and integrate AWS Snowball as a part of your data management solution. The primary functions of the API are to create, list, and describe import and export jobs, and it uses a simple, standards-based REST web services interface. For more details about using the Snowball Job Management API, see the API Reference documentation. [55]
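As a rough illustration of the job management API, the following boto3 (Python) sketch creates an import job. The bucket ARN, IAM role ARN, address ID, and capacity preference are placeholder assumptions that would come from your own account setup; they are not values from this paper.

```python
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

# Create an import job that will load the contents of the appliance into an S3 bucket.
# All ARNs and IDs below are placeholders.
job = snowball.create_job(
    JobType="IMPORT",
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-import-bucket"}
        ]
    },
    AddressId="ADID00000000-0000-0000-0000-000000000000",
    RoleARN="arn:aws:iam::123456789012:role/example-snowball-role",
    SnowballCapacityPreference="T80",
    ShippingOption="STANDARD",
    Description="Example import job",
)
print(job["JobId"])
```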
You also have two ways to locally transfer data between a Snowball appliance and your on-premises data center. The Snowball client, available as a download from the AWS Import/Export Tools page, is a standalone terminal application that you run on your local workstation to do your data transfer. [56] You use simple copy (cp) commands to transfer data, and handling errors and logs are written to your local workstation for troubleshooting and auditing.

The second option for locally transferring data between a Snowball appliance and your on-premises data center is the Amazon S3 Adapter for Snowball, which is also available as a download from the AWS Import/Export Tools page. You can programmatically transfer data between your on-premises data center and a Snowball appliance using a subset of the Amazon S3 REST API commands. This allows you to have direct access to a Snowball appliance as if it were an Amazon S3 endpoint, for example when executing an AWS CLI S3 list command against the appliance. By default, the adapter runs on port 8080, but a different port can be specified by changing the adapter configuration file.
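The original document illustrated this with an AWS CLI listing against the adapter's endpoint; an equivalent, hedged sketch in boto3 (Python) is shown below. The adapter IP address is a placeholder for an appliance on your local network, 8080 is the adapter's default port, and the credentials provided with your Snowball job are assumed to be configured on the workstation.

```python
import boto3

# Point an S3 client at the Amazon S3 Adapter for Snowball instead of the AWS endpoint.
# The adapter accepts a subset of the S3 API; the address below is a placeholder.
s3 = boto3.client("s3", endpoint_url="http://192.0.2.10:8080")

# List the objects already copied onto the appliance for a given bucket.
response = s3.list_objects_v2(Bucket="example-import-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```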
The following example steps you through how to implement a Snowball appliance to import your data into AWS using the AWS Snowball Management Console:

1. To start, sign in to the AWS Snowball Management Console and create a job.
2. AWS then prepares a Snowball appliance for your job.
3. The Snowball appliance is shipped to you through a regional shipping carrier (UPS in all AWS Regions except India, which uses Amazon Logistics). You can find your tracking number and a link to the tracking website on the AWS Snowball Management Console.
4. A few days later, the regional shipping carrier delivers the Snowball appliance to the address you provided when you created the job.
5. Next, get ready to transfer your data by downloading your credentials, your job manifest, and the manifest's unlock code from the AWS Management Console, and by downloading the Snowball client. The Snowball client is the tool that you'll use to manage the flow of data from your on-premises data source to the Snowball appliance.
6. Install the Snowball client on the computer workstation that has your data source mounted on it.
7. Move the Snowball appliance into your data center, open it, and connect it to power and your local network.
8. Power on the Snowball appliance and start the Snowball client. You provide the IP address of the Snowball appliance, the path to your manifest, and the unlock code. The Snowball client decrypts the manifest and uses it to authenticate your access to the Snowball appliance.
9. Use the Snowball client to transfer the data that you want to import into Amazon S3 from your data source into the Snowball appliance.
10. After your data transfer is complete, power off the Snowball appliance and unplug its cables. The E Ink shipping label automatically updates to show the correct AWS facility to ship to. You can track job status by using Amazon SNS, text messages, or directly in the console.
11. The regional shipping carrier returns the Snowball appliance to AWS.
12. AWS receives the Snowball appliance and imports your data into Amazon S3. On average, it takes about a day for AWS to begin importing your data into Amazon S3, and the import can take a few days. If there are any complications or issues, we contact you through email.

Once the data transfer job has been processed and verified, AWS performs a software erasure of the Snowball appliance that follows the National Institute of Standards and Technology (NIST) 800-88 guidelines for media sanitization.

Cost Model

With Snowball, as with most other AWS services, you pay only for what you use. Snowball has three pricing components: a service fee (per job), extra-day charges as required (the first 10 days of on-site usage are free), and data transfer. For the destination storage, the standard Amazon S3 storage pricing applies. You can find pricing information on the AWS Snowball Pricing page. [57]

Amazon CloudFront

Amazon CloudFront is a content delivery web service that speeds up the distribution of your website's dynamic, static, and streaming content by making it available from a global network of edge locations. [58] When a user requests content that you're serving with Amazon CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so content is delivered with better performance than if the user had accessed the content from a data center farther away. If the content is already in the edge location with the lowest latency, Amazon CloudFront delivers it immediately. If the content is not currently in that edge location, Amazon CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content. Amazon CloudFront caches content at edge locations for a period of time that you specify.

Amazon CloudFront supports all files that can be served over HTTP. These files include dynamic web pages, such as HTML or PHP pages, and any popular static files that are a part of your web application, such as website images, audio, video, media files, or software downloads. For on-demand media files, you can also choose to stream your content using Real-Time Messaging Protocol (RTMP) delivery. Amazon CloudFront also supports delivery of live media over HTTP. Amazon CloudFront is optimized to work with other AWS services, such as Amazon S3, Amazon EC2, Elastic Load Balancing, and Amazon Route 53. Amazon CloudFront also works seamlessly with any non-AWS origin servers that store the original, definitive versions of your files.

Usage Patterns

CloudFront is ideal for distribution of frequently accessed static content that benefits from edge delivery, such as popular website images, videos, media files, or software downloads. Amazon CloudFront can also be used to deliver dynamic web applications over HTTP. These applications can include static content, dynamic content, or a whole site with a mixture of the two. Amazon CloudFront is also commonly used to stream audio and video files to web browsers and mobile devices. To get a better understanding of your end-user usage patterns, you can use Amazon CloudFront reports. [59]

If you need to remove an object from Amazon CloudFront edge server caches before it expires, you can either invalidate the object or use object versioning to serve a different version of the object that has a different name. [60, 61] Additionally, it might be better to serve infrequently accessed data directly from the origin server, avoiding the additional cost of origin fetches for data that is not likely to be reused at the edge; note, however, that origin fetches to Amazon S3 are free.
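For the invalidation path mentioned above, here is a minimal boto3 (Python) sketch; the distribution ID and object path are placeholder assumptions.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate a single object so edge locations fetch a fresh copy from the origin.
# The caller reference must be unique per invalidation request.
response = cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",            # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/logo.png"]},
        "CallerReference": str(int(time.time())),
    },
)
print(response["Invalidation"]["Id"], response["Invalidation"]["Status"])
```

Object versioning (serving a renamed object) avoids the need to issue invalidations at all and is often preferred for frequently changing content.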
Performance

Amazon CloudFront is designed for low-latency and high-bandwidth delivery of content. Amazon CloudFront speeds up the distribution of your content by routing end users to the edge location that can best serve each end user's request in a worldwide network of edge locations. Typically, requests are routed to the nearest Amazon CloudFront edge location in terms of latency. This approach dramatically reduces the number of networks that your users' requests must pass through, and improves performance. Users get both lower latency (here, latency is the time it takes to load the first byte of an object) and the higher sustained data transfer rates needed to deliver popular objects at scale.

Durability and Availability

Because a CDN is an edge cache, Amazon CloudFront does not provide durable storage. The origin server, such as Amazon S3 or a web server running on Amazon EC2, provides the durable file storage needed. Amazon CloudFront provides high availability by using a distributed global network of edge locations. Origin requests from the edge locations to AWS origin servers (for example, Amazon EC2, Amazon S3, and so on) are carried over network paths that Amazon constantly monitors and optimizes for both availability and performance. This edge network provides increased reliability and availability because there is no longer a central point of failure. Copies of your files are now held in edge locations around the world.

Scalability and Elasticity

Amazon CloudFront is designed to provide seamless scalability and elasticity. You can easily start very small and grow to massive numbers of global connections. With Amazon CloudFront, you don't need to worry about maintaining expensive web server capacity to meet the demand from potential traffic spikes for your content. The service automatically responds as demand spikes and fluctuates for your content, without any intervention from you. Amazon CloudFront also uses multiple layers of caching at each edge location and collapses simultaneous requests for the same object before contacting your origin server. These optimizations further reduce the need to scale your origin infrastructure as your website becomes more popular.

Security

Amazon CloudFront is a very secure service for distributing your data. It integrates with IAM so that you can create users for your AWS account and specify which Amazon CloudFront actions a user (or a group of users) can perform in your AWS account. You can configure Amazon CloudFront to create log files that contain detailed information about every user request that Amazon CloudFront receives; these access logs are available for both web and RTMP distributions. [62] Additionally, Amazon CloudFront integrates with Amazon CloudWatch metrics so that you can monitor your website or application. [63]

Interfaces

You can manage and configure Amazon CloudFront in several ways. The AWS Management Console provides an easy way to manage Amazon CloudFront and supports all features of the Amazon CloudFront API. For example, you can enable or disable distributions, configure CNAMEs, and enable end-user logging using the console. You can also use the Amazon CloudFront command line tools, the native REST API, or one of the supported SDKs.

There is no data API for Amazon CloudFront and no command to preload data. Instead, data is automatically pulled into Amazon CloudFront edge locations on the first access of an object from that location. Clients access content from CloudFront edge locations using either HTTP or HTTPS from locations across the Internet; these protocols are configurable as part of a given CloudFront distribution.

Cost Model

With Amazon CloudFront, there are no long-term contracts or required minimum monthly commitments; you pay only for as much content as you actually deliver through the service. Amazon CloudFront has two pricing components: regional data transfer out (per GB) and requests (per 10,000). As part of the Free Usage Tier, new AWS customers are not charged for 50 GB of data transfer out and 2,000,000 HTTP and HTTPS requests each month for one year.

Note that if you use an AWS service as the origin (for example, Amazon S3, Amazon EC2, Elastic Load Balancing, or others), data transferred from the origin to edge locations (that is, Amazon CloudFront "origin fetches") will be free of charge. For web distributions, data transfer out of Amazon CloudFront to your origin server will be billed at the "Regional Data Transfer Out of Origin" rates.

CloudFront provides three different price classes according to where your content needs to be distributed. If you don't need your content to be distributed globally, but only within certain locations such as the US and Europe, you can lower the prices you pay for delivery by choosing a price class that includes only those locations. Although there are no long-term contracts or required minimum monthly commitments, CloudFront offers an optional reserved capacity plan that gives you the option to commit to a minimum monthly usage level for 12 months or longer and, in turn, receive a significant discount. You can find pricing information on the Amazon CloudFront pricing page. [64]

Conclusion

Cloud storage is a critical component of cloud computing because it holds the information used by applications. Big data analytics, data warehouses, Internet of Things, databases, and backup and archive applications all rely on some form of data storage architecture. Cloud storage is typically more reliable, scalable, and secure than traditional on-premises storage systems.

AWS offers a complete range of cloud storage services to support both application and archival compliance requirements. This whitepaper provides guidance for understanding the different storage services and features available in the AWS Cloud. Usage patterns, performance, durability and availability, scalability and elasticity, security, interfaces, and cost models are outlined and described for these cloud storage services. While this gives you a better understanding of the features and characteristics of these cloud services, it is crucial for you to understand your workloads and requirements and then decide which storage service is best suited for your needs.

Contributors

The following individuals contributed to this document:

• Darryl S. Osborne, Solutions Architect, Amazon Web Services
• Shruti Worlikar, Solutions Architect, Amazon Web Services
• Fabio Silva, Solutions Architect, Amazon Web Services

References and Further Reading

AWS Storage Services

• Amazon S3 [65]
• Amazon Glacier [66]
• Amazon EFS [67]
• Amazon EBS [68]
• Amazon EC2 Instance Store [69]
• AWS Storage Gateway [70]
• AWS Snowball [71]
• Amazon CloudFront [72]

Other Resources

• AWS SDKs, IDE Toolkits, and Command Line Tools [73]
• Amazon Web Services Simple Monthly Calculator [74]
• Amazon Web Services Blog [75]
• Amazon Web Services Forums [76]
• AWS Free Usage Tier [77]
• AWS Case Studies [78]

Notes

1. https://aws.amazon.com/s3/
2. https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
3. http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
4. http://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-overview.html#access-control-resources-manage-permissions-basics
5. http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
6. http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
7. http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete
8. http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
9. http://aws.amazon.com/sns/
10. http://aws.amazon.com/sqs/
11. http://aws.amazon.com/lambda/
12. http://aws.amazon.com/free/
13. http://aws.amazon.com/s3/pricing/
14. http://aws.amazon.com/glacier/
15. http://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-archive-mpu.html
16. http://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive.html#downloading-an-archive-range
17. https://aws.amazon.com/iam/
18. http://aws.amazon.com/cloudtrail/
19. http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
20. http://aws.amazon.com/glacier/pricing/
21. http://aws.amazon.com/efs/
22. http://docs.aws.amazon.com/efs/latest/ug/how-it-works.html
23. http://docs.aws.amazon.com/efs/latest/ug/monitoring-cloudwatch.html#efs-metrics
24. http://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
25. http://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-general.html
26. http://docs.aws.amazon.com/efs/latest/ug/security-considerations.html
27. http://aws.amazon.com/efs/pricing/
28. http://aws.amazon.com/ebs/
29. https://aws.amazon.com/ebs/previous-generation/
30. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#monitoring_burstbucket
31. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#monitoring_burstbucket
32. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#monitoring_burstbucket
33. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html
34. https://aws.amazon.com/ebs/details/
35. https://aws.amazon.com/kms/
36. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
37. http://aws.amazon.com/ebs/pricing/
38. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
39. http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/block-device-mapping-concepts.html
40. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/i2-instances.html
41. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/high_storage_instances.html
42. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html
43. https://aws.amazon.com/marketplace
44. http://aws.amazon.com/ec2/pricing/
45. http://aws.amazon.com/storagegateway/
46. http://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html
47. http://aws.amazon.com/directconnect/
48. http://docs.aws.amazon.com/storagegateway/latest/userguide/AWSStorageGatewayAPI.html
49. http://docs.aws.amazon.com/storagegateway/latest/userguide/GettingStarted-common.html
50. http://aws.amazon.com/storagegateway/pricing/
51. https://aws.amazon.com/importexport/
52. http://aws.amazon.com/importexport/tools/
53. http://docs.aws.amazon.com/AWSImportExport/latest/DG/auth-access-control.html
54. https://aws.amazon.com/about-aws/whats-new/2016/11/aws-snowball-now-a-hipaa-eligible-service/
55. https://docs.aws.amazon.com/AWSImportExport/latest/ug/api-reference.html
56. https://aws.amazon.com/importexport/tools/
57. http://aws.amazon.com/importexport/pricing/
58. http://aws.amazon.com/cloudfront/pricing/
59. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/reports.html
60. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
61. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ReplacingObjects.html
62. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html
63. http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/monitoring-using-cloudwatch.html
64. http://aws.amazon.com/cloudfront/pricing/
65. http://aws.amazon.com/s3/
66. http://aws.amazon.com/glacier/
67. http://aws.amazon.com/efs/
68. http://aws.amazon.com/ebs/
69. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
70. http://aws.amazon.com/storagegateway/
71. http://aws.amazon.com/snowball
72. http://aws.amazon.com/cloudfront/
73. http://aws.amazon.com/tools/
74. http://calculator.s3.amazonaws.com/index.html
75. https://aws.amazon.com/blogs/aws/
76. https://forums.aws.amazon.com/index.jspa
77. http://aws.amazon.com/free/
78. http://aws.amazon.com/solutions/case-studies/
General
Big_Data_Analytics_Options_on_AWS
ArchivedBig Data Analytics Options on AWS December 2018 This paper has been archived For the latest technical information see https://docsawsamazoncom/whitepapers/latest/bigdata analyticsoptions/welcomehtmlArchived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or l icensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 5 The AWS Advantage in Big Data Analytics 5 Amazon Kinesis 7 AWS Lambda 11 Amazon EMR 14 AWS Glue 20 Amazon Machine Learning 22 Amazon DynamoDB 25 Amazon Redshift 29 Amazon Elasticsearch Service 33 Amazon QuickSight 37 Amazon EC2 40 Amazon Athena 42 Solving Big Data Problems on AWS 45 Example 1: Queries against an Amazon S3 Data Lake 47 Example 2: Capturing and Analyzing Sensor Data 49 Example 3: Sentiment Analysis of Social Media 52 Conclusion 54 Contributors 55 Further Reading 55 Document Rev isions 56 Archived Abstract This whitepaper helps architects data scientists and developers understand the big data analytics options available in the AWS cloud by providing an overview of services with the following information: • Ideal usage patterns • Cost model • Performance • Durability and availability • Scalability and elasticity • Interfaces • Anti patterns This paper concludes with scenarios that showcase the an alytics options in use as well as additional resources for getting started with big data analytics on AWS ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 5 of 56 Introduction As we become a more digital society the amount of data being created and collected is growing and accelerating significantly Analysis of this ever growing data becomes a challenge with traditional analytical tools We require innovation to bridge the gap between data being generated and data that can be analyzed effectively Big data tools and technologies offer opportunities and challenges in being able to analyze data efficiently to better understand customer preferences gain a competitive advantage in the marketplace and grow your business Data management architectures have evolved from the traditional data warehousing model to more complex architectures that address more requirements such as realtime and batch processing; structured and unstructured data; high velocity transactions; and so on Amazon Web Services (AWS) provides a broad platform of managed services to help you build secure and seamlessly scale endtoend big data applications quickly and with ease Whether your applications require realtime streaming or batch data processing AWS provides the infrastructure and tools to tackle your next big data project No hardware to procure no infrastructure to maintain and scale —only what you need to collect store process and analyze big data AWS has an ecosystem of analytical solutions specifically 
designed to handle this growing amount of data and provide insight into your business The AWS Advantage in Big Data Analytics Analyzing large data sets requires significant compute capacity that can vary in size based on the amount of input data and the type of analysis This characteristic of big data workloads is ideally suited to the payasyougo cloud computing model where applications can easily scale up and down based on demand As requirements change you can easily resize your environment (horizontally or vertically) on AWS to meet your needs without having to wait for additional hardware or being required to over invest to provision enough capacity For mission critical applications on a more traditional infrastructure system designers have no choice but to over provision because a surge in additional data due to an increase in business need must be something the system can ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 6 of 56 handle By contrast on AWS you can provision more capacity and compute in a matter of minutes meaning that your big data applications grow and shrink as demand dictates and your system runs as close to optimal efficiency as possible In addition you get flexible computing on a global infrastructure with access to the many different geographic regions that AWS offers along with the ability to use other scalable services that augment to build sophisticated big data applications These other services include Amazon Simple Storage Service (Amazon S3) to store data and AWS Glue to orchestrate jobs to move and transform that data easily AWS IoT which lets connected devices interact with cloud applications and other connected devices As the amount of data being generated continues to grow AWS has many options to get that data to the cloud including secure devices like AWS Snowball to accelerate petabyte scale data transfers delivery s treams with Amazon Kinesis Data Firehose to load streaming data continuously migrating databases using AWS D atabase Migration Service and scalable p rivate connections through AWS Direct Connect AWS recently added AWS Snowball Edge which is a 100 TB data transfer device with on board storage and compute capabilities You can use Snowball Edge to move large amounts of data into and out of AWS as a temporary storage tier for large local datasets or to support local workloads in remote or offline locations Additionally you can deploy AWS Lambda code on Snowball Edge to perform tasks such as analyzing data streams or processing data locally As mobile continues to rapidly grow in usage you can use the suite of services within the AWS Mobil e Hub to collect and measure app usage and data or export that data to another service for further custom analysis These capabilities of the AWS platform make it an ideal fit for solving big data problems and many customers have implemented successful big data analytics workloads on AWS For more information about case studies see Big Data Customer Success Stories The following services for collecting processing stori ng and analyzing big data are described in order : • Amazon Kinesis ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 7 of 56 • AWS Lambda • Amazon Elastic MapReduce • Amazon Glue • Amazon Machine Learning • Amazon DynamoDB • Amazon Redshift • Amazon Athena • Amazon Elasticsearch Service • Amazon QuickSight In addition to these services Amazon EC2 instances are available for self managed big data applications Amazon Kinesis Amazon Kinesis is a platform for 
streaming data on AWS making it easy to load and analyze streaming data and also providing the ability for you to build custom streaming data applications for specialized needs With Kinesis you can ingest real time data such as application logs website clickstreams IoT telemetry data and more into your databases data lakes and data warehouses or build y our own real time applications using this data Amazon Kinesis enables you to process and analyze data as it arrives and respond in real time instead of having to wait until all your data is collected before the processing can begin Currently there are 4 pieces of the Kinesis platform that can be utilized based on your use case : • Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data • Amazon Kinesis Video Streams enables you to build custom applications that process or analyze streaming video • Amazon Kinesis Data Firehose enables you to deliver real time streaming data to AWS destinations such as Amazon S3 Amazon Redshift Amazon Kinesis Analytics and Amazon Elasticsearch Service • Amazon Kinesis Data Analytics enables you to process and analyze streaming data with standard SQL ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 8 of 56 Kinesis Data Streams and Kinesi s Video Streams enable you to build custom applications that process or analyze streaming data in real time Kinesis Data Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website clickstreams financial transactions social media feeds IT logs and location tracking events Kinesis Video Streams can continuously capture video data from smartphones security cameras drones satellites dashcams and other edge devices With the Amazon Kinesis Client Library (KCL) you can build Amazon Kinesis applications and use streaming data to power real time dashboards generate alerts and implement dynamic pricing and advertising You can also emit data from Kinesis Data Streams and Kinesis Video Streams to other AWS services such as Amazon Simple Storage Service (Amazon S3) Amazon Redshift Amazon Elastic MapReduce (Amazon EMR) and AWS Lambda Provision the level of input and output required for your data stream in blocks of 1 megabyte per second (MB/sec) using the AWS Management Console API or SDK s The size of your stream can be adjusted up or down at any time without restarting the stream and without any impact on the data sources pushing data to the stream Within seconds data put into a stream is available for analysis With Kinesis Data Firehose you do not need to write applications or manage resources You configure your data producers to send data to Kinesis Firehose and it automatically delivers the data to the AWS destination that you specified You can also configure Kinesis Data Firehose t o transform your data before data delivery It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration It can also batch compress and encrypt the data before loading it minimizing the amount of storage used at the destination and increasing security Amazon Kinesis Data Analytics is the easiest way to process and analyze real time streaming data With Kinesis Data Anal ytics you just use standard SQL to process your data streams so you don’t have to learn any new programming languages Simply point Kinesis Data Analytics at an incoming data stream write your SQL queries and specify where you want to load 
the results. Kinesis Data Analytics takes care of running your SQL queries continuously on data while it's in transit and sending the results to the destinations. In the subsequent sections we will focus primarily on Amazon Kinesis Data Streams.

Ideal Usage Patterns
Amazon Kinesis Data Streams is useful wherever there is a need to move data rapidly off producers (data sources) and continuously process it. That processing can be to transform the data before emitting it into another data store, drive real-time metrics and analytics, or derive and aggregate multiple streams into more complex streams or downstream processing. The following are typical scenarios for using Kinesis Data Streams for analytics:
• Real-time data analytics – Kinesis Data Streams enables real-time data analytics on streaming data, such as analyzing website clickstream data and customer engagement analytics.
• Log and data feed intake and processing – With Kinesis Data Streams, you can have producers push data directly into an Amazon Kinesis stream. For example, you can submit system and application logs to Kinesis Data Streams and access the stream for processing within seconds. This prevents the log data from being lost if the front end or application server fails, and reduces local log storage on the source. Kinesis Data Streams provides accelerated data intake because you are not batching up the data on the servers before you submit it for intake.
• Real-time metrics and reporting – You can use data ingested into Kinesis Data Streams for extracting metrics and generating KPIs to power reports and dashboards at real-time speeds. This enables data processing application logic to work on data as it is streaming in continuously, rather than wait for data batches to arrive.

Cost Model
Amazon Kinesis Data Streams has simple pay-as-you-go pricing with no upfront costs or minimum fees, and you only pay for the resources you consume. An Amazon Kinesis stream is made up of one or more shards. Each shard gives you a capacity of 5 read transactions per second, up to a maximum total of 2 MB of data read per second. Each shard can support up to 1,000 write transactions per second and up to a maximum total of 1 MB of data written per second. The data capacity of your stream is a function of the number of shards that you specify for the stream; the total capacity of the stream is the sum of the capacity of each shard. There are just two pricing components: an hourly charge per shard and a charge for each 1 million PUT transactions. For more information, see Amazon Kinesis Data Streams Pricing. Applications that run on Amazon EC2 and process Amazon Kinesis streams also incur standard Amazon EC2 costs.

Performance
Amazon Kinesis Data Streams allows you to choose the throughput capacity you require in terms of shards. With each shard in an Amazon Kinesis stream, you can capture up to 1 megabyte per second of data at 1,000 write transactions per second. Your Amazon Kinesis applications can read data from each shard at up to 2 megabytes per second. You can provision as many shards as you need to get the throughput capacity you want; for instance, a 1 gigabyte per second data stream would require 1,024 shards.
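To make the shard sizing concrete, the short Python sketch below applies the per-shard limits quoted above (1 MB/s and 1,000 records/s in, 2 MB/s out) to an assumed workload; the traffic figures are hypothetical and only illustrate the arithmetic.

import math

# Per-shard limits from the discussion above
WRITE_MB_PER_SEC = 1.0        # 1 MB/s ingest per shard
WRITE_RECORDS_PER_SEC = 1000  # 1,000 PUT records/s per shard
READ_MB_PER_SEC = 2.0         # 2 MB/s egress per shard

# Hypothetical workload: 500 producers, each writing 4 records/s of 2 KB
records_per_sec = 500 * 4
mb_in_per_sec = records_per_sec * 2 / 1024.0
mb_out_per_sec = mb_in_per_sec            # assume one consuming application

shards_needed = max(
    math.ceil(mb_in_per_sec / WRITE_MB_PER_SEC),
    math.ceil(records_per_sec / WRITE_RECORDS_PER_SEC),
    math.ceil(mb_out_per_sec / READ_MB_PER_SEC),
)
print(f"Provision at least {shards_needed} shards")  # -> 4 for this workload

Checking all three limits (ingest bytes, ingest records, and egress bytes) and taking the maximum is the same reasoning that leads to the 1,024-shard figure for a 1 GB/s stream above.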
Durability and Availability
Amazon Kinesis Data Streams synchronously replicates data across three Availability Zones in an AWS Region, providing high availability and data durability. Additionally, you can store a cursor in DynamoDB to durably track what has been read from an Amazon Kinesis stream. In the event that your application fails in the middle of reading data from the stream, you can restart your application and use the cursor to pick up from the exact spot where the failed application left off.

Scalability and Elasticity
You can increase or decrease the capacity of the stream at any time according to your business or operational needs, without any interruption to ongoing stream processing. By using API calls or development tools, you can automate scaling of your Amazon Kinesis Data Streams environment to meet demand and ensure you only pay for what you need.

Interfaces
There are two interfaces to Kinesis Data Streams: input, which is used by data producers to put data into Kinesis Data Streams, and output, to process and analyze data that comes in. Producers can write data using the Amazon Kinesis PUT API, an AWS Software Development Kit (SDK) or toolkit abstraction, the Amazon Kinesis Producer Library (KPL), or the Amazon Kinesis Agent. For processing data that has already been put into an Amazon Kinesis stream, there are client libraries provided to build and operate real-time streaming data processing applications. The KCL17 acts as an intermediary between Amazon Kinesis Data Streams and your business applications, which contain the specific processing logic. There is also integration to read from an Amazon Kinesis stream into Apache Storm via the Amazon Kinesis Storm Spout.

Anti-Patterns
Amazon Kinesis Data Streams has the following anti-patterns:
• Small scale consistent throughput – Even though Kinesis Data Streams works for streaming data at 200 KB/sec or less, it is designed and optimized for larger data throughputs.
• Long-term data storage and analytics – Kinesis Data Streams is not suited for long-term data storage. By default, data is retained for 24 hours, and you can extend the retention period by up to 7 days. You can move any data that needs to be stored for longer than 7 days into another durable storage service such as Amazon S3, Amazon Glacier, Amazon Redshift, or DynamoDB.
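Before moving on to AWS Lambda, the following minimal sketch shows the two interfaces described above from Python using the AWS SDK (boto3) rather than the KPL/KCL: a producer that puts one record, and a simple consumer that polls a single shard. The stream name and payload are hypothetical, and a production consumer would normally rely on the KCL for checkpointing rather than polling one shard directly.

import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Producer: write one record; the partition key determines the target shard
kinesis.put_record(
    StreamName="clickstream-example",          # hypothetical stream name
    Data=json.dumps({"user": "u42", "page": "/home"}).encode("utf-8"),
    PartitionKey="u42",
)

# Consumer: read from the first shard of the stream
shard_id = kinesis.describe_stream(StreamName="clickstream-example")[
    "StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="clickstream-example",
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",          # start from the oldest retained record
)["ShardIterator"]
response = kinesis.get_records(ShardIterator=iterator, Limit=100)
for record in response["Records"]:
    print(record["Data"])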
AWS Lambda
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume; there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

Ideal Usage Pattern
AWS Lambda enables you to execute code in response to triggers such as changes in data, shifts in system state, or actions by users. Lambda can be directly triggered by AWS services such as Amazon S3, DynamoDB, Amazon Kinesis Data Streams, Amazon Simple Notification Service (Amazon SNS), and CloudWatch, allowing you to build a variety of real-time data processing systems:
• Real-time File Processing – You can trigger Lambda to invoke a process where a file has been uploaded to Amazon S3 or modified. For example, to change an image from color to grayscale after it has been uploaded to Amazon S3.
• Real-time Stream Processing – You can use Kinesis Data Streams and Lambda to process streaming data for clickstream analysis, log filtering, and social media analysis.
• Extract, Transform, Load – You can use Lambda to run code that transforms data and loads that data from one data repository to another.
• Replace Cron – Use schedule expressions to run a Lambda function at regular intervals, as a cheaper and more available solution than running cron on an EC2 instance.
• Process AWS Events – Many other services, such as AWS CloudTrail, can act as event sources simply by logging to Amazon S3 and using S3 bucket notifications to trigger Lambda functions.

Cost Model
With AWS Lambda you only pay for what you use. You are charged based on the number of requests for your functions and the time your code executes. The Lambda free tier includes 1 million free requests per month and 400,000 GB-seconds of compute time per month. You are charged $0.20 per 1 million requests thereafter ($0.0000002 per request). Additionally, the duration of your code executing is priced in relation to the memory allocated; you are charged $0.00001667 for every GB-second used. See Lambda pricing for more details.

Performance
After deploying your code into Lambda for the first time, your functions are typically ready to call within seconds of upload. Lambda is designed to process events within milliseconds. Latency will be higher immediately after a Lambda function is created or updated, or if it has not been used recently. To improve performance, Lambda may choose to retain an instance of your function and reuse it to serve a subsequent request, rather than creating a new copy. To learn more about how Lambda reuses function instances, see our documentation. Your code should not assume that this reuse will always happen.
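As a concrete illustration of the real-time file processing pattern above, here is a minimal sketch of a Python Lambda handler that responds to an S3 upload notification. The processing step is a placeholder assumption for the example; the client is created outside the handler so it can be reused when Lambda keeps the function instance warm, but as noted above the code does not depend on that reuse.

import urllib.parse
import boto3

s3 = boto3.client("s3")  # created once per container, reused on warm invocations

def handler(event, context):
    # An S3 event notification may carry one or more records
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Placeholder processing step: inspect the object's metadata.
        # A real function might convert an image, filter a log file, or
        # load the object into another data store.
        head = s3.head_object(Bucket=bucket, Key=key)
        print(f"Processed s3://{bucket}/{key} ({head['ContentLength']} bytes)")

    return {"objects_processed": len(event["Records"])}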
Durability and Availability
AWS Lambda is designed to use replication and redundancy to provide high availability for both the service itself and for the Lambda functions it operates. There are no maintenance windows or scheduled downtimes for either. On failure, Lambda functions being invoked synchronously respond with an exception. Lambda functions being invoked asynchronously are retried at least 3 times, after which the event may be rejected.

Scalability and Elasticity
There is no limit on the number of Lambda functions that you can run. However, Lambda has a default safety throttle of 1,000 concurrent executions per account per region; a member of the AWS support team can increase this limit. Lambda is designed to scale automatically on your behalf. There are no fundamental limits to scaling a function; Lambda dynamically allocates capacity to match the rate of incoming events.

Interfaces
Lambda functions can be managed in a variety of ways. You can easily list, delete, update, and monitor your Lambda functions using the dashboard in the Lambda console. You also can use the AWS CLI and AWS SDK to manage your Lambda functions. You can trigger a Lambda function from an AWS event such as Amazon S3 bucket notifications, Amazon DynamoDB Streams, Amazon CloudWatch Logs, Amazon Simple Email Service (Amazon SES), Amazon Kinesis Data Streams, Amazon SNS, Amazon Cognito, and more. Any API call in any service that supports AWS CloudTrail can be processed as an event in Lambda by responding to CloudTrail audit logs. For more information about event sources, see Core Components: AWS Lambda Function and Event Sources.
AWS Lambda supports code written in Node.js (JavaScript), Python, Java (Java 8 compatible), C# (.NET Core), Go, PowerShell, and Ruby. Your code can include existing libraries, even native ones. Please read our documentation on using Node.js, Python, Java, C#, Go, PowerShell, and Ruby.

Anti-Patterns
• Long Running Applications – Each Lambda function must complete within 900 seconds. For long-running applications that may require jobs to run longer than fifteen minutes, Amazon EC2 is recommended. Alternately, create a chain of Lambda functions where function 1 calls function 2, which calls function 3, and so on until the process is completed. See Creating a Lambda State Machine for more information.
• Dynamic Websites – While it is possible to run a static website with AWS Lambda, running a highly dynamic and large-volume website can be performance prohibitive. Utilizing Amazon EC2 and Amazon CloudFront would be a recommended use case.
• Stateful Applications – Lambda code must be written in a "stateless" style, i.e., it should assume there is no affinity to the underlying compute infrastructure. Local file system access, child processes, and similar artifacts may not extend beyond the lifetime of the request, and any persistent state should be stored in Amazon S3, DynamoDB, or another Internet-available storage service.

Amazon EMR
Amazon EMR is a highly distributed computing framework to easily process and store data quickly in a cost-effective manner. Amazon EMR uses Apache Hadoop, an open source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances, and allows you to use the most common Hadoop tools such as Hive, Pig, Spark, and so on. Hadoop provides a framework to run big data processing and analytics. Amazon EMR does all the work involved with provisioning, managing, and maintaining the infrastructure and software of a Hadoop cluster.

Ideal Usage Patterns
Amazon EMR's flexible framework reduces large processing problems and data sets into smaller jobs and distributes them across many compute nodes in a Hadoop cluster. This capability lends itself to many usage patterns with big data analytics. Here are a few examples:
• Log processing and analytics
• Large extract, transform, and load (ETL) data movement
• Risk modeling and threat analytics
• Ad targeting and clickstream analytics
• Genomics
• Predictive analytics
• Ad hoc data mining and analytics
For more information, see the documentation for Amazon EMR.

Cost Model
With Amazon EMR, you can launch a persistent cluster that stays up indefinitely or a temporary cluster that terminates after the analysis is complete. In either scenario, you only pay for the hours the cluster is up. Amazon EMR supports a variety of Amazon EC2 instance types (standard, high CPU, high memory, high I/O, and so on) and all Amazon EC2 pricing options (On-Demand, Reserved, and Spot). When you launch an Amazon EMR cluster (also called a "job flow"), you choose how many and what type of Amazon EC2 instances to provision. The Amazon EMR price is in addition to the Amazon EC2 price. For more information, see Amazon EMR Pricing.
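The boto3 sketch below shows what launching such a transient cluster ("job flow") can look like from Python: it requests a small set of instances, installs Hive and Spark, runs one step, and terminates when the step finishes. The bucket names, instance types, and counts are assumptions for illustration, and the default EMR IAM roles are presumed to already exist in the account.

import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="nightly-log-analytics",                 # hypothetical cluster name
    ReleaseLabel="emr-5.30.0",
    Applications=[{"Name": "Hive"}, {"Name": "Spark"}],
    LogUri="s3://example-bucket/emr-logs/",        # hypothetical bucket
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,                        # 1 master + 2 core nodes
        "KeepJobFlowAliveWhenNoSteps": False,      # terminate after the steps finish
    },
    Steps=[{
        "Name": "spark-etl-step",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/etl_job.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    VisibleToAllUsers=True,
)
print("Cluster started:", response["JobFlowId"])

Because the cluster terminates when its steps complete, you pay only for the hours it runs, which matches the temporary-cluster cost model described above.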
Performance
Amazon EMR performance is driven by the type of EC2 instances you choose to run your cluster on and how many you choose to run for your analytics. You should choose an instance type suitable for your processing requirements, with sufficient memory, storage, and processing power. For more information about EC2 instance specifications, see Amazon EC2 Instance Types.

Durability and Availability
By default, Amazon EMR is fault tolerant for core node failures and continues job execution if a slave node goes down. Amazon EMR will also provision a new node when a core node fails. However, Amazon EMR will not replace nodes if all nodes in the cluster are lost. Customers can monitor the health of nodes and replace failed nodes with CloudWatch.

Scalability and Elasticity
With Amazon EMR, it is easy to resize a running cluster. You can add core nodes, which hold the Hadoop Distributed File System (HDFS), at any time to increase your processing power and increase the HDFS storage capacity (and throughput). Additionally, you can use Amazon S3 natively, or using EMRFS, along with or instead of local HDFS, which allows you to decouple your memory and compute from your storage, providing greater flexibility and cost efficiency. You can also add and remove task nodes at any time, which can process Hadoop jobs but do not maintain HDFS. Some customers add hundreds of instances to their clusters when their batch processing occurs, and remove the extra instances when processing completes. For example, you may not know how much data your clusters will be handling in 6 months, or you may have spiky processing needs. With Amazon EMR you don't need to guess your future requirements or provision for peak demand, because you can easily add or remove capacity at any time. Additionally, you can add whole new clusters of various sizes and remove them at any time with a few clicks in the console or by a programmatic API call.

Interfaces
Amazon EMR supports many tools on top of Hadoop that can be used for big data analytics, and each has its own interfaces. Here is a brief summary of the most popular options:

Hive
Hive is an open source data warehouse and analytics package that runs on top of Hadoop. Hive is operated by Hive QL, a SQL-based language which allows users to structure, summarize, and query data. Hive QL goes beyond standard SQL, adding first-class support for map/reduce functions and complex extensible user-defined data types like JSON and Thrift. This capability allows processing of complex and unstructured data sources such as text documents and log files. Hive allows user extensions via user-defined functions written in Java. Amazon EMR has made numerous improvements to Hive, including direct integration with DynamoDB and Amazon S3. For example, with Amazon EMR you can load table partitions automatically from Amazon S3, you can write data to tables in Amazon S3 without using temporary files, and you can access resources in Amazon S3, such as scripts for custom map and/or reduce operations and additional libraries. For more information, see Apache Hive in the Amazon EMR Release Guide.

Pig
Pig is an open source analytics package that runs on top of Hadoop. Pig is operated by Pig Latin, a SQL-like language which allows users to structure, summarize, and query data. As well as SQL-like operations, Pig Latin also adds first-class support for map and reduce functions and complex extensible user-defined data types. This capability allows processing of complex and unstructured data sources such as text documents and log files. Pig allows user extensions via user-defined functions written in Java. Amazon EMR has made numerous improvements to Pig, including the ability to use multiple file systems (normally Pig can only access
one remote file system) the ability to load customer JARs and scripts from Amazon S3 (such as “REGISTER s3://my bucket/piggybankjar”) and additional functionality for String and DateTime processing For more information see Apache Pig33 in the Amazon EMR Release Guide Spark Spark is an open source data analytics engine built on Hadoop with the fundamentals for inmemory MapReduce Spark provides additional speed for certain analytics and is the foundation for other power tools such as Shark (SQL driven data warehousing) Spark Streaming (streaming applications) GraphX (graph systems) and MLlib (machine learning) For more information see Apache Spark on Amazon EMR HBase HBase is an open source nonrelational distributed database modeled after Google's BigTable It was developed as part of Apache Software Foundation's Hadoop project and runs on top of Hadoop Distributed File System (HDFS) to provide BigTable like capabilities for Hadoop HBase provides you a fault tolerant efficient way of storing large quantities of sparse data using column ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 18 of 56 based compression and storage In addition HBase provides fast lookup of data because data is stored inmemory instead of on disk HBase is optimized for sequential write operations and it is highly efficient for batch inserts updates and deletes HBase works seamlessly with Hadoop sharing its file system and serving as a direct input and output to Hadoop jobs HBase also integrates with Apache Hive enabling SQL like queries over HBase tables joins with Hive based tables and support for Java Database Connectivity (JDBC) With Amazon EMR you can back up HBase to Amazon S3 (full or incremental manual or automated) and you can restore from a previously created backup For more information see Apache HBase in the Amazon EMR Release Guide Hunk Hunk was developed by Splunk to make machine data accessible usable and valuable to everyone With Hunk you can interactively explore analyze and visualize data stored in Amazon EMR and Amazon S3 harnessing Splunk analytics on Hadoop For more information see Amazon EMR with Hunk: Splunk Analytics for Hadoop and NoSQL Presto Presto is an open source distributed SQL query engine optimized for low latency adhoc analysis of data It supports the ANSI SQL standard including complex queries aggregations joins and window functions Presto can process data from multiple data sources including the Hadoop Distributed File System (HDFS) and Amazon S3 Kinesis Connector The Kinesis Connector enables EMR to directly read and query data from Kinesis Data Streams You can perform batch processing of Kinesis streams using existing Hadoop ecosystem tools such as Hive Pig MapRedu ce Hadoop Streaming and Cascading Some use cases enabled by this integration are: • Streaming Log Analysis: You can analyze streaming web logs to generate a list of top 10 error type every few minutes by region browser and access domains • Complex Data Processing Workflows: You can join Kinesis stream with data stored in Amazon S3 Dynamo DB tables and HDFS You can write queries that join clickstream data from Kinesis with advertising ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 19 of 56 campaign information stored in a DynamoDB table to identify the most effective categories of ads that are displayed on particular websites • Adhoc Queries: You can periodically load data from Kinesis into HDFS and make it available as a local Impala table for fast interactive analytic queries Other 
third party tools Amazon EMR also supports a variety of other popular applications and tools in the Hadoop ecosystem such as R (statistics) Mahout (machine learning) Ganglia (monitoring) Accumulo (secure NoSQL database) Hue (user interface to analyze Hadoop data) Sqoo p (relational database connector) HCatalog (table and storage management) and more Additionally you can install your own software on top of Amazon EMR to help solve your business needs AWS provides the ability to quickly move large amounts of data from Amazon S3 to HDFS from HDFS to Amazon S3 and between Amazon S3 buckets using Amazon EMR’s S3DistCp an extension of the open source tool DistCp that uses MapReduce to efficiently move large amounts of data You can optionally use the EMR File System (EMRFS) an implementation of HDFS which allows Amazon EMR clusters to store data on Amazon S3 You can enable Amazon S3 server side and client side encryption When you use EMRFS a metadata store is transparently built in DynamoDB to help manage the interactions with Amazon S3 and allows you to have multiple EMR clusters easily use the same EMRFS metadata and storage on Amazon S3 AntiPatterns Amazon EMR has the following antipatterns: • Small data sets – Amazon EMR is built for massive parallel processing; if your data set is small enough to run quickly on a single machine in a single thread the added overhead to map and reduce jobs may not be worth it for small data sets that can easily be processed in memory on a single system • ACID transaction requirements – While there are ways to achieve ACID (atomicity consistency isolation durability) or limited ACID on Hadoop using another database such as Amazon Relational Database ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 20 of 56 Service (Amazon RDS ) or a relational database running on Amazon EC2 may be a better option for workloads with stringent requirements AWS Glue AWS Glue is a fully managed extract transform and load (ETL) service that you can use to catalog your data clean it enrich it and move it reliably between data stores With AWS Glue you can significantly red uce the cost complexity and time spent creating ETL jobs AWS Glue is Serverless so there is no infrastructure to setup or manage You pay only for the resources consumed while your jobs are running Ideal Usage Patterns AWS Glue is designed to easily prepare data for extract transform and load (ETL) jobs Using AWS Glue gives you the following benefits: • AWS Glue can automatically crawl your data and generate code to execute or data transformations and loading processes • Integration with services like Amazon Athena Amazon EMR and Amazon Redshift • Serverless no infrastructure to provision or manage • AWS Glue generates ETL code that is customizable reusable and portable using familiar technology – Python and Spark Cost Model With AWS Glue you pay an hourly rate billed by the minute for crawler jobs (discovering data) and ETL jobs (processing and loading data) For the AWS Glue Data Catalog you pay a simple monthly fee for storing and accessing the metadata The fi rst million objects stored are free and the first million accesses are free If you provision a development endpoint to interactively develop your ETL code you pay an hourly rate billed per minute See AWS Glue Pricing for more details ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 21 of 56 Performance AWS Glue uses a scale out Apache Spark environment to load your data into its destination You can simply specify the 
number of Data Processing Units (DPUs ) that you want to allocate to your ETL job A n AWS Glue ETL job requires a minimum of 2 DPUs By default AWS Glue allocates 10 DPUs to each ETL job Additional DPUs can be added to increase the performance of your ETL job Multiple jobs can be triggered in parallel or sequentially by triggering them on a job completion event You can also trigger one or more AWS Glue jobs from an external source such as an AWS Lambda function Durability and Availability AWS Glue connects to the data source of your preference whether it is in an Amazon S3 file an Amazon RDS table or another set of data As a result all your data is stored and available as it pertains to that data stores durability characteristics The AWS Glue service provides status of each job and pushes al l notifications to Amazon CloudWatch events You can setup SNS notifications using CloudWatch actions to be informed of job failures or completions Scalability and Elasticity AWS Glue provides a managed ETL service that runs on a Serverless Apache Spark e nvironment This allows you to focus on your ETL job and not worry about configuring and managing the underlying compute resources AWS Glue works on top of the Apache Spark environment to provide a scale out execution environment for your data transformat ion jobs Interfaces AWS Glue provides a number of ways to populate metadata into the AWS Glue Data Catalog AWS Glue crawlers scan various data stores you own to automatically infer schemas and partition structure and populate the AWS Glue Data Catalog w ith corresponding table definitions and statistics You can also schedule crawlers to run periodically so that your metadata is always up todate and in sync with the underlying data Alternately you can add and update table details manually by using the AWS Glue Console or by calling the API You can also run Hive DDL statements via the Amazon Athena Console or a Hive client ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 22 of 56 on an Amazon EMR cluster Finally if you already have a persistent Apache Hive Metastore you can perform a bulk import of that met adata into the AWS Glue Data Catalog by using our import script AntiPatterns AWS Glue has the following antipatterns: • Streaming data – AWS Glue ETL is batch oriented and you can schedule your ETL jobs at a minimum of 5 min ute intervals While it can process micro batches it does not handle streaming data If your use case requires you to ETL data while you stream it in you can perfo rm the first leg of your ETL using Amazon Kinesis Amazon Kinesis Data Firehose or Amazon Kinesis Analytics Then store the data in either Amazon S3 or Amazon Redshift and trigger a n AWS Glue ETL job to pick up that dataset and continue applying additiona l transformations to that data • Multiple ETL engines – AWS Glue ETL jobs are PySpark based If your use case requires you to use an engine other than Apache Spark or if you want to run a heterogeneous set of jobs that run on a variety of engines like Hive Pig etc then AWS Data Pipeline or Amazon EMR would be a better choice • NoSQL Databases – Currently AWS Glue does not support data sources like NoSQL databases or Amazon DynamoDB Since NoSQL databases do not require a rigid schema like traditional rela tional databases most common ETL jobs would not apply Amazon Machine Learning Amazon Machine Learning (Amazon ML) is a service that makes it easy for anyone to use predictive analytics and machine learning technology Amazon ML provides visualization 
tools and wizards to guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology After your models are ready Amazon ML makes it easy to obtain predictions for your application using API operations without having to implement custom prediction generation code or manage any infrastructure ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 23 of 56 Amazon ML can create ML models based on data stored in Amazon S3 Amazon Redshift or Amazon RDS Built in wizards guide you through the steps of interactively exploring your data to training the ML model to evaluating the model quality and adjusting outputs to align with business goals After a model is ready you can request predictions in either batches or using the lowlatency realtime API Ideal Usage Patterns Amazon ML is ideal for discovering patterns in your data and using these patterns to create ML models that can generate predictions on new unseen data points For example you can: • Enable applications to flag suspicious transactions – Build an ML model that predicts whether a new transaction is legitimate or fraudulent • Forecast product demand – Input historical order information to predict future order quantities • Personalize application content – Predict which items a user will be most interested in and retrieve these predictions from your application in realtime • Predict user activity – Analyze user behavior to customize your website and provide a better user experience • Listen to social media – Ingest and analyze social media feeds that potentially impact business decisions Cost Model With Amazon ML you pay only for what you use There are no minimum fees and no upfront commitments Amazon ML charges an hourly rate for the compute time used to build predictive models and then you pay for the number of predictions generated for your application For realtime predictions you also pay an hourly reserved capacity charge based on the amount of memory required to run your model The charge for data analysis model training and evaluation is based on the number of compute hours required to perform them and depends on the size of the input data the number of attributes within it and the number and types of transformations applied Data analysis and model building fees are priced at ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 24 of 56 $042 per hour Prediction fees are categorized as batch and realtime Batch predictions are $010 per 1000 predictions rounded up to the next 1000 while realtime predictions are $00001 per prediction rounded up to the nearest penny For realtime predictions there is also a reserved capacity charge of $0001 per hour for each 10 MB of memory provisioned for your model During model creation you specify the maximum memory size of each model to manage cost and control predictive performance You pay the reserved capacity charge only while your model is enabled for realtime predictions Charges for data stored in Amazon S3 Amazon RDS or Amazon Redshift are billed separately For more information see Amazon Machine Learning Pricing Performance The time it takes to create models or to request batch predictions from these models depends on the number of input data records the types and distribution of attributes within these records and the complexity of the data processing “recipe” that you specify Most realtime prediction requests return a response within 100 ms making them fast enough for interactive web mobile or 
desktop applications The exact time it takes for the realtime API to generate a prediction varies depending on the size of the input data record and the complexity of the data processing “recipe ” associated with the ML model that is generating the predictions Each ML model that is enabled for realtime predictions can be used to request up to 200 transactions per second by default and this number can be increased by contacting customer support You can monitor the number of predictions requested by your ML models by using CloudWatc h metrics Durability and Availability Amazon ML is designed for high availability There are no maintenance windows or scheduled downtimes The service runs in Amazon’ s proven high availability data centers with service stack replication configured across three facilities in each AWS Region to provide fault tolerance in the event of a server failure or Availability Zone outage Scalability and Elasticity By default you can process data sets that are up to 100 GB (this can be increased with a support ticket) in size to create ML models or to request batch ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 25 of 56 predictions For large volumes of batch predictions you can split your input data records into separate chunks to enable the processing of larger prediction data volume By default you can run up to five simultaneous jobs and by contacting customer service you can have this limit raised Because Amazon ML is a managed service there are no servers to provision and as a result you are able to scale as your application grows without having to over provision or pay for resources not being used Interfaces Creating a data source is as simple as adding your data to Amazon S3 or you can pull data directly from Amazon Redshift or MySQL databases managed by Amazon RDS After your data source is defined you can interact with Amazon ML using the console Programmatic access to Amazon ML is enabled by the AWS SDKs and Amazon ML API You can also create and manage Amazon ML entities using the AWS CLI available on Windows Mac and Linux/UNIX systems AntiPatterns Amazon ML has the following antipatterns: • Very large data sets – While Amazon ML can support up to a default 100 GB of data (this can be increased with a support ticket) terabyte scale ingestion of data is not currently supported Using Amazon EMR to run Spark’s Machine Learning Library (MLlib) is a common tool for such a use case • Unsupported learning tasks – Amazon ML can be used to create ML models that perform binary classification (choose one of two choices and provide a measure of confidence) multiclass classification (extend choices to beyond two options) or numeric regression (predict a number directly) Unsupported ML tasks such as sequence prediction or unsupervised clustering can be approached by using Amazon EMR to run Spark and MLlib Amazon DynamoDB Amazon DynamoDB is a fast fully managed NoSQL database service that makes it simple and cost effective to store and retrieve any amount of data and serve ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 26 of 56 any level of request traffic DynamoDB helps offload the administrative burden of operating and scaling a highly available distributed database cluster This storage alternative meets the latency and throughput requirements of highly demanding applications by providing single digit millisecond latency and predictable performance with seamless throughput and storage scalability DynamoDB stores structured data in tables 
indexed by primary key and allows lowlatency read and write access to items ranging from 1 byte up to 400 KB DynamoDB supports three data types (number string and binary) in both scalar and multi valued sets It supports document stores such as JSON XML or HTML in these data types Tables do not have a fixed schema so each data item can have a different number of attributes The primary key can either be a single attribute hash key or a composite hash range key DynamoDB offers both global and local secondary indexes provide additional flexibility for querying against attributes other than the primary key DynamoDB provides both eventually consistent reads (by default) and strongly consistent reads (optional) as well as implicit item level transactions for item put update delete conditional operations and increment/decrement DynamoDB is integrated with other services such as Amazon EMR Amazon Redshift AWS Data Pipeline and Amazon S3 for analytics data warehouse data import/export backup and archive Ideal Usage Patterns DynamoDB is ideal for existing or new applications that need a flexible NoSQL database with low read and write latencies and the ability to scale storage and throughput up or down as needed without code changes or downtime Common use cases include: • Mobile apps • Gaming • Digital ad serving • Live voting • Audience interaction for live events ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 27 of 56 • Sensor networks • Log ingestion • Access control for webbased content • Metadata storage for Amazon S3 objects • Ecommerce shopping carts • Web session management Many of these use cases require a highly available and scalable database because downtime or performance degradation has an immediate negative impact on an organization’s business Cost Model With DynamoDB you pay only for what you use and there is no minimum fee DynamoDB has three pricing components: provisioned throughput capacity (per hour) indexed data storage (per GB per month) data transfer in or out (per GB per month) New customers can start using DynamoDB for free as part of the AWS Free Usage Tier For more information see Amazon DynamoDB Pricing Performance SSDs and limiting indexing on attributes provides high throughput and low latency and drastically reduces the cost of read and write operations As the datasets grow predictable performance is required so that lowlatency for the workloads can be maintained This predictable performance can be achieved by defining the provisioned throughput capacity required for a given table Behind the scenes the service handles the provisioning of resources to achieve the requested throughput rate; you don’t need to think about instances hardware memory and other factors that can affect an application’s throughput rate Provisioned throughput capacity reservations are elastic and can be increased or decreased on demand Durability and Availability DynamoDB has built in fault tolerance that automatically and synchronously replicates data across three data centers in a region for high availability and to help protect data against individual machine or even facility failures ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 28 of 56 Amazon DynamoDB Streams captures all data activity that happens on your table and allows the ability to set up regional replication from one geographic region to another to provide even greater availability Scalability and Elasticity DynamoDB is both highly scalable and elastic There is no limit to the amount of data 
that you can store in a DynamoDB table and the service automatically allocates more storage as you store more data using the DynamoDB write API operations Data is automatically partitioned and repartitioned as needed while the use of SSDs provides predictable lowlatency response times at any scale The service is also elastic in that you can simply “dial up” or “dial down” the read and write capacity of a table as your needs change Interfaces DynamoDB provides a lowlevel REST API as well as higher level SDKs for Java ET and PHP that wrap the lowlevel REST API and provide some object relational mapping (ORM) functions These APIs provide both a management and data interface for DynamoDB The API currently offers operations that enable table management (creating listing deleting and obtaining metadata) and working with attributes (getting writing and deleting attributes; query using an index and full scan) While standard SQL isn’t available you can use the DynamoDB select operation to create SQL like queries that retrieve a set of attributes based on criteria that you provide You can also work with DynamoDB using the console AntiPatterns DynamoDB has the following antipatterns: • Prewritten application tied to a traditional relational database – If you are attempting to port an existing application to the AWS cloud and need to continue using a relational database you can use either Amazon RDS (Amazon Aurora MySQL PostgreSQL Oracle or SQL Server) or one of the many preconfigured Amazon EC2 database AMIs You can also install your choice of database software on an EC2 instance that you manage • Joins or complex transactions – While many solutions are able to leverage DynamoDB to support their users it’s possible that your ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 29 of 56 application may require joins complex transactions and other relational infrastructure provided by traditional database platforms If this is the case you may want to explore Amazon Redshift Amazon RDS or Amazon EC2 with a selfmanaged database • Binary large objects (BLOB) data – If you plan on storing large (greater than 400 KB) BLOB data such as digital video images or music you’ll want to consider Amazon S3 However DynamoDB can be used in this scenario for keeping track of metadata (eg item name size date created owner location etc) about your binary objects • Large data with low I/O rate –DynamoDB uses SSD drives and is optimized for workloads with a high I/O rate per GB stored If you plan to store very large amounts of data that are infrequently accessed other storage options may be a better choice such as Amazon S3 Amazon Redshift Amazon Redshift is a fast fully managed petabyte scale data warehouse service that makes it simple and costeffective to analyze all your data efficiently using your existing business intelligence tools It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and is designed to cost less than a tenth of the cost of most traditional data warehousing solutions Amazon Redshift delivers fast query and I/O performance for virtually any size dataset by using columnar storage technology while parallelizing and distributing queries across multiple nodes It automates most of the common administrative tasks associated with provisioning configuring monitoring backing up and securing a data warehouse making it easy and inexpensive to manage and maintain This automation allows you to build petabyte scale data warehouses in minutes instead of weeks or 
months taken by traditional on premises implementations ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 30 of 56 Amazon Redshift Spectrum is a feature that enables you to run queries against exabyte s of unstructured dat a in Amazon S3 with no loading or ETL required When you issue a query it goes to the Amazon Redshift SQL endpoint which generates and optimizes a query plan Amazon Redshift determines what data is local and what is in Amazon S3 generates a plan to mi nimize the amount of Amazon S3 data that needs to be read and then requests Redshift Spectrum workers out of a shared resource pool to read and process the data from Amazon S3 Ideal Usage Patterns Amazon Redshift is ideal for online analytical processing (OLAP) using your existing business intelligence tools Organizations are using Amazon Redshift to: • Analyze global sales data for multiple products • Store historical stock trade data • Analyze ad impressions and clicks • Aggregate gaming data • Analyze social trends • Measure clinical quality operation efficiency and financial performance in health care Cost Model An Amazon Redshift data warehouse cluster requires no long term commitments or upfront costs This frees you from the capital expense and complexity of planning and purchasing data warehouse capacity ahead of your needs Charges are based on the size and number of nodes of your cluster There is no additional charge for backup storage up to 100% of your provisioned storage For example if you have an active cluster with 2 XL nodes for a total of 4 TB of storage AWS provides up to 4 TB of backup storage on Amazon S3 at no additional charge Backup storage beyond the provisioned storage size and backups stored after your cluster is terminated are billed at standard Amazon S3 rates There is no data transfer charge for communication between Amazon S3 and Amazon Redshift For more information see Amazon Redshift pricing ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 31 of 56 Performance Amazon Redshift uses a variety of innovations to obtain very high performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more It uses columnar storage data compression and zone maps to reduce the amount of I/O needed to perform queries Amazon Redshift has a massively parallel processing (MPP) architecture parallelizing and distributing SQL operations to take advantage of all available resources The underlying hardware is designed for high performance data processing using local attached storage to maximize throughput between the CPUs and drives and a 10 GigE mesh network to maximize throughput between nodes Performance can be tuned based on your data warehousing needs: AWS offers Dense Compute (DC) with SSD drives as well as Dense Storage (DS) options Durability and Availability Amazon Redshift automatically detects and replaces a failed node in your data warehouse cluster The data warehouse cluster is read only until a replacement node is provisioned and added to the DB which typically only takes a few minutes Amazon Redshift makes your replacement node available immediately and streams your most frequently accessed data from Amazon S3 first to allow you to resume querying your data as quickly as possible Additionally your data warehouse cluster remains available in the event of a drive failure; because Amazon Redshift mirrors your data across the cluster it uses the data from another node to rebuild failed drives Amazon Redshift clusters reside within one Availability 
Zone but if you wish to have a multi AZ set up for Amazon Redshift you can set up a mirror and then selfmanage replication and failover Scalability and Elasticity With a few clicks in the console or an API call you can easily change the number or type of nodes in your data warehouse as your performance or capacity needs change Amazon Redshift enables you to start with a single 160 GB node and scale up to a petabyte or more of compressed user data using many nodes For more information see Clusters and Nodes in Amazon Redshift in the Amazon Redshift Management Guide ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 32 of 56 While resizing Amazon Redshift places your existing cluster into read only mode provisions a new cluster of your chosen size and then copies data from your old cluster to your new one in parallel During this process you pay only for the active Amazon Redshift cluster You can continue running queries against your old cluster while the new one is being provisioned After your data has been copied to your new cluster Amazon Redshift automatically redirects queries to your new cluster and removes the old cluster Interfaces Amazon Redshift has custom JDBC and ODBC drivers that you can download from the Connect Client tab of the console allowing you to use a wide range of familiar SQL clients You can also use standard PostgreSQL JDBC and ODBC drivers For more information about Amazon Redshift drivers see Amazon Redshift and PostgreSQL There are numerous examples of validated integrations with many popular BI and ETL vendors Loads and unloads are attempted in parallel into each compute node to maximize the rate at which you can ingest data into your data warehouse cluster as well as to and from Amazon S3 and DynamoDB You can easily load streaming data into Amazon Redshift using Amazon Kinesis Data Firehose enabling near realtime analytics with existing business intelligence tools and dashboards you’re already using today Metrics for compute utilization memory utilization storage utilization and read/write traffic to your Amazon Redshift data warehouse cluster are available free of charge via the console or CloudWatch API operations AntiPatterns Amazon Redshift has the following antipatterns: • Small data sets – Amazon Redshift is built for parallel processing across a cluster If your data set is less than a hundred gigabytes you are not going to get all the benefits that Amazon Redshift has to offer and Amazon RDS may be a better solution • Online transaction processing (OLTP) – Amazon Redshift is designed for data warehouse workloads producing extremely fast and inexpensive analytic capabilities If you require a fast transactional system you may want to choose a traditional relational database system built on Amazon RDS or a NoSQL database offering such as DynamoDB ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 33 of 56 • Unstructured data – Data in Amazon Redshift must be structured by a defined schema rather than supporting arbitrary schema structure for each row If your data is unstructured you can perform extract transform and load (ETL) on Amazon EMR to get the data ready for loading into Amazon Redshift • BLOB data – If you plan on storing large binary files (such as digital video images or music) you may want to consider storing the data in Amazon S3 and referencing its location in Amazon Redshift In this scenario Amazon Redshift keeps track of metadata (such as item name size date created owner location and so on) about your 
binary objects but the large objects themselves are stored in Amazon S3 Amazon Elasticsearch Service Amazon Elasticsearch Service (Amazon ES) makes it easy to deploy operate and scale Elasticsearch for log analytics full text search application monitoring and more Amazon ES is a fully manag ed service that delivers Elasticsearch’s easy touse APIs and real time capabilities along with the availability scalability and security required by production workloads The service offers built in integrations with Kibana Logstash and AWS services including Amazon Kinesis Data Firehose AWS Lambda and Amazon CloudWatch so that you can go from raw data to actionable insights quickly It’s easy to get started with Amazon ES You can set up and configure your Amazon ES domain in minutes from the AWS Management Console Amazon ES provisions all the resources for your domain and launches it The service automatically detects and replaces failed Elasticsearch nodes reducing the overhead associated with self managed infrastructure and Elasticsearch software Amazon ES allows you to easily scale your cluster via a single API call or a few clicks in the console With Amazon ES you get direct access to the Elasticsearch open source API so th at code and applications you’re already using with your existing Elasticsearch environments will work seamlessly Ideal Usage Pattern Amazon Elasticsearch Service is ideal for querying and searching large amounts of data Organizations can use Amazon ES to do the following: ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 34 of 56 • Analyze activity logs eg logs for customer facing applications or websites • Analyze CloudWatch logs with Elasticsearch • Analyze product usage data coming from various services and systems • Analyze social media sentiments CRM data and find trends for your brand and products • Analyze data stream updates from other AWS services eg Amazon Kinesis Data Streams and Amazon DynamoDB • Provide customer s a rich search and navigation experience • Usage monitoring for mobile applications Cost Model With Amazon Elasticsearch Service you pay only for what you use There are no minimum fees or upfront commitments You are charged for Amazon ES instance hour s Amazon EBS storage (if you choose this option) and standard data transfer fees You can get started with our free tier which provides free usage of up to 750 hours per month of a single AZ t2microelasticsearch or t2smallelasticsearch instance and 10 GB per month of optional Amazon EBS storage (Magnetic or General Purpose) Amazon ES allows you to add data durability through automated and manual snapshots of your cluster Amazon ES provides storage space for automated snapshots free of charge for ea ch Amazon Elasticsearch domain Automated snapshots are retained for a period of 14 days Manual snapshots are charged according to Amazon S3 storage rates Data transfer for using the snapshots is free of charge For more information see Amazon Elasticsearch Service Pricing Performance Performance of Amazon ES depends on multiple factors including instance type workload index number of shards used read replicas storage ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 35 of 56 configurations –instance storage or EBS storage (general purpose SSD) Indexes are made up of shards of data which can be distributed on different instances in multiple Availability Zones Read replica of the shards are maintained by Amazon ES in a different Availability Zone if zone awareness is checked Amazon ES 
can use either the fast SSD instance storage for stor ing indexes or multiple EBS volumes A search engine makes heavy use of storage devices and making disks faster will result in faster query and search performance Durability and Availability You can configure your Amazon ES domains for high availability by enabling the Zone Awareness option either at domain creation time or by modifying a live domain When Zone Awareness is enabled Amazon ES distributes the instances supporting the domain across two different Availability Zones Then if you enable repli cas in Elasticsearch the instances are automatically distributed in such a way as to deliver cross zone replication You can build data durability for your Amazon ES domain through automated and manual snapshots You can use snapshots to recover your dom ain with preloaded data or to create a new domain with preloaded data Snapshots are stored in Amazon S3 which is a secure durable highly scalable object storage By default Amazon ES automatically creates daily snapshots of each domain In addition y ou can use the Amazon ES snapshot APIs to create additional manual snapshots The manual snapshots are stored in Amazon S3 Manual snapshots can be used for cross region disaster recovery and to provide additional durability Scalability and Elasticity You can add or remove instances and easily modify Amazon EBS volumes to accommodate data growth You can write a few lines of code that will monitor the state of your domain through Amazon CloudWatch metrics and call the Amazon Elasticsearch Service API t o scale your domain up or down based on thresholds you set The service will execute the scaling without any downtime Amazon Elasticsearch Service supports 1 EBS volume (max size of 15 TB) per instance associated with a domain With the default maximum o f 20 data nodes ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 36 of 56 allowed per Amazon ES domain you can allocate about 30 TB of EBS storage to a single domain You can request a service limit increase up to 100 instances per domain by creating a case with the AWS Support Center With 100 instances you can allocate about 150 TB of EBS storage to a single domain Interfaces Amazon ES supports many of the commonly used Elasticsearch APIs so code applications and popular tools that you're already using with your current Elasticsearch environments will work seamlessly For a full list of supported Elasticsearch operations see our documentation The AWS CLI API or the AWS Management Console can be used for creating and managing your domains as well Amazon ES supports integration with several AWS services including streaming data from S3 buckets Amazon Kinesis Data S treams and DynamoDB Streams Both integrations use a Lambda function as an event handler in the cloud that responds to new data in Amazon S3 and Amazon Kinesis Data Streams by processing it and streaming the data to your Amazon ES domain Amazon ES also integrates with Amazon CloudWatch for monitoring Amazon ES domain metrics and CloudTrail for auditing configuration API calls to Amazon ES domains Amazon ES includes built in integration with Kibana an open source analytics and visualization platform and supports integration with Logstash an open source data pipeline that helps you process logs and other event data You can set up your Amazon ES domain as the backend store for all logs coming through your Logstash implementation to easily ingest structured and unstructured data from a variety of sources AntiPatterns • Online 
transaction processing (OLTP) Amazon ES is a real time distributed search and analytics engine There is no support for transactions or processing on data manipulation If your requirement is for a fast transactional system then a traditional relational database ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 37 of 56 system built on Amazon RDS or a NoSQL databa se offering functionality such as DynamoDB is a better choice • Ad hoc data querying – While Amazon ES takes care of the operational overhead of building a highly scalable Elasticsearch cluster if running Ad hoc queries or oneoff queries against your da ta set is your usecase Amazon Athena is a better choice Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL without provisioning servers Amazon QuickSight Amazon QuickSight is a very fast easy touse cloud powered business analytics service that makes it easy for all employees within an organization to build visualizations perform ad hoc analysis and quickly get business insights from their d ata anytime on any device It can connect to a wide variety of data sources including flat files eg CSV and Excel access on premise databases including SQL Server MySQL and PostgreSQL AWS resources like Amazon RDS databases Amazon Redshift Amazo n Athena and Amazon S3 Amazon QuickSight enables organizations to scale their business analytics capabilities to hundreds of thousands of users and delivers fast and responsive query performance by using a robust in memory engine (SPICE) Amazon QuickSig ht is built with "SPICE" – a Super fast Parallel In memory Calculation Engine Built from the ground up for the cloud SPICE uses a combination of columnar storage in memory technologies enabled through the latest hardware innovations and machine code g eneration to run interactive queries on large datasets and get rapid responses SPICE supports rich calculations to help you derive valuable insights from your analysis without worrying about provisioning or managing infrastructure Data in SPICE is persis ted until it is explicitly deleted by the user SPICE also automatically replicates data for high availability and enables Amazon QuickSight to scale to hundreds of thousands of users who can all simultaneously perform fast interactive analysis across a wi de variety of AWS data sources Ideal Usage Patterns Amazon QuickSight is an ideal Business Intelligence tool allowing end users to create visualizations that provide insight into their data to help them make better business decisions Amazon QuickSight can be used to do the following: ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 38 of 56 • Quick interactive ad hoc exploration and optimized visualization of data • Create and share dashboards and KPI’s to provide insight into your data • Create Stories which are guided tours through specific views of an analysis and allow you to share insights and collaborate with others They are used to convey key points a thought process or the evolution of an analysis for collaboration • Analyze and visualize data coming from logs and stored in S3 • Analyze and visual ize data from on premise databases like SQL Server Oracle PostGreSQL and MySQL • Analyze and visualize data in various AWS resources eg Amazon RDS databases Amazon Redshift Amazon Athena and Amazon S3 • Analyze and visualize data in software as a se rvice ( SaaS) applications like Salesforce • Analyze and visualize data in data sources that can be connected 
to using JDBC/ODBC connection Cost Model Amazon QuickS ight has two different editions for pricing; standard edition and enterprise edition For an annual subscription it is $9/user/month for standard edition and $18/user/month for enterprise edition both with 10 GB of SPICE capacity included You can get addition al SPICE capacity for $25/GB/month for standard edition and $38/GB/month for enterprise edition We also have month to month option for both the editions For standard edition it is $12/GB/month and enterprise edition is $24/GB/month Additional informat ion on pricing can be found at Amazon QuickSight Pricing Both editions offer a full set of features for creating and sharing data visualizations Enterprise edition also offers encryption at rest and Microsoft Activ e Directory (AD) integration In Enterprise edition you select a Microsoft AD directory in AWS Directory Service You use that active directory to identify and manage your Amazon QuickSight users and administrators ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 39 of 56 Performance Amazon QuickSight is built with ‘SPICE’ a Super fast Parallel and In memory Calculation Engine Built from the ground up for the cloud SPICE uses a combination of columnar storage in memory technologies enabled through the latest hardware innovations and machine code generation to run interactive queries on large datasets and get rapid responses Durability and Availability SPICE automatically replicates data for high availability and enables Amazon QuickSight to scale to hundreds of thousands of users who can all simultaneously perform fast interactive analysis across a wide variety of AWS data sources Scalability and Elasticity Amazon QuickSight is a fully managed service and it internally takes care of scaling to meet the demands of your end users With Amazon Qui ckSight you don’t need to worry about scale You can seamlessly grow your data from a few hundred megabytes to many terabytes of data without managing any infrastructure Interfaces Amazon QuickSight can connect to a wide variety of data sources including flat files (CSV TSV CLF ELF) connect to on premises databases like SQL Server MySQL and PostgreSQL and AWS data sources including Amazon RDS Amazon Aurora Amazon Redshift Amazon Athena and Amazon S3 and SaaS applications like Salesforce You can also export analyzes from a visual to a file with CSV format You can share an analysis dashboard or story using the share icon from the Amazon QuickSight service interface You will be able to select the recipients (email address username or group name) permission levels and other options before sharing the content with others AntiPatterns • Highly formatted canned Reports – Amazon QuickSight is much more suited for ad hoc query analysis and visualization of da ta For highly formatted reports eg formatted financial statements consider using a different tool • ETL While Amazon QuickSight can perform some transformations it is not a full fledged ETL tool AWS offers AWS Glue which is a fully ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 40 of 56 managed extract t ransform and load (ETL) service that makes it easy for customers to prepare and load their data for analytics Amazon EC2 Amazon EC2 with instances acting as AWS virtual machines provides an ideal platform for operating your own selfmanaged big data analytics applications on AWS infrastructure Almost any software you can install on Linux or Windows virtualized environments can be run on Amazon EC2 and 
you can use the payas yougo pricing model What you don’t get are the application level managed services that come with the other services mentioned in this whitepaper There are many options for selfmanaged big data analytics; here are some examples: • A NoSQL offering such as MongoDB • A data warehouse or columnar store like Vertica • A Hadoop cluster • An Apache Storm cluster • An Apache Kafka environment Ideal Usage Patterns • Specialized Environment – When running a custom application a variation of a standard Hadoop set or an application not covered by one of our other offerings Amazon EC2 provides the flexibility and scalability to meet your computing needs • Compliance Requirements – Certain compliance requirements may require you to run applications yourself on Amazon EC2 instead of using a managed service offering Cost Model Amazon EC2 has a variety of instance types in a number of instance families (standard high CPU high memory high I/O etc) and different pricing options (OnDemand Reserved and Spot) Depending on your application requirements you may want to use additional services along with Amazon EC2 such as Amazon Elastic Block Store (Amazon EBS) for directly attached persistent storage or Amazon S3 as a durable object store; each comes with their own pricing model If you do run your big data application on Amazon EC2 you ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 41 of 56 are responsible for any license fees just as you would be in your own data center The AWS Marketplace offers many different third party big data software packages preconfigured to launch with a simple click of a button Performance Performance in Amazon EC2 is driven by the instance type that you choose for your big data platform Each instance type has a different amount of CPU RAM storage IOPs and networking capability so that you can pick the right performance level for your application requirements Durability and Availability Critical applications should be run in a cluster across multiple Availability Zones within an AWS Region so that any instance or data center failure does not affect application users For non uptime critical applications you can back up your application to Amazon S3 and restore to any Availability Zone in the region if an instance or zone failure occurs Other options exist depending on which application you are running and the requirements such as mirroring your application Scalability and Elasticity Auto Scaling is a service that allows you to automatically scale your Amazon EC2 capacity up or down according to conditions that you define With Auto Scaling you can ensure that the number of EC2 instan ces you’re using scales up seamlessly during demand spikes to maintain performance and scales down automatically during demand lulls to minimize costs Auto Scaling is particularly well suited for applications that experience hourly daily or weekly variability in usage Auto Scaling is enabled by CloudWatch and available at no additional charge beyond CloudWatch fees Interfaces Amazon EC2 can be managed programmatically via API SDK or the console Metrics for compute utilization memory utilization storage utilization network consumption and read/write traffic to your instances are free of charge using the console or CloudWatch API operations The interfaces for your big data analytics software that you run on top of Amazon EC2 varies based on the characteristics of the software you choose ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 42 of 56 
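As a minimal sketch of the programmatic management interface described above, the following Python snippet uses boto3 to pull the CPU utilization metric that CloudWatch publishes for an EC2 instance running self-managed analytics software. The instance ID is a hypothetical placeholder, not a value from this whitepaper, and the snippet only illustrates one of the free instance metrics mentioned above.

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical instance ID; substitute one of your self-managed analytics nodes.
INSTANCE_ID = "i-0123456789abcdef0"

# Pull average CPU utilization for the last hour in 5-minute periods,
# the same data the CloudWatch console graphs expose.
end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "%")
```

A scheduled job built on calls like this is also how Auto Scaling alarms or custom right-sizing decisions are typically driven for self-managed clusters.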
AntiPatterns

Amazon EC2 has the following anti-patterns:

• Managed Service – If your requirement is a managed service offering where you abstract the infrastructure layer and administration from the big data analytics, then this "do it yourself" model of managing your own analytics software on Amazon EC2 may not be the correct choice.

• Lack of Expertise or Resources – If your organization does not have, or does not want to expend, the resources or expertise to install and manage a high-availability installation for the system in question, you should consider using the AWS equivalent, such as Amazon EMR, DynamoDB, Amazon Kinesis Data Streams, or Amazon Redshift.

Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. You don't need to load your data into Athena because it works directly with data stored in S3. Just log in to the Athena console, define your table schema, and start querying. Amazon Athena uses Presto with full ANSI SQL support and works with a variety of standard data formats, including CSV, JSON, ORC, Apache Parquet, and Apache Avro.

Ideal Usage Patterns

• Interactive ad hoc querying for web logs – Athena is a good tool for interactive, one-time SQL queries against data on Amazon S3. For example, you could use Athena to run a query on web and application logs to troubleshoot a performance issue. You simply define a table for your data and start querying using standard SQL. Athena integrates with Amazon QuickSight for easy visualization.

• To query staging data before loading into Redshift – You can stage your raw data in S3 before processing and loading it into Redshift, and then use Athena to query that data.

• Send AWS service logs to S3 for analysis with Athena – CloudTrail, CloudFront, ELB/ALB, and VPC flow logs can be analyzed with Athena. AWS CloudTrail logs include details about any API calls made to your AWS services, including from the console. CloudFront logs can be used to explore users' surfing patterns across web properties served by CloudFront. Querying ELB/ALB logs allows you to see the source of traffic, latency, and bytes transferred to and from Elastic Load Balancing instances and backend applications. VPC flow logs capture information about the IP traffic going to and from network interfaces in VPCs in the Amazon VPC service. The logs allow you to investigate network traffic patterns and identify threats and risks across your VPC estate.

• Building interactive analytical solutions with notebook-based tools, e.g., RStudio, Jupyter, or Zeppelin – Data scientists and analysts are often concerned about managing the infrastructure behind big data platforms while running notebook-based solutions such as RStudio, Jupyter, and Zeppelin. Amazon Athena makes it easy to analyze data using standard SQL without the need to manage infrastructure. Integrating these notebook-based solutions with Amazon Athena gives data scientists a powerful platform for building interactive analytical solutions.

Cost Model

Amazon Athena has simple pay-as-you-go pricing with no upfront costs or minimum fees, and you only pay for the resources you consume. It is priced per query, $5 per TB of data scanned, and charges are based on the amount of data scanned by the query. You can save from 30% to 90% on your per-query costs and get better performance by compressing, partitioning, and converting your data into columnar formats.
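To make the query workflow concrete, the sketch below uses the Athena API through boto3 to submit a SQL statement against data in S3, poll for completion, and print the result rows. The database name, table name, query, and S3 output location are illustrative assumptions rather than values prescribed by this whitepaper; keeping the underlying table partitioned and columnar is what keeps the per-query scan charge low.

```python
import time
import boto3

# Hypothetical names: replace with your own database, table, and results bucket.
DATABASE = "weblogs_db"
QUERY = "SELECT status, COUNT(*) AS requests FROM alb_logs GROUP BY status"
OUTPUT = "s3://my-athena-results-bucket/queries/"

athena = boto3.client("athena")

# Submit the query; Athena bills on data scanned, so partitioned,
# columnar tables make this cheaper and faster.
qid = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": OUTPUT},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```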
Converting data to the columnar format allows Athena to read only the columns it needs to process the query. You are charged for the number of bytes scanned by Amazon Athena, rounded up to the nearest megabyte, with a 10 MB minimum per query. There are no charges for Data Definition Language (DDL) statements like CREATE/ALTER/DROP TABLE, statements for managing partitions, or failed queries. Cancelled queries are charged based on the amount of data scanned.

Performance

You can improve the performance of your query by compressing, partitioning, and converting your data into columnar formats. Amazon Athena supports open source columnar data formats such as Apache Parquet and Apache ORC. Converting your data into a compressed, columnar format lowers your cost and improves query performance by enabling Athena to scan less data from S3 when executing your query.

Durability and Availability

Amazon Athena is highly available and executes queries using compute resources across multiple facilities, automatically routing queries appropriately if a particular facility is unreachable. Athena uses Amazon S3 as its underlying data store, making your data highly available and durable. Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Your data is redundantly stored across multiple facilities and multiple devices in each facility.

Scalability and Elasticity

Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. Because it is serverless, it can scale automatically as needed.

Security, Authorization, and Encryption

Amazon Athena allows you to control access to your data by using AWS Identity and Access Management (IAM) policies, Access Control Lists (ACLs), and Amazon S3 bucket policies. With IAM policies, you can grant IAM users fine-grained control over your S3 buckets. By controlling access to data in S3, you can restrict users from querying it using Athena. You can query data that has been protected by:

• Server-side encryption with an Amazon S3-managed key
• Server-side encryption with an AWS KMS-managed key
• Client-side encryption with an AWS KMS-managed key

Amazon Athena can also integrate directly with AWS Key Management Service (KMS) to encrypt your result sets if desired.

Interfaces

Querying can be done by using the Athena console. Athena also supports the CLI, API via SDK, and JDBC. Athena also integrates with Amazon QuickSight for creating visualizations based on Athena queries.

AntiPatterns

Amazon Athena has the following anti-patterns:

• Enterprise Reporting and Business Intelligence Workloads – Amazon Redshift is a better tool for enterprise reporting and business intelligence workloads involving iceberg queries or cached data at the nodes. Data warehouses pull data from many sources, format and organize it, store it, and support complex, high-speed queries that produce business reports. The query engine in Amazon Redshift has been optimized to perform especially well on data warehouse workloads.

• ETL Workloads – You should use Amazon EMR or AWS Glue if you are looking for an ETL tool to process extremely large datasets and analyze them with the latest big data processing frameworks such as Spark, Hadoop, Presto, or HBase.

• RDBMS – Athena is not a relational/transactional database. It is not meant to be a
replacement for SQL engines like M ySQL Solving Big Data Problems on AWS In this whitepaper we have examined some tools available on AWS for big data analytics This paper provides a good reference point when starting to design your big data applications However there are additional aspects you should consider when selecting the right tools for your specific use case In general each analytical workload has certain characteristics and requirements that dictate which tool to use such as: • How quickly do you need analytic results : in real time in seconds or is an hour a more appropriate time frame? • How much value will these analytics provide your organization and what budget constraints exist? • How large is the data and what is its growth rate? • How is the data structured? ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 46 of 56 • What integration capabilities do the producers and consumers have? • How much latency is acceptable between the producers and consumers? • What is the cost of downtime or how available and durable does the solution need to be? • Is the analytic workload consistent or elastic? Each one of these questions helps guide you to the right tool In some cases you can simply map your big data analytics workload into one of the services based on a set of requirements However in most realworld big data analytic workloads there are many different and sometimes conflicting characteristics and requirements on the same data set For example some result sets may have realtime requirements as a user interacts with a system while other analytics could be batched and run on a daily basis These different requirements over the same data set should be decoupled and solved by using more than one tool If you try to solve both of these examples using the same toolset you end up either over provisioning or therefore overpaying for unnecessary response time or you have a solution that does not respond fast enough to your users in real time Matching the best suited tool to each analytical problem results in the most cost effective use of your compute and storage resources Big data doesn’t need to mean “big costs” So when designing your applications it’s important to make sure that your design is cost efficient If it’s not relative to the alternatives then it’s probably not the right design Another common misconception is that using multiple tool sets to solve a big data problem is more expensive or harder to manage than using one big tool If you take the same example of two different requirements on the same data set the realtime request may be low on CPU but high on I/O while the slower processing request may be very compute intensive Decoupling can end up being much less expensive and easier to manage because you can build each tool to exact specification s and not overprovision With the AWS payasyougo model this equates to a much better value because you could run the batch analytics in just one hour and therefore only pay for the compute resources for that hour Also you may find this approach easier to manage rather than leveraging a single system that tries to meet all of the requirements Solving for different requirements with one tool results in ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 47 of 56 attempting to fit a square peg (real time requests) into a round hole (a large data warehouse) The AWS platform makes it easy to decouple your architecture by having different tools analyze the same data set AWS services have built in 
integration, so that moving a subset of data from one tool to another can be done very easily and quickly using parallelization. Let's put this into practice by exploring a few real-world big data analytics problem scenarios and walking through an AWS architectural solution.

Example 1: Queries against an Amazon S3 Data Lake

Data lakes are an increasingly popular way to store and analyze both structured and unstructured data. If you use an Amazon S3 data lake, AWS Glue can make all your data immediately available for analytics without moving the data. AWS Glue crawlers can scan your data lake and keep the AWS Glue Data Catalog in sync with the underlying data. You can then directly query your data lake with Amazon Athena and Amazon Redshift Spectrum. You can also use the AWS Glue Data Catalog as your external Apache Hive Metastore for big data applications running on Amazon EMR.

1. An AWS Glue crawler connects to a data store, progresses through a prioritized list of classifiers to extract the schema of your data and other statistics, and then populates the AWS Glue Data Catalog with this metadata. Crawlers can run periodically to detect the availability of new data as well as changes to existing data, including table definition changes. Crawlers automatically add new tables, new partitions to existing tables, and new versions of table definitions. You can customize AWS Glue crawlers to classify your own file types.

2. The AWS Glue Data Catalog is a central repository to store structural and operational metadata for all your data assets. For a given data set, you can store its table definition and physical location, add business-relevant attributes, and track how this data has changed over time. The AWS Glue Data Catalog is Apache Hive Metastore compatible and is a drop-in replacement for the Apache Hive Metastore for big data applications running on Amazon EMR. For more information on setting up your EMR cluster to use the AWS Glue Data Catalog as an Apache Hive Metastore, see the Amazon EMR documentation.

3. The AWS Glue Data Catalog also provides out-of-box integration with Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. Once you add your table definitions to the AWS Glue Data Catalog, they are available for ETL and also readily available for querying in Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum, so that you can have a common view of your data between these services.

4. Using a BI tool like Amazon QuickSight enables you to easily build visualizations, perform ad hoc analysis, and quickly get business insights from your data. Amazon QuickSight supports data sources like Amazon Athena, Amazon Redshift Spectrum, Amazon S3, and many others; see Supported Data Sources.
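As a concrete illustration of step 1, the following sketch uses boto3 to create and start a Glue crawler over an S3 path so the Data Catalog stays in sync with the data lake. The crawler name, IAM role, catalog database, S3 path, and schedule are hypothetical placeholders, not values prescribed by this whitepaper.

```python
import boto3

glue = boto3.client("glue")

# Hypothetical names: substitute your own role, catalog database, and S3 path.
CRAWLER_NAME = "sales-data-lake-crawler"
CRAWLER_ROLE = "arn:aws:iam::123456789012:role/GlueCrawlerRole"
CATALOG_DB = "datalake_db"
S3_PATH = "s3://my-data-lake/sales/"

# Create a crawler that scans the S3 path, infers schemas, and writes
# table definitions into the AWS Glue Data Catalog on a daily schedule.
glue.create_crawler(
    Name=CRAWLER_NAME,
    Role=CRAWLER_ROLE,
    DatabaseName=CATALOG_DB,
    Targets={"S3Targets": [{"Path": S3_PATH}]},
    Schedule="cron(0 2 * * ? *)",  # run daily at 02:00 UTC
)

# Run it immediately so new tables and partitions become queryable in
# Athena, EMR, and Redshift Spectrum without moving the underlying data.
glue.start_crawler(Name=CRAWLER_NAME)
```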
Example 2: Capturing and Analyzing Sensor Data

An international air conditioner manufacturer has many large air conditioners that it sells to various commercial and industrial companies. Not only do they sell the air conditioner units, but to better position themselves against their competitors, they also offer add-on services where you can see real-time dashboards in a mobile app or a web browser. Each unit sends its sensor information for processing and analysis. This data is used by the manufacturer and its customers. With this capability the manufacturer can visualize the dataset and spot trends.

Currently they have a few thousand pre-purchased air conditioning (A/C) units with this capability. They expect to deliver these to customers in the next couple of months and are hoping that, in time, thousands of units throughout the world will be using this platform. If successful, they would like to expand this offering to their consumer line as well, with a much larger volume and a greater market share. The solution needs to be able to handle massive amounts of data and scale as they grow their business without interruption.

How should you design such a system? First, break it up into two work streams, both originating from the same data:

• A/C unit's current information, with near-real-time requirements and a large number of customers consuming this information

• All historical information on the A/C units, to run trending and analytics for internal use

The data flow architecture in the following illustration shows how to solve this big data problem.

Figure: Capturing and Analyzing Sensor Data

1. The process begins with each A/C unit providing a constant data stream to Amazon Kinesis Data Streams. This provides an elastic and durable interface the units can talk to that can be scaled seamlessly as more and more A/C units are sold and brought online.

2. Using the Amazon Kinesis Data Streams provided tools, such as the Kinesis Client Library or SDK, a simple application is built on Amazon EC2 to read data as it comes into Amazon Kinesis Data Streams, analyze it, and determine if the data warrants an update to the real-time dashboard. It looks for changes in system operation, temperature fluctuations, and any errors that the units encounter.

3. This data flow needs to occur in near real time so that customers and maintenance teams can be alerted as quickly as possible if there is an issue with the unit. The data in the dashboard does have some aggregated trend information, but it is mainly the current state as well as any system errors, so the data needed to populate the dashboard is relatively small. Additionally, there will be lots of potential access to this data from the following sources:

o Customers checking on their system via a mobile device or browser

o Maintenance teams checking the status of its fleet

o Data and intelligence algorithms and analytics in the reporting platform spot trends that can then be sent out as alerts, such as if the A/C fan has been running unusually long with the building temperature not going down

DynamoDB was chosen to store this near-real-time data set because it is both highly available and scalable; throughput to this data can be easily scaled up or down to meet the needs of its consumers as the platform is adopted and usage grows.

4. The reporting dashboard is a custom web application that is built on top of this data set and run on Amazon EC2. It provides content based on the system status and trends, as well as alerting customers and maintenance crews of any issues that may come up with the unit.

5. The customer accesses the data from a mobile device or a web browser to get the current status of the system and visualize historical trends.

The data flow (steps 2–5) that was just described is built for near-real-time reporting of information to human consumers. It is built and designed for low latency and can scale very quickly to meet demand.
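To make step 1 of this flow concrete, the sketch below shows how a unit (or a gateway in front of it) might publish a sensor reading to Kinesis Data Streams using boto3. The stream name, record fields, and partition-key choice are illustrative assumptions rather than part of the reference architecture; partitioning by unit ID simply keeps each unit's readings ordered within a shard.

```python
import json
import time
import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream and payload; a real deployment would define its own schema.
STREAM_NAME = "ac-sensor-stream"

reading = {
    "unit_id": "AC-000123",
    "timestamp": int(time.time()),
    "temperature_c": 21.4,
    "fan_runtime_min": 47,
    "error_code": None,
}

# Publish one reading; the consumer application on EC2 reads these records,
# decides whether the dashboard needs an update, and writes state to DynamoDB.
kinesis.put_record(
    StreamName=STREAM_NAME,
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["unit_id"],
)
```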
The data flow (steps 6–9) that is depicted in the lower part of the diagram does not have such stringent speed and latency requirements. This allows the architect to design a different solution stack that can hold larger amounts of data at a much smaller cost per byte of information and choose less expensive compute and storage resources.

6. To read from the Amazon Kinesis stream, there is a separate Kinesis-enabled application that probably runs on a smaller EC2 instance that scales at a slower rate. While this application is going to analyze the same data set as the upper data flow, the ultimate purpose of this data is to store it for long-term record and to host the data set in a data warehouse. This data set ends up being all data sent from the systems and allows a much broader set of analytics to be performed without the near-real-time requirements.

7. The data is transformed by the Kinesis-enabled application into a format that is suitable for long-term storage, for loading into its data warehouse, and for storing on Amazon S3. The data on Amazon S3 not only serves as a parallel ingestion point to Amazon Redshift, but is durable storage that will hold all data that ever runs through this system; it can be the single source of truth. It can be used to load other analytical tools if additional requirements arise. Amazon S3 also comes with native integration with Amazon Glacier if any data needs to be cycled into long-term, low-cost storage.

8. Amazon Redshift is again used as the data warehouse for the larger data set. It can scale easily when the data set grows larger by adding another node to the cluster.

9. For visualizing the analytics, one of the many partner visualization platforms can be used via the ODBC/JDBC connection to Amazon Redshift. This is where the reports, graphs, and ad hoc analytics can be performed on the data set to find certain variables and trends that can lead to A/C units underperforming or breaking.

This architecture can start off small and grow as needed. Additionally, by decoupling the two different work streams from each other, they can grow at their own rate without upfront commitment, allowing the manufacturer to assess the viability of this new offering without a large initial investment. You could easily imagine further additions, such as adding Amazon ML to predict how long an A/C unit will last and preemptively sending out maintenance teams based on its prediction algorithms, to give their customers the best possible service and experience. This level of service would be a differentiator to the competition and lead to increased future sales.

Example 3: Sentiment Analysis of Social Media

A large toy maker has been growing very quickly and expanding their product line. After each new toy release, the company wants to understand how consumers are enjoying and using their products. Additionally, the company wants to ensure that their consumers are having a good experience with their products. As the toy ecosystem grows, the company wants to ensure that their products are still relevant to their customers and that they can plan future roadmap items based on customer feedback. The company wants to capture the following insights from social media:

• Understand how consumers are using their products
• Ensure customer satisfaction
• Plan future roadmaps

Capturing the data from various social networks is relatively easy, but the challenge is building the intelligence programmatically. After the data is ingested, the company wants to be able to analyze and classify the data in a cost-effective and programmatic way. To do this, you can use the architecture in the following illustration.
Analytics Options on AWS Page 53 of 56 Sentiment Analysis of Social Media The first step is to decide which social media sites to listen to Then create an application on Amazon EC2 that polls those sites using their corresponding APIs Next create an Amazon Kinesis stream because we might have multiple data sources: Twitter Tumblr and so on This way a new stream can be created each time a new data source is added and you can take advantage of the existing application code and architecture In this example a new Amazon Kinesis stream is created to copy the raw data to Amazon S3 as well For archival long term analysis and historical reference raw data is stored into Amazon S3 Additional Amazon ML batch models can be run on the data in Amazon S3 to perform predictive analysis and track consumer buying trends As noted in the architecture diagram Lambda is used for processing and normalizing the data and requesting predictions from Amazon ML After the Amazon ML prediction is returned the Lambda function can take action s based on the prediction – for example to route a social media post to the customer service team for further review Amazon ML is used to make predictions on the input data For example an ML model can be built to analyze a social media comment to determine whether the customer expressed negative sentiment about a product To get accurate predictions with Amazon ML start with training data and ensure that your ML models are working properly If you are creating ML models for the first time see Tutorial: Using Amazon ML to Predict Responses to a Marketing Offer As ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 54 of 56 mentioned earlier if multiple social network data sources are used then a different ML model for each one is suggested to ensure prediction accuracy Finally actionable data is sent to Amazon SNS using Lambda and delivered to the proper resources by text message or email for further investigation As part of the sentiment analysis creating an Amazon ML model that is updated regularly is imperative for accurate results Additional metrics about a specific model can be graphically displayed via the console such as: accuracy false positive rate precision and recall For more information see Step 4: Review the ML Model Predictive Performance and Set a CutOff By using a combination of Amazon Kinesis Data Streams Lambda Amazon ML and Amazon SES we have create d a scalable and easily customizable social listening platform Note that this scenario does not describe creating an Amazon ML model You would create the model initially and then need to update it periodically or as workloads change to keep it accurate Conclusion As more and more data is generated and collected data analysis requires scalable flexible and high performing tools to provide insights in a timely fashion However organizations are facing a growing big data ecosystem where new tools emerge and “die” very quickly Therefore it can be very difficult to keep pace and choose the right tools This whitepaper offers a first step to help you solve this challenge With a broad set of managed services to collect process and analyze big data the AWS platform makes it easier to build deploy and scale big data applications This allow s you to focus on business problems instead of updating and managing these tools AWS provides many solutions to address your big data analytic requirements Most big data architecture solutions use multiple AWS tools to build a complete solution This approach help s meet stringent 
business requirements in the most costoptimized performant and resilient way possible The result is a flexible big data architecture that is able to scale along with your business ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 55 of 56 Contributors The following individuals and organizations contributed to this document: • Erik Swensson Manager Solutions Architecture Amazon Web Services • Erick Dame Solutions Architect Amazon Web Services • Shree Kenghe S olutions Architect Amazon Web Services Further Reading The following resources can help you get started in running big data analytics on AWS: • Big Data on AWS View the comprehensive portfolio of big data services as well as links to other resources such AWS big data partners tutorials articles and AWS Marketplace offerings on big data solutions Contact us if you need any help • Read the AWS Big Data Blog The blog features real life examples and ideas updated regularly to help you collect store clean process and visualize big data • Try one of the Big Data Test Drives Explore the rich ecosystem of products designed to address big data challenges using AWS Test Drives are developed by AWS Partner Network (APN) Consulting and Technology partners and are provided free of charge for education demonstration and evaluation purposes • Take an AWS training course on big data The Big Data on AWS course introduces you to cloud based big data solutions and Amazon EMR We show you how to use Amazon EMR to process data using the broad ecosystem of Hadoop tools like Pig and Hive We also teach you how to create big data environments work with DynamoDB and Amazon Redshift understand the benefits of Amazon Kinesis Streams and leverage best practices to design big data environments for security and costeffectiveness ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 56 of 56 • View the Big Data Customer Case Studies Learn from the experience of other customers who have built powerful and successful big data platforms on the AWS cloud Document Revisions Date Description December 2018 Revised to add information on Amazon Athena AWS QuickSight AWS Glue and general update s throughout January 2016 Revised to add information on Amazon Machine Learning AWS Lambda Amazon Elasti csearch Service; general update December 2014 First publication
General
Best_Practices_for_Deploying_Alteryx_Server_on_AWS
ArchivedBest Practices for Deploying Alteryx Server on AWS August 2019 This paper has been archived For the latest technical guidance on the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/Archived Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved Archived Contents Introduction 1 Alteryx Server 1 Designer 1 Scheduler 1 Controller 2 Worker 3 Database 3 Gallery 3 Options for Deploying Alte ryx Server on AWS 4 Enterprise Deployment 5 Deploy Alteryx Server with Chef 8 Deploy a Windows Server EC2 instance and install Alteryx Server 8 Deploy an Amazon EC2 Instance from the Alteryx Server AMI 8 Sizing and Scaling Alteryx Server on AWS 10 Performance Consider ations 10 Availability Considerations 14 Management Considerations 15 Sizing and Scaling Summary 15 Operations 17 Backup and Restore 17 Monitoring 17 Network and Security 18 Connecting On Premises Resources to Amazon VPC 18 Security Groups 20 Network Access Con trol Lists (NACLs) 20 Bastion Host (Jump Box) 20 Archived Secure Sockets Layer (SSL) 21 Best Practices 21 Deployment 21 Scaling and Availability 22 Network and Security 22 Performance 23 Conclusion 23 Contributors 23 Further Reading 24 Document Revisions 25 Archived Abstract Alteryx Server is a scalable server based analytics solution that helps you create publish and share analytic applications schedule and automate workflow jobs create manage and share data connec tions and control data access This whitepaper discusse s how to run Alteryx Server on AWS and provides an overview of the AWS services that relate to Alteryx Server It also includes i nformation on common architecture patterns and deployment of Alteryx Server on AWS The paper is intended for information techn ology professionals who are new to Alteryx products and are considering deploying Alteryx Server on AWSArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 1 Introduction Alteryx Server provides a scalable platform that helps create analytical insights and empowers analysts and business users across your org anization to make better data driven decisions Alteryx Server provides: • Data blending • Predictive analytics • Interactive visualizations • An easy touse drag anddrop interface • Support for a wide variety of data sources • Data governance and security • Sharing an d collaboration Alteryx Server is an end toend analytics platform for the enterprise used by thousands of customers around the world For details on how customers have successfully used Alteryx on AWS see the Alteryx + AWS Customer Success Stories Alteryx Server Alteryx Server consists of six main components : Designer Scheduler Controller Worker Database and Gallery Each component is discussed in the following sections Designer The Designer is a Windows software application that 
lets you create repeatable workflow processe s Designer is installed by de fault on the same instance as the Controller You can use o ther installations of the Designer (for example on your workstation) and connect it to the C ontroller using the controller tok en Scheduler The Scheduler lets you schedule the execution of workflows or analytic applications developed within the Designer ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 2 Controller The Controller orchestrates workflow execution s manages the service settings and delegates work to the Workers The Controller also supports the Gallery and handles APIs for remote integration T he Controller has t hree key parts : authentication controller token and database drivers which are described as follows Authentication Alteryx Server supports local authentication Microsoft Active Directory (Microsoft AD) authentication and SAML 20 authentication For short term trial or proof ofconcept deployments local authentication is a reasonable option However in most deployments we recommend that you use Microsoft AD or SAML 20 to connect your user directory Note: Changing authentication methods requires that you reinstall the Control ler For deployments of Alteryx Server on AWS where you have chosen Microsoft AD consider using AWS Directory Services AWS Directory Services enables Alteryx Server to use a fully managed instance of Microsoft AD in the AWS Cloud AWS Microsoft AD is bui lt on Microsoft AD and does not require you to synchronize or replicate data from your existing Active Directory to the cloud (although this remains an option for later integration as your deployment evolves over time ) For more information on this option see AWS Directory Service Controller Token The controller token connects the Controller to Workers and D esigner clients to schedule and run workflows from other Designer components The token is automatically generated when you install Alteryx Server The controller token is unique to your server instance and administrators must safeguard it You only need to regenerate the token if it is compromised If you regenerate the token all the W orker s and Gallery components must be updated with the new token Drivers Alteryx Server communicates with numerous supported data sources including databases such as Amazon Aurora and Amazon Redshift and object stores such as ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 3 Amazon S imple Storage Service (Ama zon S 3) For a complete list of supported sources see Data Sources on the Alteryx Technical Specifications page Successfully connecting to most data sources is a simple process because the Controller has a network path to the database and proper credentials to access the database with the appropriate permissions For help with troubleshooting database connections see the Alteryx Community and Alteryx Support pages Each database requires you to install the appropriate driver When using Alteryx Server be sure to configure each required database driver on the server machine with the same version that is used for Designer clients If a Designer client and the Alteryx Server do not have the same driver the scheduled workflow may not complete properly Worker The Worker executes workflows or analytic applications sent to the Controller The same instance that runs the Controller can run the Worker This setup is common in smaller scale deployments You can configure s eparate instances to run as Workers for scaling and 
performance purposes You must configure a t least one instan ce as a Worker —the total number of Workers you need is dependent on performance considerations Database The persistence tier store s information that is critical to the functioning of the Controller such as A lteryx application files the job queue gallery information and result data Alteryx Server supports two different databases for persistence: MongoDB and SQLite Most deploymen ts use MongoDB which can be deployed as an embedded database or as a user managed database Consider using MongoDB if you need a scalable or highly available architecture Note that m ost scalable deployments use a user managed MongoDB database Consider u sing SQLite if you do not need to use Gallery and your deployment is limited to scheduling workloads Gallery The Gallery is a web based application for sharing workflows and outputs The Gallery can be run on the Alteryx Server machine Alternatively multiple Gallery machines can be configured behind an Elastic Load Balanc ing (ELB) load balanc er to handle the Gallery services at scale ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 4 Options for Deploying Alteryx Server on AWS Alteryx Server is contained as a Microsoft Windows Service It can run easily on most Microsoft Windows Server operating systems Note: In order to install Alteryx Server on AWS you will need an AWS account and an Alteryx Server license key If you do not have a license key trial options for Alteryx Server on AW S are available through AWS Marketplace You can install the Alteryx Server components into a multi node cluster to create a scalable enterprise deployment of Alteryx Server: Figure 1: Scalable enterprise deployment of Alteryx Server Alternatively you can install Alteryx Server in one self contained EC2 instance: ArchivedAmazon Web Services Best Practices for Deploying Alte ryx Server on AWS Page 5 Figure 2: Deployment of Alteryx Server on a single EC2 instance The following sections discuss how to deploy Alteryx Server on AWS from the most complex deployment to the simplest deployment Enterprise Deployment The following architecture diagram shows a solution for a scalable enterprise deployment of Alteryx Server on AWS ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 6 Figure 3: Alteryx Server architecture on AWS The following high level steps explain how to create a scal able enterprise deployment of Alteryx Server on AWS: Note: To deploy Alteryx Server on AWS you will need the controller token to connect the Controller to Workers and Designer clients the IP or DNS information of the Controller for connection and failover if needed and the usermanaged MongoDB connection information 1 Create an Amazon Virtual Private Cloud ( VPC) or use an existing VPC with a minimum of two Availability Zones (called A vailability Zone A and Availability Zone B) ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 7 2 Deploy a Controller instance in Availability Zone A Document the controller key and connection information for later steps Note: It’s possible to use an Elastic IP address to connect remote clients and users to the Controller but we recommend that you use AWS Direct Connect or A WS Managed VPN for more complex long running deployments VPC p eering connection options and Direct Connect can enable private connectivity to the Controller instance as well as a predictable cost effective network path back to on premises data sources 
3. Create a MongoDB replica set with at least three instances. Place each instance in a different Availability Zone. Document the connection information for the next step.

4. Connect the MongoDB cluster to the Controller instance by providing the MongoDB connection information in the Alteryx System Settings on the Controller.

5. Deploy and connect a Worker instance in Availability Zone A to the Controller instance in the Availability Zone A subnet.

6. Deploy and connect a Worker instance in Availability Zone B to the Controller instance in the Availability Zone A subnet.

7. Deploy and connect more Workers as needed to support your desired level of workflow concurrency. You can have more than one Worker in each Availability Zone, but be aware that each Availability Zone represents a fault domain. You should also consider the performance implications of losing access to Workers deployed in a particular Availability Zone.

8. Create an ELB load balancer to handle requests to the Gallery instances.

9. Deploy Gallery instances and register them with the ELB load balancer. Be sure to deploy your Gallery instances in multiple Availability Zones.

10. Connect the Gallery instances to the Controller instance.

11. Connect the client Designer installations to the Controller instance using either the Elastic IP address or the optional private IP (chosen in Step 2), then test workflows and publishing to the Gallery.

12. (Optional) Deploy a cold/warm standby Controller instance in another Availability Zone or AWS Region. Failover is controlled by changing the Elastic IP address (if deployed in the same VPC) or the DNS name to point to this Controller instance, as shown in the sketch that follows this list.
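To make step 12 concrete, the following is a minimal Python (boto3) sketch of an Elastic IP failover. The allocation ID, instance ID, and Region are hypothetical placeholders, not values from this paper; in production you would typically trigger this from a health check (for example, a CloudWatch alarm action) rather than run it by hand.

    import boto3

    # Hypothetical identifiers -- replace with the values from your deployment.
    EIP_ALLOCATION_ID = "eipalloc-0123456789abcdef0"   # Elastic IP used by Designer clients
    STANDBY_CONTROLLER_ID = "i-0fedcba9876543210"      # warm standby Controller instance

    def fail_over_controller():
        """Re-point the Controller's Elastic IP at the standby instance."""
        ec2 = boto3.client("ec2", region_name="us-east-1")
        ec2.associate_address(
            AllocationId=EIP_ALLOCATION_ID,
            InstanceId=STANDBY_CONTROLLER_ID,
            AllowReassociation=True,  # detach the address from the failed primary
        )

    if __name__ == "__main__":
        fail_over_controller()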
Deploy Alteryx Server with Chef

You can use AWS OpsWorks with Chef cookbooks and recipes to deploy Alteryx Server. For Alteryx Chef resources, see cookbook-alteryx-server on GitHub.

Deploy a Windows Server EC2 Instance and Install Alteryx Server

You can deploy an Amazon Elastic Compute Cloud (Amazon EC2) instance running Windows Server and then install Alteryx Server. You can download the install package here. Make sure that you deploy an instance with the recommended compute size (at least 8 vCPUs), Windows operating system (Microsoft Windows Server 2008 R2 or later), and available Amazon Elastic Block Store (Amazon EBS) storage (1 TB).

Deploy an Amazon EC2 Instance from the Alteryx Server AMI

You can purchase an Amazon Machine Image (AMI) from Alteryx through AWS Marketplace and use it to launch an Amazon EC2 instance running Alteryx Server. You can find the Alteryx Server offering on AWS Marketplace.

Note: You can try one instance of the product for 14 days. Please remember to turn your instance off once your trial is complete to avoid incurring charges.

You have two options for launching your Amazon EC2 instance: you can launch an instance using the Amazon EC2 launch wizard in the Amazon EC2 console, or by selecting the Alteryx Server AMI in the launch wizard. Note that the fastest way to deploy Alteryx Server on AWS is to launch an Amazon EC2 instance using the Marketplace website.

To launch Alteryx Server using the Marketplace website:

1. Navigate to AWS Marketplace.

2. Select Alteryx Server, then select Continue to Subscribe.

3. Once subscribed, select Continue to Configuration.

4. Review the configuration settings, choose a nearby Region, then select Continue to Launch.

5. Once you have configured the options on the page as desired, select Launch.

6. Go to the Amazon EC2 console to view the startup of the instance.

7. It can be helpful to note the Instance ID for later reference. You can give the instance a friendly name to find it more easily and to let others know what the instance is for. Click inside the Name field and enter the desired name.

8. Navigate to the instance's Public IP address or Public DNS name in your browser. Enter your email address and take note of the token at the bottom.

9. Your token will be specific to your instance. If you selected the Bring Your Own License image, a similar registration will appear and prompt you for license information.

10. After selecting your server instance and clicking Connect, you will be guided through using Remote Desktop Protocol (RDP) to connect to the Controller instance of Alteryx.

11. Once connected, you can use your AWS instance running Alteryx Server. The desktop contains links to the Designer and the Server System Settings.

12. Start using Alteryx Server. See the Alteryx Community for more information on how to use Alteryx Server and Designer.

Sizing and Scaling Alteryx Server on AWS

When sizing and scaling your Alteryx Server deployment, consider performance, availability, and management.

Performance Considerations

This section covers options and best practices for improving the performance of your Alteryx Server workflows.

Scaling Up vs. Scaling Out

You can usually increase performance by scaling your Workers up or out. To scale up, you relaunch Workers using a larger instance type with more vCPUs or memory, or configure faster storage. When scaling up, you should increase the size of all Workers, because the Controller does not schedule on specific Worker instances by priority and will not assign work to the machine with the most resources. To scale out, you configure additional instances. Both options typically take only a few minutes. The following two scenarios illustrate scaling up and scaling out:

Long job queues – If you expect that a high number of jobs will be scheduled, or if you observe that the job queue length exceeds defined limits, scale out to make sure you have enough instances to meet demand. Scale up if you already have a very large number of small nodes.

Long-running jobs or large workflows – Larger instances, specifically instance types with more RAM, are best suited for long-running workloads. If you find that you have long-running jobs, first examine the query logic, the load on the data source, and the network path, and adjust if necessary. If the jobs are otherwise well tuned, consider scaling up.

This table presents heuristics that can help you determine the number of Workers you need to execute workloads with different run times.

Number of Users | 5-Second Workload | 30-Second Workload | 1-Minute Workload | 2+-Minute Workload
1-20            | 1                 | 1                  | 2                 | 3
20-40           | 1                 | 2                  | 3                 | 4
40-100          | 2                 | 3                  | 4                 | 5
100+            | 3                 | 4                  | 5                 | 6

Table 1: Number of Worker instances needed to execute workloads with different run times

Consider having your users run some of their frequently requested workflows on a test instance of Alteryx Server of your planned instance size. You can quickly deploy a test instance using the Alteryx Server AMI. These tests will help you understand the number of jobs and workflow sizes that your instance size can handle. To predict workflow sizes, review your current and planned Designer workflows.
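As a convenience, here is a small, self-contained Python sketch of the Table 1 heuristic above. The bands and values simply transcribe the table; treat them as starting points to validate with your own benchmark tests, not as Alteryx-published sizing logic, and note that the function name is our own.

    def suggested_worker_count(num_users: int, workload_seconds: float) -> int:
        """Look up the Table 1 heuristic for the number of Worker instances."""
        # Rows: user-count bands; columns: typical workflow run time.
        bands = [(20, [1, 1, 2, 3]),     # 1-20 users
                 (40, [1, 2, 3, 4]),     # 20-40 users
                 (100, [2, 3, 4, 5])]    # 40-100 users
        over_100 = [3, 4, 5, 6]          # 100+ users

        if workload_seconds <= 5:
            col = 0
        elif workload_seconds <= 30:
            col = 1
        elif workload_seconds <= 60:
            col = 2
        else:
            col = 3

        for upper, row in bands:
            if num_users <= upper:
                return row[col]
        return over_100[col]

    # Example: 50 users running ~1-minute workflows -> 4 Workers.
    print(suggested_worker_count(50, 60))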
In Alteryx benchmark testing, the engine running in Alteryx Designer performed nearly the same as in Alteryx Server when running on similar instance types (see Alteryx Analytics Benchmarking Results). Keep this in mind when determining how long workloads will take to run: you can test workload times without installing Alteryx Server by using the Designer on hardware that is similar to what you would use to deploy Alteryx Server.

Scaling Based on Demand

Many customers find they need to add more Workers at predictable times. For peak usage times, you can launch new Worker instances from the Alteryx Server AMI and pay for them using the pay-as-you-go option. With this model, you pay only for the instances you need, for as long as you use them. This is common for seasonal, end-of-month, end-of-quarter, or end-of-year workloads. You can use an Amazon EC2 Auto Scaling group, with a script to insert the controller token into these new instances, to scale additional Worker instances on demand with minimal or no post-launch configuration. Additionally, you can integrate Amazon EC2 Auto Scaling with Amazon CloudWatch to scale automatically based on custom metrics, such as the number of jobs queued. Scaling Alteryx Server to more instances will have licensing implications because it is licensed by cores.

Figure 4: Use Amazon EC2 Auto Scaling and Amazon CloudWatch to scale Worker instances on demand

You can perform additional scheduled scaling actions with Amazon EC2 Auto Scaling. For example, you can configure an Amazon EC2 Auto Scaling group to spin up instances at the start of business hours and turn them off automatically at the end of the day. This allows Alteryx Server to reduce compute costs while meeting business analytic requirements.

Worker Performance

Workers have several configuration settings. The two settings that are the most important for optimizing workflow performance are simultaneous workflows and max sort/join memory.

Simultaneous workflows – You have the best starting point for simultaneous workflows when 2 vCPUs are available for each workflow. For example, if an instance has 8 vCPUs, then we recommend that you enable 4 workflows to run simultaneously. This setting is labeled "Workflows allowed to run simultaneously" in the Worker configuration interface. You can adjust this setting as a way to tune performance.

Note: 2 vCPUs = 1 workflow running simultaneously.

Max sort/join memory usage – This setting manages the memory available to workflows that are more RAM intensive. The best practice is to take the total memory available to the machine and subtract a suggested 4 GB of memory for OS processes, then divide that number by the number of simultaneous workflows assigned:

Max Sort/Join Memory Usage = (Total Memory − 4 GB Suggested Operating System Memory) / Number of Simultaneous Workflows

For example, for a Worker configured with 32 GB of memory and 8 vCPUs, the recommended number of simultaneous workflows is 4, because there are 8 vCPUs (1 workflow for every 2 vCPUs). In this example, the 4 GB of memory set aside for the OS is subtracted from the 32 GB of total memory. The remaining 28 GB is divided by the number of simultaneous workflows (4), leaving 7 GB. Therefore, the recommended max sort/join memory is 7 GB:

Max Sort/Join Memory Usage for a 32 GB, 8-vCPU instance = (32 GB − 4 GB) / 4 simultaneous workflows = 7 GB
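The following minimal Python sketch captures the two rules above (2 vCPUs per simultaneous workflow; 4 GB reserved for the OS). The function name and defaults are our own; cross-check the results against Table 2 below.

    def worker_settings(vcpus: int, total_memory_gb: float, os_memory_gb: float = 4.0):
        """Suggest simultaneous workflows and max sort/join memory for a Worker."""
        simultaneous = max(1, vcpus // 2)                  # 2 vCPUs per workflow
        sort_join_gb = (total_memory_gb - os_memory_gb) / simultaneous
        return simultaneous, sort_join_gb

    # Example from the text: 8 vCPUs and 32 GB -> 4 workflows, 7 GB per workflow.
    print(worker_settings(8, 32))    # (4, 7.0)
    print(worker_settings(16, 64))   # (8, 7.5) -- matches Table 2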
The following table shows a list of precomputed values for suggested max sort/join memory.

Instance vCPUs | Suggested Simultaneous Workflows | Total Memory (GB) | OS Memory (constant, GB) | Suggested Max Sort/Join Memory (GB per thread)
4              | 2                                | 16                | 4                        | 6
8              | 4                                | 32                | 4                        | 7
16             | 8                                | 32                | 4                        | 3.5
16             | 8                                | 64                | 4                        | 7.5
32             | 16                               | 128               | 4                        | 7.8

Table 2: Examples of suggested max sort/join memory

Database Performance

Using a user-managed MongoDB cluster allows you to control and tune the performance of the Alteryx Server persistence tier.

Availability Considerations

Except for the Controller, you can scale out the other major Alteryx Server components to multiple instances. Scaling the Worker, Gallery, and Database instances increases their availability, performance, or both. You can create a standby Controller to ensure availability in the event of a Controller issue, instance failure, or Availability Zone issue.

For high availability, you should deploy Worker, Gallery, and Database instances in two or three Availability Zones. Consider deploying instances in more than one AWS Region for faster disaster recovery, to improve interactive access to data for your regional customers, and to reduce latency for users in different geographies.

Figure 5: High availability deployment of Alteryx Server on AWS

For high availability deployments, AWS recommends approximately 3-5 Worker instances, 2-4 Gallery instances behind an ELB Application Load Balancer, and 3-5 MongoDB instances configured in a MongoDB replica set. The Worker instances depicted above were created with Amazon EC2 Auto Scaling. The exact numbers and instance sizes depend on costs and the performance sizing specific to your organization.

For multi-Region deployments, ensure that each AWS Region has a Controller instance that can be used with a DNS name (Elastic IP addresses are local to a single AWS Region). We recommend using Amazon Route 53 in an active-passive configuration to ensure there is only one active Controller. The passive Controllers can be fully configured, but Amazon Route 53 will only route traffic to a passive Controller if the active Controller becomes unavailable. A sketch of such a failover record follows Table 3 below.

Management Considerations

Many of the configurations we discussed allow for more flexible management of Alteryx Server. Control of the persistence tier gives you more options when replicating and backing up the database. Placing the Gallery behind a load balancer allows for easier maintenance when upgrading or deploying Gallery instances. From an operational standpoint, a scaled install gives you more options and less downtime for backups, monitoring, database permissions, and third-party tools. Remember, scaling Alteryx Server will have licensing implications based on the number of vCPUs in the deployment: you need to license all deployed nodes, regardless of function.

Sizing and Scaling Summary

A high-level overview of reasons and decisions for sizing and scaling Alteryx Server is given in the table below.

Action: Controller scaled up (larger instance size)
Performance impact: Can help increase Gallery performance.
Availability impact: No major impact.
Management impact: No major impact.

Action: Controller scaled out (more Controller instances)
Performance impact: No major impact.
Availability impact: Having multiple Controllers requires that one Controller is on cold or warm standby.
Management impact: Requires customized scripts or triggers to automatically fail over. You can create these with AWS services such as CloudWatch and SNS.

Action: Worker scaled up (larger instance size)
Performance impact: Decreased workflow completion times. For best results, use instance types with more memory or optimized memory.
Availability impact: No major impact.
Management impact: No major impact.

Action: Worker scaled out (more Worker instances)
Performance impact: More concurrent workflows can be run.
Availability impact: More resiliency to Worker instance failures.
Management impact: Reduced downtime during maintenance.

Action: Gallery scaled out (more Gallery instances)
Performance impact: Better performance for more Gallery users.
Availability impact: More resiliency to Gallery instance failures.
Management impact: Reduced downtime during maintenance.

Action: User-managed MongoDB database
Performance impact: More control for tuning and performance.
Availability impact: Clustering and replication in MongoDB allow for higher availability.
Management impact: Gives you more control over the database, but requires some knowledge about NoSQL databases.

Table 3: Scaling actions and impact on performance, availability, and management
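As an illustration of the active-passive pattern described above, here is a hedged boto3 sketch that upserts a primary failover record for the Controller's DNS name. The hosted zone ID, record name, IP address, and health check ID are hypothetical placeholders; a matching SECONDARY record for the passive Controller would be created the same way.

    import boto3

    route53 = boto3.client("route53")

    # Hypothetical values -- substitute your own zone, name, and health check.
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "controller.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-controller",
                    "Failover": "PRIMARY",          # Route 53 active-passive routing
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                },
            }]
        },
    )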
When considering Alteryx Server deployment options and which components to scale, it's best to consider your organization's performance, availability, and management needs. For example, your organization may have a few users creating analytic workflows but hundreds of users consuming those workflows via the Gallery. In that case, you might need minimal infrastructure to handle analytic workflows and the database, while the Controller, which aids the Gallery instances, would need to be a larger instance, and the Gallery would be best served by several instances behind a load balancer. If you are concerned with data loss, you should create a user-managed MongoDB cluster and make sure that it is backed up regularly to multiple locations.

Operations

This section discusses backup, restore, and monitoring operations.

Backup and Restore

You can use the Amazon Elastic Block Store (Amazon EBS) snapshot feature to back up the Controller, Worker, and Database instances. You can use these snapshots to restore data in the event of a failure. It is best to stop the Controller and Database tier before taking a snapshot. The Gallery is stateless and does not need to be backed up. For details on how to perform backup and recovery operations if you are using a user-managed MongoDB database, see the MongoDB documentation for Amazon EC2 Backup and Restore.

Monitoring

AWS provides robust monitoring of Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon EBS volumes, and other services via Amazon CloudWatch. Amazon CloudWatch can be triggered to send a notification via Amazon Simple Notification Service (Amazon SNS) or email upon meeting user-defined thresholds on individual AWS services. Amazon CloudWatch can also be configured to trigger an auto-recovery action on instance failure.

You can also write a custom metric to Amazon CloudWatch, for example, to monitor the current queue size of workflows in your Controller, and to alarm or trigger automatic responses from those measures. By default, these metrics are not available from Alteryx Server, but they can be parsed from Alteryx logs and custom workflows and exposed to CloudWatch using Amazon CloudWatch Logs.

You can also use third-party monitoring tools to monitor status and performance for Alteryx Server. A free analytics workflow and application is available for reviewing Alteryx Server performance and logs. You can get that tool from the Alteryx support community.
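To make the custom-metric idea concrete, the following is a minimal boto3 sketch, assuming you have already parsed the current job queue length out of the Alteryx logs. The namespace, metric name, threshold, and SNS topic ARN are hypothetical placeholders.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def publish_queue_length(queue_length: int) -> None:
        """Push the Controller's current job queue length to CloudWatch."""
        cloudwatch.put_metric_data(
            Namespace="AlteryxServer",                 # hypothetical namespace
            MetricData=[{
                "MetricName": "JobQueueLength",
                "Value": float(queue_length),
                "Unit": "Count",
            }],
        )

    # One-time setup: alarm (and notify an SNS topic) when the queue stays long.
    cloudwatch.put_metric_alarm(
        AlarmName="alteryx-long-job-queue",
        Namespace="AlteryxServer",
        MetricName="JobQueueLength",
        Statistic="Average",
        Period=300,                  # 5-minute periods
        EvaluationPeriods=2,
        Threshold=25.0,              # tune to your workload
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:alteryx-alerts"],
    )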
Network and Security

This section covers network and security considerations for Alteryx Server deployments.

Connecting On-Premises Resources to Amazon VPC

In order for Alteryx Server to access your on-premises data sources, connect an Amazon Virtual Private Cloud (Amazon VPC) to your on-premises resources. In the following figure, the private subnet contains Alteryx Server. You can place all the Gallery services in a public subnet (not shown) for simple access to the internet and users, or you can configure AWS Direct Connect or use VPN to enable a private peering connection with no public IP addressing required. You can also place Gallery instances or Alteryx Server in the private subnets by configuring a NAT gateway. Scaling, hybrid, or disaster recovery options are also available in this model, with elements of Alteryx Server deployed as needed, either on premises or on AWS.

Figure 6: Options for connecting on-premises services to Alteryx Server on AWS

Alteryx Server often uses information stored on private corporate resources. Be aware of the performance and traffic implications of accessing large amounts of data from outside of AWS. AWS offers several solutions to handle this kind of expected traffic. You can provision a VPN connection to your VPC by provisioning an AWS Managed VPN connection, AWS VPN CloudHub, or a third-party software VPN appliance running on an Amazon EC2 instance deployed in your VPC.

We recommend using AWS Direct Connect to connect to private data sources outside of AWS, as it provides a predictable, low-cost, and high-performance dedicated peering connection. You can also use VPN with Direct Connect to fully encrypt all traffic. This approach fits well into the risk and security compliance standards of many corporations. You may already be using Direct Connect to connect with an existing AWS deployment. It is possible to share Direct Connect and create connections to multiple VPCs, even across AWS accounts, or to provision access to remote Regions. While possible, connecting to data sources directly over the internet from a public subnet is not recommended, due to security concerns. For more details on a variety of connectivity scenarios, see the AWS Direct Connect documentation.

Security Groups

When running Alteryx Server on AWS, be sure to check your security group settings when attempting to add a connection to a data source. You will need to customize your security groups based on your needs, as some data sources may require specific ports. Refer to the documentation on the specific source you are connecting to for the ports and protocols used for traffic.

Port  | Permitted Traffic
3389  | RDP access
80    | HTTP web traffic
443   | HTTPS web traffic
81    | Used only with the AWS Marketplace offering, for client connections
5985  | Used only with the AWS Marketplace offering, for Windows management

Table 4: Security groups for Alteryx Server
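The following hedged boto3 sketch opens the general-purpose ports from Table 4 on a hypothetical security group. The group ID and the CIDR range of your corporate network are placeholders; in practice you would scope each rule as tightly as your topology allows.

    import boto3

    ec2 = boto3.client("ec2")

    SECURITY_GROUP_ID = "sg-0123456789abcdef0"   # hypothetical Alteryx Server group
    CORP_CIDR = "10.0.0.0/8"                     # placeholder corporate network range

    # Ports from Table 4 (omitting 81/5985, which only the Marketplace image uses).
    rules = [(3389, "RDP access"), (80, "HTTP web traffic"), (443, "HTTPS web traffic")]

    ec2.authorize_security_group_ingress(
        GroupId=SECURITY_GROUP_ID,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": CORP_CIDR, "Description": desc}],
        } for port, desc in rules],
    )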
Network Access Control Lists (NACLs)

Amazon VPC and Alteryx Server support NACLs as an optional additional network security component. NACLs are not stateful and tend to be more restrictive, so they are not recommended for general deployments. They may be useful for organizations with specific compliance concerns or other internal security requirements. NACLs are supported for controlling network traffic that relates to Alteryx Server.

Bastion Host (Jump Box)

If Alteryx Server components are placed in a private subnet, we recommend that a bastion host, or jump box, be placed in the public subnet with security group rules that allow traffic between the public jump box and the private server. This adds another level of control and helps limit the types of connections that can reach the Alteryx Server. For details on bastion host deployment on AWS, see the Linux Bastion Hosts on the AWS Cloud Quick Start.

Secure Sockets Layer (SSL)

The Gallery component of Alteryx Server is available over HTTP or HTTPS. If you deploy Gallery instances in a public subnet, we recommend HTTPS. For information on how to properly configure TLS, see the Alteryx Server documentation.

Best Practices

The following sections summarize best practices and tips for deploying Alteryx Server on AWS.

Deployment

• Deploy Alteryx Server on an instance that meets the minimum requirements: Microsoft Windows Server 2008 R2 (or later), at least 8 vCPUs, and at least 1 TB of Amazon Elastic Block Store (Amazon EBS) storage.

• Do not change the Alteryx Server authentication mode once it has been set; changing the authentication mode requires that you reinstall. Microsoft Active Directory (Microsoft AD) and SAML 2.0 are the recommended authentication methods.

• The controller token is unique to each Alteryx Server installation, and administrators must safeguard it.

• Be sure to configure each required database driver on the server machine with the same version that is used for Designer clients.

• Alteryx Server supports two different mechanisms for persistence: MongoDB and SQLite. Choose MongoDB if you need a scalable or highly available architecture. Choose SQLite if you do not need to use Gallery and your deployment is limited to scheduling workloads.

• Worker instances, Gallery instances, and user-managed MongoDB instances can be scaled for deployments supporting user groups of 20 or more.

• If you use the pay-as-you-go AWS Marketplace image for test purposes, be sure to note the 14-day trial period, and remember to turn your instance off once your trial is complete.

Scaling and Availability

• For a more resilient architecture, be sure to scale out Worker, Gallery, and persistence instances across multiple Availability Zones. Consider deploying instances across AWS Regions to reduce latency for users in different geographies or to improve access to data.

• Multiple Gallery instances can be configured behind a load balancer to handle the Gallery services at scale.

• When scaling Worker instances, you should increase the size of all Worker instances, as the Controller does not schedule on specific Worker instances by priority.

• A standby Controller can be deployed for failover. AWS tools such as the AWS CLI, Amazon Route 53, and Amazon CloudWatch can help automate failover.

• Scaling Alteryx Server to more instances will likely have licensing implications because it is licensed by cores.

Network and Security

• Alteryx Server on AWS commonly processes information stored on premises. Be aware of the potential performance and cost implications of using large amounts of data from outside of AWS.

• When using Alteryx Server on AWS, ensure that you check your security group settings when attempting to add a connection to a data source. You will need to customize security groups based on your needs, as some data sources may require specific ports. Refer to the documentation on the specific database you are connecting to for the ports and protocols used for traffic.

• Amazon VPC and Alteryx Server support NACLs as an optional additional network security component. NACLs may be useful for organizations with specific compliance concerns or other internal security requirements.

• Be sure your Alteryx Designer clients have connectivity to any Controllers you plan to schedule workflows on. This is an easily missed requirement when Alteryx Server is deployed in the cloud.
Performance

• Instance types with a larger ratio of memory to vCPUs will often run Alteryx workflows faster. Consider EC2 memory-optimized instance types, such as the R4, when working to improve performance.

• We recommend two vCPUs per simultaneous workflow.

• The user-defined Worker setting max sort/join memory manages the memory available to workflows that are RAM intensive. The best practice is to take the total memory available to the machine, subtract a suggested 4 GB of memory for OS processes, and divide the result by the number of simultaneous workflows assigned. For example: (32 GB − 4 GB) / 4 simultaneous workflows = 7 GB max sort/join memory.

• For workflows using geospatial tools, use EBS Provisioned IOPS SSD (io1) or EBS General Purpose SSD (gp2) volumes that have been optimized for I/O-intensive tasks to increase performance.

Conclusion

AWS lets you deploy scalable analytic tools such as Alteryx Server. Using Alteryx Server on AWS is a cost-effective and flexible way to manage and deploy various configurations of Alteryx Server. In this whitepaper, we have discussed several considerations and best practices for deploying Alteryx Server on AWS. Please send comments or feedback on this paper to the paper's authors or helpfeedback@alteryx.com.

Contributors

The following individuals and organizations contributed to this document:

• Mike Ruiz, Solutions Architect, AWS
• Claudine Morales, Solutions Architect, AWS
• Matt Braun, Product Manager, Alteryx
• Mark Hayford, Amazon Web Services Architect, Alteryx

Further Reading

For additional information, see the following:

• Alteryx Community
• Alteryx Knowledge Base
• Alteryx Server Install Guide
• Alteryx SSL Information
• Alteryx Documentation

Document Revisions

Date         | Description
August 2019  | Edits to clarify information about simultaneous workflows
August 2018  | First publication
General
An_Overview_of_AWS_Cloud_Data_Migration_Services
This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/overview-aws-cloud-data-migration-services/overview-aws-cloud-data-migration-services.html

An Overview of AWS Cloud Data Migration Services

Published May 1, 2016; updated June 13, 2021

This paper has been archived. For the latest technical content, refer to the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Cloud Data Migration Challenges
  Security and Data Sensitivity
Cloud Data Migration Tools
  Time and Performance
  Choosing a Migration Method
  Self-managed Migration Methods
  AWS Managed Migration Tools
Cloud Data Migration Use Cases
  Use Case 1: One-Time Massive Data Migration
  Use Case 2: Continuous On-premises Data Migration
  Use Case 3: Continuous Streaming Data Ingestion
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

One of the most challenging steps required to deploy an application infrastructure in the cloud is moving data into and out of the cloud. Amazon Web Services (AWS) provides multiple services for moving data, and each solution offers various levels of speed, security, cost, and performance. This whitepaper outlines the different AWS services that can help seamlessly transfer data to and from the AWS Cloud.

Introduction

As you plan your data migration strategy, you will need to determine the best approach to use based on the specifics of your environment. There are many different ways to lift and shift data to the cloud, such as one-time large batches, constant device streams, intermittent updates, or even hybrid data storage combining the AWS Cloud and on-premises data stores. These methods can be used individually or together to help streamline the realities of cloud data migration projects.

Cloud Data Migration Challenges

When planning a data migration, you need to determine how much data is being moved and the bandwidth available for the transfer; together, these determine how long the transfer will take. AWS offers several methods to transfer data into your account, including the AWS Snow Family of storage devices, AWS Direct Connect, and AWS Site-to-Site VPN over your existing internet connection.
The network bandwidth consumed by the migration will not be available for your organization's typical application traffic. In addition, your organization might be concerned about moving sensitive business information from your internal network to a secure AWS environment. Determining the security level your organization requires helps you select the appropriate AWS services for your data migration.

Security and Data Sensitivity

When customers migrate data, ensuring the security of data both in transit and at rest is critical. AWS takes security very seriously and builds security features into all data migration services. Every service uses AWS Identity and Access Management (IAM) to control programmatic and AWS Management Console access to resources. The following table lists these features.

Table 1 – AWS services security features

AWS Direct Connect
• Provides a dedicated physical connection, with no data transfer over the internet.
• Integrates with AWS CloudTrail to capture API calls made by or on behalf of a customer account.

AWS Snow Family
• Integrates with AWS Key Management Service (AWS KMS) to encrypt data at rest that is stored on AWS Snowcone, Snowball, or Snowmobile.
• Uses an industry-standard Trusted Platform Module (TPM), a dedicated processor designed to detect any unauthorized modifications to the hardware, firmware, or software, to physically secure the AWS Snowcone or Snowball device.

AWS Transfer Family
• SFTP uses SSH, while FTPS uses TLS, to transfer data through a secure and encrypted channel.
• AWS Transfer Family is PCI DSS and GDPR compliant and HIPAA eligible. The service is also SOC 1, 2, and 3 compliant. Learn more about services in scope, grouped by compliance program.
• The service supports three modes of authentication: Service Managed, where you store user identities within the service; Microsoft Active Directory; and Custom (BYO), which enables you to integrate an identity provider of your choice. Service Managed authentication is supported for server endpoints that are enabled for SFTP only.
• You can use Amazon CloudWatch to monitor your end users' activity, and use AWS CloudTrail to access a record of all S3 API operations invoked by your server to service your end users' data requests.

AWS DataSync
• All data transferred between the source and destination is encrypted via Transport Layer Security (TLS), which replaced Secure Sockets Layer (SSL). Data is never persisted in AWS DataSync itself. The service supports using default encryption for S3 buckets, Amazon EFS file system encryption of data at rest, and Amazon FSx for Windows File Server encryption at rest and in transit.
• When copying data to or from your premises, there is no need to set up a VPN or tunnel, or to allow inbound connections. Your AWS DataSync agent can be configured to route through a firewall using standard network ports.
• Your AWS DataSync agent connects to DataSync service endpoints within your chosen AWS Region. You can choose to have the agent connect to public internet-facing endpoints, Federal Information Processing Standards (FIPS) validated endpoints, or endpoints within one of your VPCs.

AWS Storage Gateway
• Encrypts all data in transit to and from AWS by using SSL/TLS.
• All data in AWS Storage Gateway is encrypted at rest using AES-256, while data transfers are encrypted with AES-128-GCM or AES-128-CCM.
• Authentication between your gateway and iSCSI initiators can be secured by using Challenge-Handshake Authentication Protocol (CHAP).

Amazon S3 Transfer Acceleration
• Access to Amazon S3 can be restricted by granting other AWS accounts and users permission to perform the resource operations by writing an access policy.
• Encrypt data at rest by performing server-side encryption using Amazon S3-Managed Keys (SSE-S3), AWS Key Management Service (KMS) Managed Keys (SSE-KMS), or Customer-Provided Keys (SSE-C), or by performing client-side encryption using an AWS KMS-managed customer master key (CMK) or a client-side master key.
• Data in transit can be secured by using SSL/TLS or client-side encryption.
• Enable Multi-Factor Authentication (MFA) Delete for an Amazon S3 bucket.

Amazon Kinesis Data Firehose
• Data in transit can be secured by using SSL/TLS.
• If you send data to your delivery stream using PutRecord or PutRecordBatch, or if you send the data using AWS IoT, Amazon CloudWatch Logs, or CloudWatch Events, you can turn on server-side encryption by using the StartDeliveryStreamEncryption operation.
• You can also enable SSE when you create the delivery stream.
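As one concrete example of the encryption-at-rest options listed for Amazon S3 above, the following boto3 sketch uploads an object with SSE-KMS enabled. The bucket name, file name, and KMS key ARN are hypothetical placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and KMS key -- substitute your own resources.
    with open("data.csv", "rb") as body:
        s3.put_object(
            Bucket="my-migration-bucket",
            Key="migrated/data.csv",
            Body=body,
            ServerSideEncryption="aws:kms",   # encrypt at rest with SSE-KMS
            SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-ef01",
        )
    # Data in transit is protected by TLS on the HTTPS endpoint by default.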
Cloud Data Migration Tools

This section discusses AWS managed and self-managed migration tools, with a brief description of how each solution works. You can select AWS managed or self-managed migration methods, and make your choice based on your specific use case.

Time and Performance

When you migrate data from your on-premises storage to AWS storage services, you want to take the least amount of time to move the data over your internet connection, with minimal disruption to existing systems. To calculate the number of days required to migrate a given amount of data, you can use the following formula:

Number of Days = (Total Bytes × 8 bits per byte) / (Circuit bandwidth in bits per second × Network utilization percentage × 3,600 seconds per hour × Available hours per day)

For example, if you have a Gigabit Ethernet connection (1 Gbps) to the internet and 100 TB of data to move to AWS, theoretically the minimum time it would take over the network connection at 80 percent utilization, transferring 10 hours per day, is approximately 28 days:

(100,000,000,000,000 bytes × 8 bits per byte) / (1,000,000,000 bps × 80 percent × 3,600 seconds per hour × 10 hours per day) = 27.77 days
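This back-of-the-envelope arithmetic is easy to script. The following is a minimal, self-contained Python sketch of the formula above; the function name is our own, and the defaults mirror the worked example.

    def migration_days(terabytes: float,
                       circuit_gbps: float = 1.0,
                       utilization: float = 0.8,
                       hours_per_day: float = 10.0) -> float:
        """Estimate the days needed to push `terabytes` over a network circuit."""
        bits_to_move = terabytes * 1e12 * 8                      # TB -> bits
        bits_per_day = circuit_gbps * 1e9 * utilization * 3600 * hours_per_day
        return bits_to_move / bits_per_day

    # The example from the text: 100 TB over 1 Gbps at 80% for 10 h/day.
    print(round(migration_days(100), 2))   # 27.78 days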
If this amount of time is not practical for you, there are many ways to reduce the migration time for large amounts of data. You can use AWS managed migration tools that automate data transfers and optimize your internet connection to the AWS Cloud. Alternatively, you may develop or purchase your own tools and create your own transfer processes that utilize the native HTTP interfaces to Amazon Simple Storage Service (Amazon S3). For moving small amounts of data from your on-site location to the AWS Cloud, you may use ad hoc methods that get the job done quickly, with minimal use of the automation methods discussed in the AWS migration tools section. For the best results, we suggest the following:

Connection & Data Scale                    | Method                    | Duration
Less than 10 Mbps & less than 100 GB       | Self-managed              | ~3 days
Less than 10 Mbps & between 100 GB - 1 TB  | AWS managed               | ~30 days
Less than 10 Mbps & greater than 1 TB      | AWS Snow Family           | ~weeks
Less than 1 Gbps & between 100 GB - 1 TB   | Self-managed              | ~days
Less than 1 Gbps & greater than 1 TB       | AWS managed / Snow Family | ~weeks

Table 2 – Recommended migration methods

Choosing a Migration Method

There are several factors to consider when choosing the appropriate migration method and tool. As discussed in the previous section, the time allocated to perform data transfers, the volume of data, and network speeds influence the decision between different data migration methods. You should also consider, for each data store, server, or application stack, the number of repetitive steps required to transfer data from source to target, and then evaluate how much these steps vary as they are repeated. In other words, are there unique requirements per data store that require non-trivial changes to the data migration procedures? Then evaluate the level of existing investment in custom tooling and automation in your organization. You will need to determine whether it is more worthwhile to use existing self-managed tooling and automation, or to sunset them in favor of managed services and tools. You can use the following decision tree as a framework to choose a suitable migration method and tool:

Figure 1: Migration method decision tree

Self-managed Migration Methods

Small, one-time data transfers on limited-bandwidth connections may be accomplished using these very simple tools.

Amazon S3 AWS Command Line Interface

For migrating small amounts of data, you can use the Amazon S3 AWS Command Line Interface to write commands that move data into an Amazon S3 bucket. You can upload objects up to 5 GB in size in a single operation. If your object is larger than 5 GB, you can use multipart upload. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and, after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts. Once complete, you can access the object just as you would any other object in your bucket.
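When scripting this yourself rather than using the CLI, the AWS SDKs can drive the same multipart API for you. The following is a minimal boto3 sketch, assuming a hypothetical bucket and local file; TransferConfig makes the multipart threshold and parallelism explicit.

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Use multipart upload for anything over 100 MB, 8 parts in flight at a time.
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,
        multipart_chunksize=100 * 1024 * 1024,
        max_concurrency=8,
    )

    # Hypothetical file and bucket -- upload_file initiates, uploads, and
    # completes the multipart upload on your behalf, retrying failed parts.
    s3.upload_file("backup.tar", "my-migration-bucket", "backups/backup.tar",
                   Config=config)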
Amazon Glacier AWS Command Line Interface

For migrating small amounts of data, you can write commands using the Amazon Glacier AWS Command Line Interface to move data into Amazon Glacier. In a single operation, you can upload archives from 1 byte up to 4 GB in size. However, for archives greater than 100 MB in size, we recommend using multipart upload. Using the multipart upload API, you can upload large archives, up to about 40,000 GB (10,000 parts * 4 GB).

Storage Partner Solutions

Multiple Storage Partner solutions work seamlessly to access storage across on-premises and AWS Cloud environments. Partner hardware and software solutions can help customers do tasks such as backup, create primary file storage/cloud NAS, archive, perform disaster recovery, and transfer files.

AWS Managed Migration Tools

AWS has designed several sophisticated services to help with cloud data migration.

AWS Direct Connect

AWS Direct Connect lets you establish a dedicated network connection between your corporate network and one AWS Direct Connect location. Using this connection, you can create virtual interfaces directly to AWS services, bypassing internet service providers (ISPs) in your network path to your target AWS Region. By setting up private connectivity over AWS Direct Connect, you can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than with internet-based connections.

Using AWS Direct Connect, you can easily establish a dedicated network connection from your premises to AWS at speeds starting at 50 Mbps and up to 100 Gbps. You can use the connection to access Amazon Virtual Private Cloud (Amazon VPC) as well as AWS public services such as Amazon S3. AWS Direct Connect in itself is not a data transfer service; rather, it provides a high-bandwidth connection that can be used to transfer data between your corporate network and AWS with more consistent performance and without ever having the data routed over the internet. Encryption methods may be applied to secure the data transfers over AWS Direct Connect, such as AWS Site-to-Site VPN.

AWS APN Partners can help you set up a new connection between an AWS Direct Connect location and your corporate data center, office, or colocation facility. Additionally, many of our partners offer AWS Direct Connect bundles that provide a set of advanced hybrid architectures that can reduce complexity and provide peak performance. You can extend your on-premises networking, security, storage, and compute technologies to the AWS Cloud using managed hybrid architecture, compliance infrastructure, managed security, and converged infrastructure. With 108 Direct Connect locations worldwide and more than 50 Direct Connect delivery partners, you can establish links between your on-premises network and AWS Direct Connect locations.

With AWS Direct Connect, you pay only for what you use, and there is no minimum fee associated with using the service. AWS Direct Connect has two pricing components: the port-hour rate (based on port speed) and data transfer out (per GB per month). Additionally, if you are using an APN Partner to facilitate an AWS Direct Connect connection, contact the partner to discuss any fees they may charge. For information about pricing, see AWS Direct Connect pricing.

AWS Snow Family

The AWS Snow Family accelerates moving large amounts of data into and out of AWS using AWS-managed hardware and software. The Snow Family, comprising AWS Snowcone, AWS Snowball, and AWS Snowmobile, consists of physical devices with different form factors and capacities. They are purpose-built for efficient data storage and transfer and have built-in compute capabilities.
The AWS Snowcone device is a lightweight, handheld storage device that accommodates field environments where access to power may be limited and Wi-Fi is necessary to make the connection. An AWS Snowball Edge device is rugged enough to withstand a 70 G shock and, at 49.7 pounds (22.54 kg), it is light enough for one person to carry. It is entirely self-contained, with 110-240 VAC power, ships with country-specific power cables, and has an E Ink display and control panel on the front. Each AWS Snowball Edge appliance is weather resistant and serves as its own shipping container.

With AWS Snowball, you have the choice of two devices as of this writing: Snowball Edge Compute Optimized, with more computing capabilities, suited for higher-performance workloads; or Snowball Edge Storage Optimized, with more storage, suited for large-scale data migrations and capacity-oriented workloads.

Snowball Edge Compute Optimized provides powerful computing resources for use cases such as machine learning, full-motion video analysis, analytics, and local computing stacks. These capabilities include 52 vCPUs, 208 GiB of memory, and an optional NVIDIA Tesla V100 GPU. For storage, the device provides 42 TB of usable HDD capacity for S3-compatible object storage or EBS-compatible block volumes, as well as 7.68 TB of usable NVMe SSD capacity for EBS-compatible block volumes. Snowball Edge Compute Optimized devices run Amazon EC2 sbe-c and sbe-g instances, which are equivalent to C5, M5a, G3, and P3 instances.

Snowball Edge Storage Optimized devices are well suited for large-scale data migrations and recurring transfer workflows, as well as local computing with higher capacity needs. Snowball Edge Storage Optimized provides 80 TB of HDD capacity for block volumes and Amazon S3-compatible object storage, and 1 TB of SATA SSD for block volumes. For computing resources, the device provides 40 vCPUs and 80 GiB of memory to support Amazon EC2 sbe1 instances (equivalent to C5).

You transfer your data directly onto the Snowball Edge device using on-premises high-speed connections and ship the device to AWS facilities, where AWS transfers the data off of the device using Amazon's high-speed internal network. The data transfer process bypasses the corporate internet connection and removes the need for AWS Direct Connect services. For datasets of significant size, AWS Snowball is often faster than transferring data via the internet and more cost effective than upgrading your data center's internet connection.

AWS Snowball supports importing data into, and exporting data from, Amazon S3 buckets. From there, the data can be copied or moved to other AWS services, such as Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon FSx File Gateway, and Amazon Glacier.

AWS Snowball is ideal for securely transferring large amounts of data, up to many petabytes, in and out of the AWS Cloud. This approach is effective especially in cases where you don't want to make expensive upgrades to your network infrastructure; if you frequently experience large backlogs of data; if you are in a physically isolated environment; or if you are in an area where high-speed internet connections are not available or cost prohibitive.
In general, if loading your data over the internet would take a week or more, you should consider using the AWS Snow Family. Common use cases include cloud migration, disaster recovery, data center decommissioning, and content distribution. When you decommission a data center, many steps are involved to make sure valuable data is not lost, and the AWS Snow Family can help ensure data is securely and cost-effectively transferred to AWS. In a content distribution scenario, you might use Snowball Edge devices if you regularly receive, or need to share, large amounts of data with clients, customers, or business partners. Snowball appliances can be sent directly from AWS to client or customer locations.

If you need to move massive amounts of data, AWS Snowmobile is an exabyte-scale data transfer service. Each Snowmobile is a 45-foot-long ruggedized shipping container, hauled by a trailer truck, with up to 100 PB of data storage capacity. Snowmobile also handles all of the logistics: AWS personnel transport and configure the Snowmobile, and they work with your team to connect a temporary high-speed network switch to your local network. The local high-speed network facilitates rapid transfer of data from within your data center to the Snowmobile. Once you've loaded all your data, the Snowmobile drives back to AWS, where the data is imported into Amazon S3.

Moving data at this massive scale requires additional preparation, precautions, and security. Snowmobile uses GPS tracking, round-the-clock video surveillance, and dedicated security personnel, and offers an optional security escort vehicle while your data is in transit to AWS. Management of, and access to, the shipping container and the data stored within is limited to AWS personnel using hardware-secured access control methods.

The AWS Snow Family might not be the ideal solution if your data can be transferred over the internet in less than one week, or if your applications cannot tolerate the offline transfer time.

With the AWS Snow Family, as with most other AWS services, you pay only for what you use. Snowball has three pricing components: a service fee (per job), extra-day charges as required, and data transfer out. The service fee includes the first 5 days of on-site Snowcone usage and the first 10 days of on-site Snowball usage. For the destination storage, standard Amazon S3 storage pricing applies. For pricing information, see AWS Snowball pricing. Snowmobile pricing is based on the amount of data stored on the truck per month. For more information about AWS Regions and availability, see AWS Regional Services.
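Snowball jobs are usually created in the AWS Snowball Management Console (as in Use Case 1 later in this paper), but a job can also be created programmatically. The following is a hedged boto3 sketch of one plausible import-job request; the bucket, address ID, role, KMS key, and device choices are hypothetical placeholders of our own, not values from this paper.

    import boto3

    snowball = boto3.client("snowball")

    # Hypothetical resources -- create the address and IAM role beforehand.
    response = snowball.create_job(
        JobType="IMPORT",
        Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::my-migration-bucket"}]},
        Description="Data center decommission - batch 1",
        AddressId="ADID1234ab12-3eec-4eb3-9be6-9374c10eb51b",
        RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",
        KmsKeyARN="arn:aws:kms:us-east-1:123456789012:key/1234abcd-ef01",
        SnowballType="EDGE_S",                 # Snowball Edge Storage Optimized
        SnowballCapacityPreference="T80",
        ShippingOption="SECOND_DAY",
    )
    print(response["JobId"])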
AWS Storage Gateway

AWS Storage Gateway makes backing up to the cloud extremely simple. It connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and the AWS storage infrastructure. The service enables you to securely store data in the AWS Cloud for scalable and cost-effective storage. AWS Storage Gateway supports the three types of storage interfaces used in on-premises environments: file, volume, and tape.

It uses industry-standard network storage protocols, such as Network File System (NFS) and Server Message Block (SMB), that work with your existing applications, enabling data storage through the S3 File Gateway function to store data in Amazon S3. It provides low-latency performance by maintaining an on-premises cache of frequently accessed data, while securely storing all of your data, encrypted, in Amazon S3. Once data is stored in Amazon S3, it can be archived in Amazon S3 Glacier.

For disaster recovery scenarios, AWS Storage Gateway, together with Amazon Elastic Compute Cloud (Amazon EC2), can serve as a cloud-hosted solution that mirrors your entire production environment.

You can download the AWS Storage Gateway software appliance as a virtual machine (VM) image that you install on a host in your data center, or as an EC2 instance. After you've installed your gateway and associated it with your AWS account through the AWS activation process, you can use the AWS Management Console to create gateway-cached volumes, gateway-stored volumes, or a gateway-virtual tape library (VTL), each of which can be mounted as an iSCSI device by your on-premises applications.

Volume Gateway supports iSCSI connections that enable storing of volume data in S3. With caching enabled, you can use Amazon S3 to hold your complete set of data, while caching some portion of it locally for on-premises, frequently accessed data. Gateway-cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to frequently accessed data. You can create storage volumes up to 32 TiB in size and mount them as iSCSI devices from your on-premises application servers. Each gateway configured for gateway-cached volumes can support up to 32 volumes and total volume storage of 1,024 TiB per gateway. Data written to these volumes is stored in Amazon S3, with only a cache of recently written and recently read data stored locally on your on-premises storage hardware.

Gateway-stored volumes store your locally sourced data in cache while asynchronously backing up the data to AWS. These volumes provide your on-premises applications with low-latency access to their entire datasets, while providing durable, off-site backups. You can create storage volumes up to 16 TiB in size and mount them as iSCSI devices from your on-premises application servers. Each gateway configured for gateway-stored volumes can support up to 32 volumes, with total volume storage of 512 TiB. Data written to your gateway-stored volumes is stored on your on-premises storage hardware and asynchronously backed up to Amazon S3 in the form of Amazon EBS snapshots.

A gateway-VTL allows you to perform offline data archiving by presenting your existing backup application with an iSCSI-based VTL consisting of a virtual media changer and virtual tape drives. You can create virtual tapes in your VTL by using the AWS Management Console, and you can size each virtual tape from 100 GiB to 5 TiB. A VTL can hold up to 1,500 virtual tapes, with a maximum aggregate capacity of 1 PiB. After the virtual tapes are created, your backup application can discover them using its standard media inventory procedure. Once created, tapes are available for immediate access and are stored in Amazon S3. Virtual tapes that you need to access frequently should be stored in a VTL.
Data that you don't need to retrieve frequently can be archived to your virtual tape shelf (VTS), which is stored in Amazon Glacier, further reducing your storage costs.

Organizations are using AWS Storage Gateway to support a number of use cases. These use cases include corporate file sharing, enabling existing on-premises backup applications to store primary backups on Amazon S3, disaster recovery, and mirroring data to cloud-based compute resources and then later archiving the data to Amazon Glacier.

With AWS Storage Gateway, you pay only for what you use. AWS Storage Gateway has the following pricing components: gateway usage (per gateway appliance, per month) and data transfer out (per GB, per month). Based on the type of gateway appliance you use, there are snapshot storage usage (per GB, per month) and volume storage usage (per GB, per month) charges for gateway-cached and gateway-stored volumes, and virtual tape shelf storage (per GB, per month), virtual tape library storage (per GB, per month), and retrieval from virtual tape shelf (per GB) charges for the gateway-VTL. For information about pricing, see AWS Storage Gateway pricing.

Amazon S3 Transfer Acceleration (S3 TA)

Amazon S3 Transfer Acceleration (S3 TA) enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. Transfer Acceleration leverages Amazon CloudFront's globally distributed AWS edge locations: as data arrives at an AWS edge location, it is routed to your Amazon S3 bucket over an optimized network path. Transfer Acceleration helps you fully utilize your bandwidth, minimize the effect of distance on throughput, and ensure consistently fast data transfer to Amazon S3, regardless of your client's location. Acceleration primarily depends on your available bandwidth, the distance between the source and destination, and the packet loss rates on the network path. Generally, you will see more acceleration when the source is farther from the destination, when there is more available bandwidth, and/or when the object size is bigger. You can use the online speed comparison tool to preview the performance benefit of uploading data from your location to Amazon S3 buckets in different AWS Regions using Transfer Acceleration.

Organizations are using Transfer Acceleration on a bucket for a variety of reasons. For example, they have customers who upload to a centralized bucket from all over the world, they transfer gigabytes to terabytes of data on a regular basis across continents, or they underutilize the available bandwidth over the internet when uploading to Amazon S3. The best part about using Transfer Acceleration on a bucket is that the feature can be enabled by a single click of a button in the Amazon S3 console; this makes the accelerated endpoint available to use in place of the regular Amazon S3 endpoint.

With Transfer Acceleration, you pay only for what you use and for transferring data over the accelerated endpoint. Transfer Acceleration has the following pricing components: data transfer in (per GB), data transfer out (per GB), and data transfer between Amazon S3 and another AWS Region (per GB). Transfer Acceleration pricing is in addition to the data transfer (per GB, per month) pricing for Amazon S3. For information about pricing, see Amazon S3 pricing.
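Besides the console toggle, both steps, enabling acceleration on a bucket and sending traffic through the accelerated endpoint, can be done from an SDK. Here is a minimal boto3 sketch, assuming a hypothetical bucket and file.

    import boto3
    from botocore.config import Config

    # One-time: enable Transfer Acceleration on the bucket.
    boto3.client("s3").put_bucket_accelerate_configuration(
        Bucket="my-migration-bucket",
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Then create a client that uses the accelerated (edge) endpoint.
    s3_accel = boto3.client(
        "s3", config=Config(s3={"use_accelerate_endpoint": True})
    )
    s3_accel.upload_file("big-file.bin", "my-migration-bucket",
                         "uploads/big-file.bin")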
AWS Kinesis Data Firehose

Amazon Kinesis Data Firehose is the easiest way to load streaming data into AWS. The service can capture and automatically load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, or Splunk. Amazon Kinesis Data Firehose is a fully managed service, making it easier to capture and load massive volumes of streaming data from hundreds of thousands of sources. The service can automatically scale to match the throughput of your data and requires no ongoing administration. Additionally, Amazon Kinesis Data Firehose can batch, compress, transform, and encrypt data before loading it. This process minimizes the amount of storage used at the destination and increases security.

You can use Kinesis Data Firehose by creating a delivery stream and sending data to it. The streaming data originators are called data producers. A producer can be as simple as a PutRecord() or PutRecordBatch() API call, or you can build your producers using Kinesis Agent. You can send a record (before base64 encoding) as large as 1,000 KiB. Additionally, Kinesis Data Firehose buffers incoming streaming data to a certain size, called a buffer size (1 MiB to 128 MiB), or for a certain period of time, called a buffer interval (60 to 900 seconds), before delivering it to destinations.

With Amazon Kinesis Data Firehose, you pay only for the volume of data you transmit through the service. Amazon Kinesis Data Firehose has a single pricing component: data ingested (per GiB), which is calculated as the number of data records you send to the service times the size of each record, rounded up to the nearest 5 KiB. There may be charges associated with PUT requests and storage on Amazon S3 and Amazon Redshift, and Amazon Elasticsearch instance hours, based on the destination you select for loading data. For information about pricing, see Amazon Kinesis Data Firehose pricing.

AWS Transfer Family

If you are looking to modernize your file transfer workflows for business processes that are heavily dependent on FTP, SFTP, and FTPS, the AWS Transfer Family service provides fully managed file transfers in and out of Amazon S3 buckets and Amazon EFS shares. The AWS Transfer Family uses a highly available, multi-AZ architecture that automatically scales to add capacity based on your file transfer demand. This means no more FTP, SFTP, and FTPS servers to manage. The AWS Transfer Family allows the authentication of users through multiple methods, including self-managed AWS Directory Service, on-premises Active Directory systems through AWS Managed Microsoft AD connectors, or custom identity providers. Custom identity providers may be configured through Amazon API Gateway, enabling custom configurations. DNS entries used by existing users, partners, and applications are maintained using Route 53 for minimal disruption and seamless migration. With your data residing in Amazon S3 or Amazon EFS, you can use other AWS services for analytics and data processing workflows.

There are many use cases that require a standards-based file transfer protocol like FTP, SFTP, or FTPS. AWS Transfer Family is a good fit for secure file sharing between an organization and third parties. Examples of data that are shared between organizations are large files such as audio/video media files, technical documents, research data, and EDI data such as purchase orders and invoices. Another use case is providing a central location
where users can download and globally access your data This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Mi gration Services 17 securely A third use case is to facilitate data ingestion for a data lake Organizations and third parties can FTP SFTP or FTPS research analytics or busine ss data into an Amazon S3 bucket which can then be further processed and analyzed With the AWS Transfer Family you only pay for the protocols you have enabled for access to your endpoint and the amount of data transferred over each of the protocols There are no upfront costs and no resources to manage yourself You select the protocols identity provider and endpoint configuration to enable transfers over the chosen protocols You are billed on an hourly basis for each of the protocols enabled to acce ss your endpoint until the time you delete it You are also billed based on the amount of data (Gigabytes) uploaded and downloaded over each of the protocols For more details on pricing per region see AWS Transfer Family pricing Third Party Connectors Many of the most popular third party backup software packages such as CommVault Simpana and Veritas NetBackup include Amazon S3 connectors This allows the backup software to point direc tly to the cloud as a target while still keeping the backup job catalog complete Existing backup jobs can simply be rerouted to an Amazon S3 target bucket and the incremental daily changes are passed over the Internet Lifecycle management policies can m ove data from Amazon S3 into lower cost storage tiers for archival status or deletion Eventually and invisibly local tape and disk copies can be aged out of circulation and tape and tape automation costs can be entirely removed These connectors can be used alone or they can be used with a gateway provided by AWS Storage Gateway to back up to the cloud without affecting or re architecting existing on premises processes Backup administrators will appreciate the integration into their d aily console activities and cloud architects will appreciate the behind the scenes job migration into Amazon S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 18 Cloud Data Migration Use Cases Use Case 1: One Time Massive Data Migration Figure 2 Onetime massive data migra tion In use case 1 a customer goes through the process of decommissioning a data center and moving the entire workload to the cloud First all the current corporate data needs to be migrated To complete this migration AWS Snowball appliances are used to move the data from the customer’s existing data center to an Amazon S3 bucket in the AWS Cloud 1 Customer creates a new data transfer job in the AWS Snowball Management Console by providing the following information a Choose Import into Amazon S3 to start c reating the import job b Enter the shipping address of the corporate data center and shipping speed (one or two day) c Enter job details such as name of the job destination AWS Region destination Amazon S3 bucket to receive the imported data and Snowba ll Edge device type This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 19 d 
Enter security settings indicating the IAM role Snowball assumes to import the data and AWS KMS master key used to encrypt the data within Snowball e Set Amazon Simple Notification Service (SNS) notification options and provide a list o f comma separated email addresses to receive email notifications for this job Choose which job status values trigger notifications f Download AWS OpsHub for Snow family to manage yo ur devices and their local AWS services With AWS OpsHub you can unlock and configure single or clustered devices transfer files and launch/manage instances running on Snow Family devices 2 After the job is created AWS ships the Snowball Appliances to th e customer data center by AWS In this example the customer is importing 200 TB of data into Amazon S3 they will need to create three Import jobs of 80 TB Snowball Edge Storage Optimized capacity 3 After receiving the Snowball appliance the customer performs the following tasks a Customer connects the powered off appliance to their internal network and uses the supplied power cables to connect to a power outlet b After the Snowball is ready the customer uses the E Ink display to choose the network settings and assign an IP address to the appliance 4 The customer transfers the data to the Snowball appliance using the following steps a Download the credentials consisting of a manifest file and an unlock code for a specific Snowball job from AWS Snow Family Management Console b Install the Snowball Client on an on premises machine to manage the flow of data from the on premise s data source to the Snowball c Access the Snowball client using the terminal or command prompt on the workstation and typing the following command: snowball Edge unlockdevice endpoint [https:// Snowball IP Address] manifest [Path/to/manifest/file] –unlockcode [29 character unlock code] d Begin transferring data onto the Snowball using the following tools: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 20 i Version 11614 or earlier of the AWS CLI s3 cp or s3 sync commands Detailed installation and command syntax are found here ii AWS OpsHub which was installed in step 1f Detailed commands and instructions on managing S3 Storage can be found here 5 After the data transfer is complete disconnect the S nowball from your network and seal the Snowball After being properly sealed the return shipping label appears on the E Ink display Arrange UPS pickup of the appliance for shipment back to AWS 6 UPS automatically report s back a tracking number for the job to the AWS Snowball Management Console The customer can access that tracking number and a link to the UPS tracking website by viewing the job's status details in the console 7 After the appliance is received at the AWS Region the job status changes from In transit to AWS to At AWS On average it takes a day for data import into Amazon S3 to begin When the import starts the status of the job changes to Importing From this point on it takes an average of two business days for your import to reach Comp leted status You can track status changes through the AWS Snowball Management Console or by Amazon SNS notifications This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 21 Use Case 2: Continuous On 
premises Data Migration Figure 3 Ongoing data migration from onpremises storage solution In use case 2 a customer has a hybrid cloud deployment with data being used by both an on premises environment and systems deployed in AWS Additionally the customer wants a dedicated connection to AWS that provides consisten t network performance As part of the on going data migration AWS Direct Connect acts as the backbone providing a dedicated connection that bypasses the Internet to connect to AWS cloud Additionally the customer deploys AWS Storage Gateway with Gateway Cached Volume s in the data center which sends data to an Amazon S3 bucket in their target AWS region The following steps describe the required steps to build this solution: e The customer creates an AWS Direct Connect connection between their corporate data center and the AWS Cloud This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 22 a To set up the connection using the Connection Wizard ordering type the customer provides the following information using the AWS Direct Connect Console : i Choose a resiliency level 1 Maximum Resiliency (for critical workloads) : You can achieve maximum resiliency for critical workloads by using separat e connections that terminate on separate devices in more than one location This topology provides resiliency against device connectivity and complete location failures 2 High Resiliency (for critical workloads): You can achieve high resiliency for critic al workloads by using two independent connections to multiple locations This topology provides resiliency against connectivity failures caused by a fiber cut or a device failure It also helps prevent a complete location failure 3 Development and Test (non critical or test/dev workloads): You can achieve development and test resiliency for non critical workloads by using separate connections that terminate on separate devices in one location This topology provides resiliency agains t device failure but does not provide resiliency against location failure ii Enter connection settings: 1 Bandwidth – choose from 1Gbps to 100Gbps 2 First location – the first physical location for your first Direct Connect connection 3 First location service provider 4 Second location – the second physical location for your second Direct Connect connection 5 Second location service provider iii Review and create menu : confirm your selections and click create b After the customer creates a connection using the AWS Direct Connect console AWS will send an email within 72 hours The email will include a This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 23 Letter of Authorization and Connecting Facility Assignment (LOA CFA) After receiv ing the LOA CFA the customer will forward it to their network provider so they can order a cross connect for the customer The customer is not able to order a cross connect for themselves in the AWS Direct Connect location if the customer does not already have equipment there The network provider will have to do this for the custome r c After the physical connection is set up the customer create s the virtual interface s within AWS Direct Connect to connect to AWS public services such Amazon S3 d After creating virtual interface s 
the customer runs the AWS Direct Connect failover test to make sure that traffic routes to alternate online virtual interfaces 2 After the AWS Direct Connect connection is setup the customer create s an Amazon S3 bucket into which the on premises data can be backed up 3 The customer deploys the AWS Storage Gateway in their existing data center using following steps : a Deploy a new gateway using AWS Storage Gateway console b Select Volume Gateway Cached volumes for the type of gateway c Download the gateway virtual machine (VM) image and deploy on the on premis es virtualization environment d Provision two local disks to be attached to the VM e After the gateway VM is powered on record the IP address of the machine and then enter the IP address in the AWS Storage Gateway console to activate the gateway 4 After the gateway is activated the customer can configure the volume gateway in the AWS Storage Gateway console: a Configure the local storage by selecting one of the two local disks attached to the storage gateway VM to be used as the upload buffer and cache storage This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 24 b Create volumes on the Amazon S3 bucket 5 The customer connects the Amazon S3 gateway volume as an iSCSI connection through the storage gateway IP address on a client machine 6 After setup is completed and the customer applications write data to t he storage volumes in AWS the gateway at first stores the data on the on premises disks (referred to as cache storage ) before uploading the data to Amazon S3 The cache storage acts as the on premises durable store for data that is waiting to upload to Am azon S3 from the upload buffer The cache storage also lets the gateway store the customer application's recently accessed data on premises for lowlatency access If an application requests data the gateway first checks the cache storage for the data bef ore checking Amazon S3 To prepare for upload to Amazon S3 the gateway also stores incoming data in a staging area referred to as an upload buffer Storage G ateway uploads this buffer data over an encrypted Secure Sockets Layer (SSL) connection to AWS w here it is stored encrypted in Amazon S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Mi gration Services 25 Use Case 3: Continuous Streaming Data Ingestion Figure 4 Continuous streaming data ingestion In use case 3 the customer wants to ingest a social media feed continuously in Amazon S3 As part of the continuous data migration the customer uses Amazon Kinesis Data Firehose to ingest data without having to provision a dedicated set of servers 1 The c ustomer creates an Amazon Kinesis Data Firehose Delivery Stream using the following steps in the Amazon Kinesis Data Firehose console : a Choose the Delivery Stream name b Choose the Amazon S3 bucket; c hoose the IAM role that grants Firehose access to Amazon S3 bucket c Firehose buffers incoming records before delivering the data to Amazon S3 The customer chooses Buffer Size (1 128 MBs) or Buffer Interval (60 900 seconds) Whichever condition is satisfie d first triggers the data delivery to Amazon S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: 
https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 26 d The customer chooses from three compression formats (GZIP ZIP or SNAPPY) or no data compression e The customer chooses whether to encrypt the data or not with a key from the list of AWS Key Management S ervice (AWS KMS) keys that they own 2 The customer sends the streaming data to an Amazon Kinesis Firehose delivery stream by writing appropriate code using AWS SDK Conclusion This whitepaper walked you through different AWS managed and selfmanaged storage migration options Additionally the pap er covered different use cases showing how multiple storage services can be used together to solve different migration needs Contributors Contributors to this document include: • Shruti Worlikar Solutions Architect Amazon Web Services • Kevin Fernandez Sr Solutions Architect Amazon Web Services • Scott Wainner Sr Solutions Architect Amazon Web Services Further Reading For additional information see : • AWS Direct Connect • AWS Snow Family • AWS Storage Gateway • AWS Kinesis Data Firehose • Storage Partner Solutions This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 27 Document revisions Date Description July 13 2021 Repaired broken links Updated Time/Performance characteristics Added decision tree Added AWS Transfer Family Updated with new AWS Snow Family services Updated procedures in use cases May 2016 First publication
General
A_Practical_Guide_to_Cloud_Migration_Migrating_Services_to_AWS
Archived A Practical Gui de to Cl oud Migration Migratin g Service s to AWS December 2015 This paper has been archived For the latest technical content see: https://docsawsamazoncom/prescriptiveguidance/latest/mrpsolution/mrpsolutionpdfArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 2 of 13 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice C ustomers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document do es not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this docum ent is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 3 of 13 Contents Abstract 3 Introduction 4 AWS Cloud Adoption Framework 4 Manageable Areas of Focus 4 Successful Migrations 5 Breaking Down the Economics 6 Understand OnPremises Costs 6 Migration Cost Considerations 8 Migration Options 10 Conclusion 12 Further Reading 13 Contributors 13 Abstract To achieve full benefits of moving applications to the Amazon Web Services (AWS) platform it is critical to design a cloud migration model that delivers optimal cost efficiency This includes establishing a compelling business case acquiring new skills within the IT organization implemen ting new business processes and defining the application migration methodology to transform your business model from a traditional on premises computing platform to a cloud infrastructure ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 4 of 13 Perspective Areas of Focus Introduction Cloudbased computing introduces a radical shift in how technology is obtained used and managed as well as how organizations budget and pay for technology services With the AWS cloud platform project teams can easily configure the virtual network using t heir AWS account to launch new computing environments in a matter of minutes Organizations can optimize spending with the ability to quickly reconfigure the computing environment to adapt to changing business requirements Capacity can be automatically sc aled —up or down —to meet fluctuating usage patterns Services can be temporarily taken offline or shut down permanently as business demands dictate In addition with pay peruse billing AWS services become an operational expense rather than a capital expense AWS Cloud Adoption Framework Each organization will experience a unique cloud adoption journey but benefit from a structured framework that guides them through the process of transforming their people processes and technology The AWS Cloud Adopt ion Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey Guidance and best practices prescribed within the framework can help you build a comprehensive approach to cloud comput ing across your organization throughout your IT lifecycle Manageable Areas of Focus The AWS CAF 
breaks down the complicated planning process into manageable areas of focus Perspectives represent top level areas of focus spanning people process and te chnology Components identify specific aspects within each Perspective that require attention while Activities provide prescriptive guidance to help build actionable plans The AWS Cloud Adoption Framework is flexible and adaptable allowing organizations to use Perspectives Components and Activities as building blocks for their unique journey Business Perspective Focuses on identifying measuring and creating business value using technology services The Components and Activities within the Business Perspective can help you develop a business case for cloud align ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 5 of 13 business and technology strategy and support stakeholder engagement Platform Perspective Focuses on describing the structure and relationship of technology elements and services in complex IT environments Components and Activities within the Perspective can help you develop conceptual and functional models of your IT environment Maturity Perspective Focuses on defining the target state of an organization's capabilities measuring maturity and optimizing resources Components within Maturity Perspective can help assess the organization's maturity level develop a heat map to prioritize initiatives and sequence initiatives to develop the roadm ap for execution People Perspective Focuses on organizational capacity capability and change management functions required to implement change throughout the organization Components and Activities in the Perspective assist with defining capability and skill requirements assessing current organizational state acquiring necessary skills and organizational re alignment Process Perspective Focuses on managing portfolios programs and proj ects to deliver expected business outcome on time and within budget while keeping risks at acceptable levels Operations Perspective Focuses on enabling the ongoing operation of IT environments Components and Activities guide operating procedures service management change management and recovery Security Perspective Focuse s on helping organizations achieve risk management and compliance goals with guidance enabling rigorous methods to describe structure of security and compliance processes systems and personnel Components and Activities assist with assessment control selection and compliance validation with DevSecOps principles and automation Successful Migrations The path to the cloud is a journey to business results AWS has helped hundreds of customers achieve their business goals at every stage of their journey While every organization’s path will be unique there are common patterns approaches and best pract ices that can be implemented to streamline the process 1 Define your approach to cloud computing from business case to strategy to change management to technology 2 Build a solid foundation for your enterprise workloads on AWS by assessing and validating yo ur application portfolio and integrating your unique IT environment with solutions based on AWS cloud services Perspective Areas of Focus ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 6 of 13 3 Design and optimize your business applications to be cloud aware taking direct advantage of the benefits of AWS services 4 Meet your internal and external compliance requirements by developing and implementing automated security policies 
and controls based on proven validated designs Early planning communication and buy in are essential Understanding the forcing function (tim e cost availability etc) is key and will be different for each organization When defining the migration model organizations must have a clear strategy map out a realistic project timeline and limit the number of variables and dependencies for trans itioning on premises applications to the cloud Throughout the project build momentum with key constituents with regular meetings and reporting to review progress and status of the migration project to keep people enthused while also setting realistic ex pectations about the availability timeframe Breaking Down the Economics Understand On Premises Costs Having a clear understanding of your current costs is an important first step of your journey This provides the baseline for defining the migration model that delivers optimal cost efficiency Onpremises data centers have costs associated with the servers storage networking power cooling physical space and IT labor required to support applications and services running in the production environment Although many of these costs will be eliminated or reduced after applications and infrastructure are moved to the AWS platform knowing your current run rate will help determine which applications are good candidates to move to AWS which applications need to be rewrit ten to benefit from cloud efficiencies and which applications should be retired The following questions should be evaluated when calculating the cost of on premises computing: Understanding Costs To build a migration model for optimal efficiency it is important to accurately understand the current costs of running onpremises applications as well as the interim costs incurred during the transition ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 7 of 13 “Georgetown’s modernization strategy is not just about upgrading old systems; it is about changing the way we do business building new partnerships with the community and working to embrace innovation Cloud has been an important component of this Although we thought the primary driver would be cost savings we have found that agility innovation and the opportuni ty to change paths is where the true value of the cloud has impacted our environment “Traditional IT models with heavy customization and sunk costs in capital infrastructures —where 90% of spend is just to keep the trains running —does not give you the opp ortunity to keep up and grow” Beth Ann Bergsmark Interim Deputy CIO and AVP Chief Enterprise Architect Georgetown University  Labor How much do you spend on maintaining your environment (broken disks patching hosts servers going offline etc)?  Network How much bandwidth do you need? What is your bandwidth peak to average ratio? What are you assuming for network gear? What if you need to scale beyond a single rack?  Capacity What is the cost of over provisioning for peak capacity? How do you plan for capacity? How much buffer capacity are you planning on carrying? If small what is your plan if you need to add more? What if you need less capacity? What is your plan to be abl e to scale down costs? How many servers have you added in the past year? Anticipating next year?  Availability / Power Do you have a disaster recovery (DR) facility? What was your power utility bill for your data center(s) last year? Have you budgeted for both average and peak power requirements? 
Do you have separate costs for cooling/ HVAC? Are you accounting for 2N power? If not what happens when you have a power issue to your rack?  Servers What is your average server utilization? How much do you overpr ovision for peak load? What is the cost of over provisioning?  Space Will you run out of data center space? When is your lease up? ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 8 of 13 Migration Cost Considerations To achieve the maximum benefits of adopting the AWS cloud platform new work pract ices that drive efficiency and agility will need to be implemented:  IT staff will need to acquire new skills  New business processes will need to be defined  Existing business processes will need to be modified Migration Bubble AWS uses the term “migration bubble” to describe the time and cost of moving applications and infrastructure from on premises data centers to the AWS platform Although the cloud can provide significant savings costs may increase as you move into the migration bubble It i s important to plan the migration to coincide with hardware retirement license and maintenance expiration and other opportunities to reduce cost The savings and cost avoidance associated with a full all in migration to AWS will allow you to fund the mig ration bubble and even shorten the duration by applying more resources when appropriate Time Figure 1: Migration Bubble Level of Effort The cost of migration has many levers that can be pulled in order to speed up or slow down the process including labor process tooling consulting and technology Each of these has a corresponding cost associated with it based on the level of effort required to move the application to the AWS platform Migration Bubble Planning • • • • • • Planning and Assessment Duplicate Environments Staff Training Migration Consulting 3rd Party Tooling Lease Penalties Operation and Optimization Cost of Migration $ ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 9 of 13 To calculate a realistic total cost of ownership (TCO) you need to understand what these costs are and plan for them Cost considerations include items such as:  Labor During the transition existing staff will need to continue to maintain the production environment learn new skills and decommission the old infrastructure once the migration is complete Additional labor costs in the migration bubble include:  Staff time to plan and assess project scope and project plan to migrate applications and infrastructure  Retaining consulting partners with the expertise to streamline migration of applications and infrastructure as well as training staff with new skills  Due to the general lack of cloud experience for most organization s it is necessary to bring in outside consulting support to help guide the process  Process Penalty fees associated with early termination of contracts may be incurred (facilities software licenses etc) once applications or infrastructure are decommissioned  The cost of tooling to automate the migration of data and virtual machines from on premises to AWS  Technology Duplicate environments will be required to keep production applications/infrastructure available while transitioning to the AWS platform Cost considerations include:  Cost to maintain production environment during migration  Cost of AWS platform comp onents to run new cloud based applications  Licensing of automated migration tools license to accelerate the migration process 
ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 10 of 13 “I wanted to move to a model where we can deliver more to our citizens and r educe the cost of delivering those services to them I wanted a product line that has the ability to scale and grow with my department AWS was an easy fit for us and the way we do business” Chris Chiancone CIO City of McKinney City of McKinney City of McKinney Texas Turns to AWS to Deliver More Advanced Services for Less Money The City of McKinney Texas about 15 miles north of Dallas and home to 155000 people was ranked the No 1 Best Place to live in 2014 by Money Magazine The city’s IT department is going all in on AWS and uses the platform to run a wide range of services and applications such as its land management and records management systems By using AWS the city’s IT department can focus on delivering new and better services for its fast growing population and city employees instead of spending resources buying and maintaining IT infrastructure City of McKinney chose AWS for our ability to scale and grow with the needs of the city’s IT department AWS provides an easy fit for the way the city does business Without having to own the infrastructure the C ity of McKinney has the ability to use cloud resources to address business needs By moving from a CapEx to an OpEx model they can now return funds to critical city projects Migration Options Once y ou understand the current costs of an on premises production system the next step is to identify applications that will benefit from cloud cost and efficiencies Applications are either critical or strategic If they do not fit into either category they should be taken off the priority list Instead categorize these as legacy applications and determine if they need to be replaced or in some cases eliminated Figure 2 illustrates decision points that should be considered in ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 11 of 13 “A university is really a small city with departments running about 1000 diverse small services across at the university We made the decision to go down the cloud journey and have been working with AWS for the past 4 years In building our business case we wanted the ability to give our customers flexible IT services th at were cost neutral “We embraced a cloud first strategy with all new services a built in the cloud In parallel we are migrating legacy services to the AWS platform with the goal of moving 80% of these applications by the end of 2017” Mike Chapple P hD Senior Director IT Services Delivery University of Notre Dame selecting applications to move to the AWS platform focusing on the “6 Rs” — retire retain re host re platform re purchase and re factor Decommission Refactor for AWS Rebuild Application Architecture AWS VM Import Org/Ops Change Do Not Move Move the App Infrastructure Design Build AWS Lift and Shift (Minimal Change) Determine Migration 3rd Party Tools Impact Analysis Management Plan Identify Environment Process Manually Move App and Data Ops Changes Migration and UAT Testing Signoff Operate Discover Assess (Enterprise Architecture and Determine Migration Path Application Lift and Shift Determine Migration Process Plan Migration and Sequencing 3rd Party Migration Tool Tuning Cutover Applications) Vendor S/PaaS (if available) Move the Application Refactor for AWS Recode App Components Manually Move App and Data Architect AWS Environment Replatform (typically legacy applications) Rearchitect 
Application Recode Application and Deploy App Migrate Data Figure 2: Migration Options Applications that deliver increased ROI through reduced operation costs or deliver increased business results should be at the top of the priority list Then you can determine the best migration path for each workload to optimize cost in the migration process ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 12 of 13 Conclusion Many organizations are extending or moving their business applications to AWS to simplify infrastructure management deploy quicker provide greater availability increase agility allow for faster innovation and lower cost Having a clear understanding of existing infrastructure costs the components of your migration bubble and their corresponding costs and projected savings will help you calculate payback time and projected ROI With a long history in enabling enterprises to successfully adopt cloud computing Amazon Web Services delivers a mature set of services specifically designed for the unique security compliance privacy and governance requirements of large organizations With a technology platform that is both broad and deep Professional Services and Support organizations robust training programs and an ecosystem tens ofthousands strong AWS can help you move faster and do more With AWS you can:  Take advantage of more services storage options and security controls than any other cloud platform  Deliver on stringent standards with the broadest set of certifications accreditations and controls in the industry  Get deep assistance with our global cloud focused enterprise professional services support and training teams ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 13 of 13 Further Reading For additional help please consult the following sources:  The AWS Cloud Adoption Framework http://d0awsstaticcom/whitepapers/aws_cloud_adoption_frameworkp df Contributors The following individuals and organizations contributed to this document:  Blake Chism Practice Manager AWS Public Sector Sales Var  Carina Veksler Public Sector Solutions AWS Public Sector Sales Var
General
Database_Caching_Strategies_Using_Redis
ArchivedDatabase Caching Strategies Using Redis May 2017 This paper has been archived For the latest technical content see https://docsawsamazoncom/whitepapers/latest/database cachingstrategiesusingredis/welcomehtmlArchived Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2017 Amazon Web Services Inc or its affiliates All rights reserved Archived Contents Database Challenges 1 Types of Database Caching 1 Cach ing Patterns 3 Cache Aside (Lazy Loading) 4 Write Through 5 Cache Validity 6 Evictions 7 Amazon ElastiCache and Self Managed Redis 8 Relational Database Caching Techniques 9 Cache the Database SQL ResultSet 10 Cache Select Fields and Values in a Custom Format 13 Cache Select Fields and Values into an Aggregate Redis Data Structure 14 Cache Serialized Applicati on Object Entities 15 Conclusion 17 Contributors 17 Further Reading 17 Archived Abstract Inmemory data caching can be one of the most effective strategies for improving your overall application performance and reducing your database costs You can apply c aching to any type of database including relational databases such as Amazon Relational Database Service (Amazon RDS) or NoSQL databases such as Amazon DynamoDB MongoDB and Apache Cassandra The best part of caching is that it’s easy to implement and it dramatically improves the speed and scalability of your application This w hitepaper describes some of the caching strategies and implementation approaches that address the limitations and challenges associated with disk based databases ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 1 Database Challenges When you’re building distributed applications that require low latency and scalability disk based databases can pose a number of challenges A few common ones include the following : • Slow processing queries: There are a number of query optimization techniques and schema designs that help boost query performance However the data retrieval speed from disk plus the added query processing times generally put your query response times in double digit millis econd speeds at best This assumes that you ha ve a steady load and your da tabase is performing optimally • Cost to scale: Whether the data is distributed in a disk based NoSQL database or vertically scaled up in a relational database scaling for extremely high reads can be costly It also can require several database read replicas to match what a single in memory cache node can deliver in terms of requests per second • The need to simplify data access: Although relational databases provide an excellent means to data model relationships they aren’t optimal for data access There are instances where your applications may want to access the data in a particular structure or view to simplify data retrieval and increase application performance Before implementing database caching many architects and engine ers spend 
great effort trying to extract as much performance as they can from their database s However there is a limit to the performance that you can achieve with a disk based database and it’s counterproductive to try to solve a problem with the wrong tools For example a large portion of the latency of your database query is dictated by the physics of retrieving data from disk Types of Database Caching A database cache supplements your primary database by removing unnecessary pressure on it typically in the form of frequently accessed read data The cache itself can live in several areas including in your database in the applic ation or as a standalon e layer The following are the three most common types of database caches: ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 2 • Database integrated caches: Some databases such as Amazon Aurora offer an integrated cache that is managed within the database engine and has built in write through capabilities1 The database updates its cache automatically when the underlying data changes Nothing in the application tier is required to use this cache The downside of integrated caches is their size and capabilities Integrated caches are typically limited to the available memory that is allocated to the cache by the database instance and can’t be used for other purposes such as sha ring data with other instances • Local caches: A local cache stores your frequently used data within your application This makes data retrieval faster than other caching architectures because it removes network traffic that is associated with retrieving data A major disadvantage is that amo ng your applications each node has its own resident cache working in a disconnected manner The information that is stored in an individual cache node whether it ’s cached database rows web content or session data can’t be shared with other local cache s This creates challenges in a distributed environment where information sharing is critical to support scalable dynamic environments Because most applications use multiple application servers coordinating the values across them becomes a major challenge if each server has its own cache In addition when outages occur the data in the local cache is lost and must be rehydrated which effectively negat es the cache The majority of these disadv antages are mitigated with remote caches • Remote caches: A remote cache (or “side cache”) is a separate instance (or instances) dedicated for sto ring the cached data in memory Remote caches are stored on dedicated servers and are typically built on key/va lue NoSQL stores such as Redis2 and Memcached 3 They provide hundreds of thousands and up to a million requests per second per cache node Many solutions such as Amazon ElastiCache for Redis also provide the high availability need ed for critical workloads4 ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 3 The average latency of a request to a remote cache is on the sub millisecond timescale which is orders of magnitude faster than a request to a diskbased database At these spe eds local caches are seldom necessary Remote caches are ideal for distributed environment s because they work as a connected cluster that all your disparate systems can use Howev er when network latency is a concern you can apply a two tier caching strategy that uses a local and remote cache together This paper doesn’t describe this strategy in detail but it’s typically used only when needed because of the complexity it adds With remote 
caches, the orchestration between caching the data and managing the validity of the data is handled by your applications and/or the processes that use the cache. The cache itself is not directly connected to the database but is used adjacently to it.

The remainder of this paper focuses on using remote caches, and specifically Amazon ElastiCache for Redis, for caching relational database data.

Caching Patterns

When you are caching data from your database, there are caching patterns for Redis5 and Memcached6 that you can implement, including proactive and reactive approaches. The patterns you choose to implement should be directly related to your caching and application objectives.

Two common approaches are cache-aside, or lazy loading (a reactive approach), and write-through (a proactive approach). A cache-aside cache is updated after the data is requested. A write-through cache is updated immediately when the primary database is updated. With both approaches, the application is essentially managing what data is being cached and for how long.

The following diagram is a typical representation of an architecture that uses a remote distributed cache.

Figure 1: Architecture using remote distributed cache

Cache-Aside (Lazy Loading)

A cache-aside cache is the most common caching strategy available. The fundamental data retrieval logic can be summarized as follows:

1. When your application needs to read data from the database, it checks the cache first to determine whether the data is available.
2. If the data is available (a cache hit), the cached data is returned and the response is issued to the caller.
3. If the data isn't available (a cache miss), the database is queried for the data. The cache is then populated with the data that is retrieved from the database, and the data is returned to the caller.

Figure 2: A cache-aside cache

This approach has a couple of advantages:

• The cache contains only data that the application actually requests, which helps keep the cache size cost-effective.
• Implementing this approach is straightforward and produces immediate performance gains, whether you use an application framework that encapsulates lazy caching or your own custom application logic.

A disadvantage of using cache-aside as the only caching pattern is that, because the data is loaded into the cache only after a cache miss, some overhead is added to the initial response time because additional round trips to the cache and database are needed.
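As a concrete illustration of the steps above, the following minimal sketch implements cache-aside logic with the Jedis client. The key name, the 300-second TTL, and the loadFromDatabase() helper are illustrative placeholders, not part of a prescribed implementation.

import redis.clients.jedis.Jedis;

public class LazyLoadingExample {
    private static final int TTL_SECONDS = 300; // illustrative expiration

    // Returns the cached value when present; otherwise loads it from the
    // database, populates the cache, and returns the freshly loaded value.
    public String getValue(Jedis jedis, String key) {
        String cached = jedis.get(key);           // 1. check the cache first
        if (cached != null) {
            return cached;                        // 2. cache hit
        }
        String value = loadFromDatabase(key);     // 3. cache miss: query the database
        jedis.setex(key, TTL_SECONDS, value);     //    then populate the cache
        return value;
    }

    private String loadFromDatabase(String key) {
        // Placeholder for a JDBC query keyed off the requested value.
        return "value-for-" + key;
    }
}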
Write-Through

A write-through cache reverses the order in which the cache is populated. Instead of lazy-loading the data into the cache after a cache miss, the cache is proactively updated immediately following the primary database update. The fundamental logic can be summarized as follows:

1. The application, batch job, or backend process updates the primary database.
2. Immediately afterward, the data is also updated in the cache.

Figure 3: A write-through cache

The write-through pattern is almost always implemented along with lazy loading. If the application gets a cache miss because the data is not present or has expired, the lazy loading pattern is performed to update the cache.

The write-through approach has a couple of advantages:

• Because the cache is up to date with the primary database, there is a much greater likelihood that the data will be found in the cache. This, in turn, results in better overall application performance and user experience.
• The performance of your database is optimal because fewer database reads are performed.

A disadvantage of the write-through approach is that infrequently requested data is also written to the cache, resulting in a larger and more expensive cache. A proper caching strategy includes effective use of both write-through and lazy loading of your data, and setting an appropriate expiration for the data to keep it relevant and lean.
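The sketch below illustrates the write-through flow with Jedis and JDBC, reusing the CUSTOMERS table referenced later in this paper. The key naming scheme and the one-hour TTL are assumptions made for the example rather than recommendations from this paper.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import redis.clients.jedis.Jedis;

public class WriteThroughExample {
    private static final int TTL_SECONDS = 3600; // illustrative expiration

    // Updates the primary database first, then immediately refreshes the cache
    // so that subsequent reads are served from Redis.
    public void updateEmail(Connection connection, Jedis jedis,
                            String customerId, String email) throws SQLException {
        String sql = "UPDATE CUSTOMERS SET EMAIL = ? WHERE CUSTOMER_ID = ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, email);
            ps.setString(2, customerId);
            ps.executeUpdate();                       // 1. update the primary database
        }
        String key = "customer:email:" + customerId;  // illustrative key naming scheme
        jedis.setex(key, TTL_SECONDS, email);         // 2. write the new value through to the cache
    }
}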
TTL set volatile lru The cache evicts the least recently used (LRU) keys from those that have a TTL set ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 8 Eviction Policy Description volatile lfu The cache evicts the least frequently used (LFU) keys from those th at have a TTL set volatile ttl The cache evicts the keys with the shortest TTL set volatile random The cache randomly evicts keys with a TTL set allkeys random The cache randomly evicts keys regardless of TTL set noeviction The cache doesn’t evict keys at all This blocks future writes until memory frees up A good strategy in selecting an appropriate eviction policy is to consider the data stored in your cluster and the outcome of keys being evicted Generally least recently used ( LRU)based policies are more common for basic caching use cases However depending on your objectives you might want to use a TTL or random based eviction policy that better suits your requirements Also if you are experiencing evictions with your cluster it is usually a sign that you should scale up (that is use a node with a larger memory footprint ) or scale out (that is add more nodes to your cluster) to accommodate the additional data An exce ption to this rule is if you are purposefully relying on the cache engine to manage your keys by means of eviction also referred to an LRU cache 7 Amazon ElastiCache and Self Managed Redis Redis is an open source inmemory data store that has become the most popular key/value engine in the market Much of its popularity is due to its support for a variety of data structures as well as other features including Lua scripting support8 and Pub/Sub messaging capability Other added benefits include high availab ility topologies with support for read replicas and the ability to persist data Amazon ElastiCache offers a fully manage d service for Redis This means that all the administrative tasks associated with managing your Redis cluster including monitoring patching backups and automatic failover are managed ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 9 by Amazon This lets you focus on your business and your data instea d of your operations Other benefits of using Amazon ElastiCache for Redis over self managing your cache environment include the following : • An enhanced Redis engine that is fully compatible with the open source version but that also provides added stabilit y and robustness • Easily modifiable parameters such as eviction policies buffer limits etc • Ability to scale and resize your cluster to terabytes of data • Hardened security that lets you isolate your cluster within Amazon Virtual Private Cloud (Amazon VPC)9 For more information about Redis or Amazon ElastiCache see the Further Reading section at the end of this whitepaper Relational Da tabase Caching Techniques Many of the caching techniques that are described in this section can be applied to any type of database However this paper focuses on relational databases because they are the most common database caching use case The basic paradigm when you query data from a relational database includes executing SQL statements and iterating over the returned ResultSet object cursor to retrieve the database rows There are several techniques you can apply when you want to cache the returned data However it’s best to choose a method that simplifies your data access pattern and/or optimizes the architectur al goals that you have for your application To visualize this we’ll examine snippets of Java code to 
You can find additional information on the AWS caching site.10 The examples use the Jedis Redis client library11 for connecting to Redis, although you can use any Java Redis library, including Lettuce12 and Redisson.13
Assume that you issued the following SQL statement against a customer database for CUSTOMER_ID 1001. We'll examine the various caching strategies that you can use.

SELECT FIRST_NAME, LAST_NAME, EMAIL, CITY, STATE, ADDRESS, COUNTRY
FROM CUSTOMERS
WHERE CUSTOMER_ID = '1001';

The query returns this record, which the application typically reads by iterating over the returned ResultSet:

…
Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery(query);
while (rs.next()) {
    Customer customer = new Customer();
    customer.setFirstName(rs.getString("FIRST_NAME"));
    customer.setLastName(rs.getString("LAST_NAME"));
    // and so on …
}
…

Iterating over the ResultSet cursor lets you retrieve the fields and values from the database rows. From that point, the application can choose where and how to use that data. Let's also assume that your application framework can't be used to abstract your caching implementation. How do you best cache the returned database data? Given this scenario, you have many options. The following sections evaluate some options, with a focus on the caching logic.

Cache the Database SQL ResultSet
Cache a serialized ResultSet object that contains the fetched database row.
• Pro: When data retrieval logic is abstracted (for example, as in a Data Access Object14 or DAO layer), the consuming code expects only a ResultSet object and does not need to be made aware of its origination. A ResultSet object can be iterated over regardless of whether it originated from the database or was deserialized from the cache, which greatly reduces integration logic. This pattern can be applied to any relational database.
• Con: Data retrieval still requires extracting values from the ResultSet object cursor and does not further simplify data access; it only reduces data retrieval latency.

Note: When you cache the row, it's important that it's serializable. The following example uses a CachedRowSet implementation for this purpose. When you are using Redis, this is stored as a byte array value. The following code converts the CachedRowSet object into a byte array and then stores that byte array as a Redis byte array value. The actual SQL statement is stored as the key and converted into bytes.

…
// rs contains the ResultSet, key contains the SQL statement
if (rs != null) {
    // let's write through to the cache
    CachedRowSet cachedRowSet = new CachedRowSetImpl();
    cachedRowSet.populate(rs, 1);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutput out = new ObjectOutputStream(bos);
    out.writeObject(cachedRowSet);
    byte[] redisRSValue = bos.toByteArray();
    jedis.set(key.getBytes(), redisRSValue);
    jedis.expire(key.getBytes(), ttl);
}
…

The nice thing about storing the SQL statement as the key is that it enables a transparent caching abstraction layer that hides the implementation details. The other added benefit is that you don't need to create any additional mappings between a custom key ID and the executed SQL statement. The last statement executes an expire command to apply a TTL to the stored key. This code follows our write-through logic: upon querying the database, the cached value is stored immediately afterward. For lazy caching, you would initially query the cache before executing the query against the database.
To hide the implementation details, use the DAO pattern and expose a generic method for your application to retrieve the data. For example, because your key is the actual SQL statement, your method signature could look like the following:

public ResultSet getResultSet(String key); // key is the SQL statement

The code that calls (consumes) this method expects only a ResultSet object, regardless of what the underlying implementation details are for the interface. Under the hood, the getResultSet method executes a GET command for the SQL key, which, if present, is deserialized and converted into a ResultSet object.

public ResultSet getResultSet(String key) {
    byte[] redisResultSet = null;
    redisResultSet = jedis.get(key.getBytes());
    ResultSet rs = null;
    if (redisResultSet != null) {
        // if a cached value exists, deserialize it and return it
        try {
            CachedRowSet cachedRowSet = new CachedRowSetImpl();
            ByteArrayInputStream bis = new ByteArrayInputStream(redisResultSet);
            ObjectInput in = new ObjectInputStream(bis);
            cachedRowSet.populate((CachedRowSet) in.readObject());
            rs = cachedRowSet;
        }
        …
    } else {
        // get the ResultSet from the database, store it in the rs object, then cache it
        …
    }
    …
    return rs;
}

If the data is not present in the cache, query the database for it and cache it before returning. As mentioned earlier, a best practice would be to apply an appropriate TTL on the keys as well.
For all other caching techniques that we'll review, you should establish a naming convention for your Redis keys. A good naming convention is one that is easily predictable to applications and developers. A hierarchical structure separated by colons is a common naming convention for keys, such as object:type:id.

Cache Select Fields and Values in a Custom Format
Cache a subset of a fetched database row into a custom structure that can be consumed by your applications.
• Pro: This approach is easy to implement. You essentially store specific retrieved fields and values into a structure such as JSON or XML and then SET that structure into a Redis string. The format you choose should be something that conforms to your application's data access pattern.
• Con: Your application is using different types of objects when querying for particular data (for example, Redis string and database results). In addition, you are required to parse through the entire structure to retrieve the individual attributes associated with it.

The following code stores specific customer attributes in a customer JSON object and caches that JSON object into a Redis string:

…
// rs contains the ResultSet
while (rs.next()) {
    Customer customer = new Customer();
    Gson gson = new Gson();
    JsonObject customerJSON = new JsonObject();
    customer.setFirstName(rs.getString("FIRST_NAME"));
    customerJSON.add("first_name", gson.toJsonTree(customer.getFirstName()));
    customer.setLastName(rs.getString("LAST_NAME"));
    customerJSON.add("last_name", gson.toJsonTree(customer.getLastName()));
    // and so on …
    jedis.set("customer:id:" + customer.getCustomerID(), customerJSON.toString());
}
…

For data retrieval, you can implement a generic method through an interface that accepts a customer key (for example, customer:id:1001) and an SQL statement string argument. It will also return whatever structure your application requires (for example, JSON or XML) and abstract the underlying details. Upon initial request, the application executes a GET command on the customer key and, if the value is present, returns it and completes the call. If the value is not present, it queries the database for the record, writes through a JSON representation of the data to the cache, and returns it.
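The original example shows only the write path for this technique. As a rough illustration of the corresponding read path, the following sketch (not part of the original example) uses the same Jedis and Gson libraries; the method name getCustomerJson, the helper queryCustomerFromDatabase, and the one-hour TTL are assumptions made for this illustration.

// Illustrative lazy-loading read path for the custom JSON format (sketch, names assumed).
public String getCustomerJson(String key, String sql) {
    // 1. Check the cache first, using the customer key (for example, customer:id:1001).
    String cachedJson = jedis.get(key);
    if (cachedJson != null) {
        return cachedJson; // cache hit: return the stored JSON string as-is
    }
    // 2. Cache miss: run the SQL statement against the database (helper method assumed).
    Customer customer = queryCustomerFromDatabase(sql);
    // 3. Build the JSON representation, as in the write example above.
    Gson gson = new Gson();
    JsonObject customerJSON = new JsonObject();
    customerJSON.add("first_name", gson.toJsonTree(customer.getFirstName()));
    customerJSON.add("last_name", gson.toJsonTree(customer.getLastName()));
    // and so on for the remaining fields …
    // 4. Write through to the cache with a TTL, then return to the caller.
    String json = customerJSON.toString();
    jedis.set(key, json);
    jedis.expire(key, 3600); // TTL in seconds; choose a value appropriate for your data
    return json;
}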
Cache Select Fields and Values into an Aggregate Redis Data Structure
Cache the fetched database row into a specific data structure that can simplify the application's data access.
• Pro: When converting the ResultSet object into a format that simplifies access, such as a Redis Hash, your application is able to use that data more effectively. This technique simplifies your data access pattern by reducing the need to iterate over a ResultSet object or to parse a structure like a JSON object stored in a string. In addition, working with aggregate data structures such as Redis Lists, Sets, and Hashes provides various attribute-level commands associated with setting and getting data, eliminating the overhead associated with processing the data before being able to leverage it.
• Con: Your application is using different types of objects when querying for particular data (for example, Redis Hash and database results).

The following code creates a HashMap object that is used to store the customer data. The map is populated with the database data and SET into a Redis hash:

…
// rs contains the ResultSet
while (rs.next()) {
    Customer customer = new Customer();
    Map<String, String> map = new HashMap<String, String>();
    customer.setFirstName(rs.getString("FIRST_NAME"));
    map.put("firstName", customer.getFirstName());
    customer.setLastName(rs.getString("LAST_NAME"));
    map.put("lastName", customer.getLastName());
    // and so on …
    jedis.hmset("customer:id:" + customer.getCustomerID(), map);
}
…

For data retrieval, you can implement a generic method through an interface that accepts a customer ID (the key) and an SQL statement argument. It returns a HashMap to the caller. Just as in the other examples, you can hide the details of where the map is originating from. First, your application can query the cache for the customer data using the customer ID key. If the data is not present, the SQL statement executes and retrieves the data from the database. Upon retrieval, you may also store a hash representation of that customer in the cache to lazy load it on later requests.
Unlike JSON, the added benefit of storing your data as a hash in Redis is that you can query for individual attributes within it. Say that for a given request you only want to respond with specific attributes associated with the customer hash, such as the customer name and address. This flexibility is supported in Redis, along with various other features, such as adding and deleting individual attributes in a map.
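To make the attribute-level access concrete, here is a short sketch (an illustration added to this discussion, not part of the original example) that reads and updates individual fields of the cached hash with the Jedis client; the field names and values shown are assumptions.

// Illustrative attribute-level access against the cached customer hash.
String key = "customer:id:1001";

// Fetch only the fields needed for this request; a null result indicates a cache miss.
String firstName = jedis.hget(key, "firstName");
String lastName = jedis.hget(key, "lastName");

// Or fetch the whole record as a map when every attribute is needed.
Map<String, String> customerMap = jedis.hgetAll(key);

// Individual attributes can also be added, updated, or removed without rewriting the hash.
jedis.hset(key, "city", "Seattle");  // add or update a single field (sample value)
jedis.hdel(key, "middleName");       // delete a single field (sample field name)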
Cache Serialized Application Object Entities
Cache the fetched database data as a serialized application object that can be consumed directly by your applications.
• Pro: Use application objects in their native application state with simple serializing and deserializing techniques. This can rapidly accelerate application performance by minimizing data transformation logic.
• Con: Advanced application development use case.

The following code converts the customer object into a byte array and then stores that value in Redis:

…
// key contains the customer id
Customer customer = (Customer) object;
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutput out = null;
try {
    out = new ObjectOutputStream(bos);
    out.writeObject(customer);
    out.flush();
    byte[] objectValue = bos.toByteArray();
    jedis.set(key.getBytes(), objectValue);
    jedis.expire(key.getBytes(), ttl);
}
…

The key identifier is also stored as a byte representation and can be represented in the customer:id:1001 format. As the other examples show, you can create a generic method through an application interface that hides the underlying method details. In this example, when instantiating an object or hydrating one with state, the method accepts the customer ID (the key) and either returns a customer object from the cache or constructs one after querying the backend database. First, your application queries the cache for the serialized customer object using the customer ID. If the data is not present, the SQL statement executes, and the application consumes the data, hydrates the customer entity object, and then lazy loads the serialized representation of it in the cache.

public Customer getObject(String key) {
    Customer customer = null;
    byte[] redisObject = null;
    redisObject = jedis.get(key.getBytes());
    if (redisObject != null) {
        try {
            ByteArrayInputStream in = new ByteArrayInputStream(redisObject);
            ObjectInputStream is = new ObjectInputStream(in);
            customer = (Customer) is.readObject();
        }
        …
    }
    …
    return customer;
}

Conclusion
Modern applications can't afford poor performance. Today's users have low tolerance for slow-running applications and poor user experiences. When low latency and scaling databases are critical to the success of your applications, it's imperative that you use database caching. Amazon ElastiCache provides two managed in-memory key/value stores that you can use for database caching. A managed service further simplifies using a cache in that it removes the administrative tasks associated with supporting it.

Contributors
The following individuals and organizations contributed to this document:
• Michael Labib, Specialist Solutions Architect, AWS

Further Reading
For more information, see the following resources:
• Performance at Scale with Amazon ElastiCache (AWS whitepaper)15
• Full Redis command list16

Notes
1 https://aws.amazon.com/rds/aurora/
2 https://redis.io/download
3 https://memcached.org/
4 https://aws.amazon.com/elasticache/redis/
5 https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Strategies.html
6 https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html
7 https://redis.io/topics/lru-cache
8 https://www.lua.org/
9 https://aws.amazon.com/vpc/
10 https://aws.amazon.com/caching/
11 https://github.com/xetorthio/jedis
12 https://github.com/wg/lettuce
13 https://github.com/redisson/redisson
14 http://www.oracle.com/technetwork/java/dataaccessobject-138824.html
15 https://d0.awsstatic.com/whitepapers/performance-at-scale-with-amazon-elasticache.pdf
16 https://redis.io/commands
General
Encrypting_File_Data_with_Amazon_Elastic_File_System
ArchivedEncrypting File Data with Amazon Elastic File System Encryption of Data at Rest and in Transit April 2018 This paper has been archived For the most recent version of this paper see https://docsawsamazoncom/whitepapers/latest/ efsencryptedfilesystems/efsencryptedfile systemshtmlArchived© 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AW S agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Basic Concepts and Terminology 1 Encryption of Data at Rest 3 Managing Keys 3 Creating an Encrypted File System 4 Using an Encrypted File System 7 Enforcing Encryption of Data at Rest 7 Detecting Unencrypted File Systems 7 Encryption of Data in Transit 10 Setting up Encryption of Data in Transit 10 Using Encryption of Data in Transit 12 Conclusion 13 Contributors 13 Further Reading 13 Document Revisions 13 ArchivedAbstract In today’s world of cybercrime hacking attacks and the occasional security breach securing data has become increasingly important to organizations Government regulations and industry or company compliance policies may require data of different classifications to be secured by using proven encryption policies cryptographic algorithms and proper key management This paper outlines best practices for encrypting shared file systems on AWS using Amazon Elastic File System ( Amazon EFS) ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 1 Introduction Amazon Elastic File System ( Amazon EFS)1 provides simple scalable highly available and highly durable shared file system s in the cloud The file systems you create using Amazon EFS are elastic allowing them to grow and shrink automatically as you add and remove data They can grow to petabytes in size distributing data across an unconstrained number of storage servers in multiple Availability Zones Data stored in these file systems can be encrypted at rest and in transit using Amazon EFS For encryption of data at re st you can create encrypted file systems through the AWS Management Console or the AWS Command Line Interface ( AWS CLI ) Or you can create encrypted file systems programmatically through the Amazon EFS API or one of the AWS SDK s Amazon EFS integrates with AWS Key Management Service ( AWS KMS)2 for key management You can also enable encryption of data in transit by mounting the file system and transferring all NFS traffic over an encrypted Transport Layer Security (TLS) tunnel This paper outlines best practices for encrypting shared file systems on AWS using Amazon EFS It describes how to enable encryption of data in transit at the client connection layer and how to create an encrypted file system in the AWS Management Console and in the AWS CLI Using the APIs and SDKs to create an encrypted file system is outside 
the scope of this paper but you can learn more about how this is done by readin g Amazon EFS API in the Amazon EFS User Guide3 or the SDK documentation4 Basic Concepts and Terminology This section defines concepts and terminology referenced in this whitepaper • Amazon Elastic File System (Amazon EFS ) – A highly available and highly durable service that provides simple scalable shared file storage in the AWS C loud Amazon EFS provides a standard file system interface and file system semantics You can store virtually an unlimited amount of data across an unconstrained number of storage servers in multiple Availability Zones • AWS Identity and Access Management (IAM) 5 – A service that enables you to securely co ntrol fine grained access to AWS service APIs Policies are created and used to limit access to individual users groups and roles You can manage your AWS KMS keys t hrough the IAM console ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 2 • AWS KMS – A managed service that makes it easy for you to create and manage the encryption keys used to encrypt your data It is fully integrated with AWS CloudTrail to provide logs of API calls made by AWS KMS on your behalf to help meet compliance or regulatory requirements • Customer master key (CMK) – Represents the top of your key hierarchy It contains key material to encrypt and decrypt data AWS KMS can generate this key material or you can generate it and then import it into AWS KMS CMKs are specific to an AWS account and AWS Region and can be customer managed or AWS managed o AWS managed CMK – A CMK that is generated by AWS on your behalf An AWS managed CMK is created when you enable encryption for a resource of an integrated AWS service AWS managed CMK key policies are managed by AWS and you cannot change th em There is no charge for the creation or storage of AWS managed CMKs o Customer managed CMK – A CMK you create by using the AWS Management Console or API AWS CLI or SDKs You can use a customer managed CMK when you need more granular control over the CM K • KMS permissions – Permissions that control a ccess to a customer managed CMK These permissions are defined using the key policy or a combination of IAM policies and the key policy For more information see Overview of Managing Access in the AWS KMS Developer Guide6 • Data keys – Cryptographic keys generated by AWS KMS to encrypt data outside of AWS KMS AWS KMS allows authorized entities to obtain data keys protected by a CMK • Transport Layer Security ( TLS formerly called Secure Sockets Layer [SSL]) – Cryptographic protocols essential for encrypting information that is exchanged over the wire • EFS mount helper – A Linux client agent (amazon efsutils) used to simplify the mounting of EFS file systems It can be used to setup maintain and route all NFS traffic over a TLS tunnel ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 3 For more information about basic concepts and terminology see AWS Key Management Service Concepts in the AWS KMS Developer Guide7 Encryption of Data at Rest You can create an encrypted file system so all your data and metadata is encrypted at rest usi ng an industry standard AES 256 encryption algorithm Encryption and decryption is handled automatically and transparently so you don’t have to modify your applications If your organization is subject to corporate or regulatory policies that require encryption of data and metadata at rest we recommend creating an encrypted file system 
Managing Keys Amazon EFS is integrated with AWS KMS which manages the encryption keys for encrypted file systems AWS KMS also supports encryption by other AWS services such as Amazon Simple Storage Service ( Amazon S3 ) Amazon Elastic Block Store ( Amazon EBS ) Amazon Relational Database Service ( Amazon RDS ) Amazon Aurora Amazon Redshift Amazon WorkMail Amazon WorkSpaces etc To encrypt file system contents Amazon EFS uses the Advanced Encryption Standard algorithm with XTS Mode and a 256 bit key (XTS AES 256) There are three important questions to answer when considering how to secu re data at rest by adopting any encryption policy These questions are equally valid for data stored in managed and unmanaged services Where are keys stored? AWS KMS stores your master keys in highly durable storage in an encrypted format to help ensure that they can be retrieved when needed Where are keys used? Using an encrypted Amazon EFS file system is transparent to clients mounting the file system All cryptographic operations occur within the EFS service as data is encrypted before it is written to disk and decrypted after a client issues a read request ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 4 Who can use the keys? AWS KMS key policies control access to encryption keys You can combine them with IAM policies to provide another layer of control Each key has a key policy If the key is a n AWS managed CMK AWS manages the key policy If the key is a customer managed CMK you manage the key policy These key policies are the primary way to control access to CMKs They define the permissions that govern the use and management of key s When you create an encr ypted file system you grant the EFS service access to use the CMK on your behalf The calls that Amazon EFS makes to AWS KMS on your behalf appear in your CloudTrail logs as though they originated from your AWS account For more information about AWS KMS and how to manage access to encryption keys see Overview of Managing Access to Your AWS KMS Resources in the AWS KMS Developer Guide8 For more information about how AWS KMS manages cryptography see the AWS KMS Cryptographic Details whitepaper 9 For more information about how to create an administrator IAM user and group see Creating Your First IAM Admin User and Group in the IAM User Guide 10 Creating an Encrypted File S ystem You can create an encrypted file system using the AWS Management Console AWS CLI Amazon EFS API or AWS SDKs You can only enable encryption for a file system when you create it Amazon EFS integrates with AWS KMS for key management and uses a CMK to encrypt the file system File system metadata such as file names directory names and directory contents are encrypted and decrypted using an EFS managed key The contents of your files or file data is encrypted and decrypted using a CMK that you choose The CMK can be one of thre e types : • An AWS managed CMK for Amazon EFS • A customer managed CMK from your AWS account • A customer managed CMK from a different AWS account ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 5 All users have an AWS mana ged CMK for Amazon EFS whose alias is aws/elasticfilesystem AWS manages this CMK ’s key policy and you cannot change it There is no cost for creating and storing AWS managed CMKs If you decide to use a customer managed CMK to encrypt your file system select the key alias of the customer managed CMK that you own or enter the Amazon Resource Name ( ARN ) of a customer 
managed CMK that is owned by a different account. With a customer managed CMK that you own, you control which users and services can use the key through key policies and key grants. You also control the lifespan and rotation of these keys by choosing when to disable, re-enable, delete, or revoke access to them. AWS KMS charges a fee for creating and storing customer managed CMKs.
For information about managing access to keys in other AWS accounts, see Allowing External AWS Accounts to Access a CMK in the AWS KMS Developer Guide.11 For more information about how to manage customer managed CMKs, see AWS Key Management Service Concepts in the AWS KMS Developer Guide.12
The following sections discuss how to create an encrypted file system using the AWS Management Console and using the AWS CLI.

Creating an Encrypted File System Using the AWS Management Console
To create an encrypted Amazon EFS file system using the AWS Management Console, follow these steps.
1. On the Amazon EFS console, select Create file system to open the file system creation wizard.
2. For Step 1: Configure file system access, choose your VPC, create your mount targets, and then choose Next Step.
3. For Step 2: Configure optional settings, add any tags, choose your performance mode, select the box to enable encryption for your file system, select a KMS master key, and then choose Next Step.

Figure 1: Enabling encryption through the AWS Management Console

4. For Step 3: Review and create, review your settings and choose Create File System.

Creating an Encrypted File System Using the AWS CLI
When you use the AWS CLI to create an encrypted file system, you use additional parameters to set the encryption status and the customer managed CMK. Be sure you are using the latest version of the AWS CLI. For information about how to upgrade your AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.13
In the CreateFileSystem operation, the --encrypted parameter is a Boolean and is required for creating encrypted file systems. The --kms-key-id parameter is required only when you use a customer managed CMK, and you include the key's alias or ARN. Do not include this parameter if you're using the AWS managed CMK.

$ aws efs create-file-system \
--creation-token $(uuidgen) \
--performance-mode generalPurpose \
--encrypted \
--kms-key-id alias/customer-managed-CMK-alias

For more information about creating Amazon EFS file systems using the AWS Management Console, AWS CLI, AWS SDKs, or Amazon EFS API, see the Amazon EFS User Guide.14
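The console and AWS CLI procedures above cover the common cases. For completeness, the same call can be made programmatically; the following is a minimal sketch using the AWS SDK for Java (version 1.x), where the client and request class names follow the SDK's generated EFS client and the key alias is a placeholder, so treat it as an illustration rather than a definitive implementation.

import com.amazonaws.services.elasticfilesystem.AmazonElasticFileSystem;
import com.amazonaws.services.elasticfilesystem.AmazonElasticFileSystemClientBuilder;
import com.amazonaws.services.elasticfilesystem.model.CreateFileSystemRequest;
import com.amazonaws.services.elasticfilesystem.model.CreateFileSystemResult;
import java.util.UUID;

public class CreateEncryptedFileSystem {
    public static void main(String[] args) {
        // Uses the default credential provider chain and Region configuration.
        AmazonElasticFileSystem efs = AmazonElasticFileSystemClientBuilder.defaultClient();

        CreateFileSystemRequest request = new CreateFileSystemRequest()
                .withCreationToken(UUID.randomUUID().toString()) // idempotency token
                .withPerformanceMode("generalPurpose")
                .withEncrypted(true)                 // encryption can only be enabled at creation time
                .withKmsKeyId("alias/customer-managed-CMK-alias"); // omit to use the AWS managed CMK

        CreateFileSystemResult result = efs.createFileSystem(request);
        System.out.println("Created file system: " + result.getFileSystemId());
    }
}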
Using an Encrypted File System
Encryption has minimal effect on I/O latency and throughput. Encryption and decryption are transparent to users, applications, and services. All data and metadata is encrypted by Amazon EFS on your behalf before it is written to disk and is decrypted before it is read by clients. You don't need to change client tools, applications, or services to access an encrypted file system.

Enforcing Encryption of Data at Rest
Your organization might require the encryption of all data that meets a specific classification or is associated with a particular application, workload, or environment. You can enforce data encryption policies for Amazon EFS file systems by using detective controls that detect the creation of a file system and verify that encryption is enabled. If an unencrypted file system is detected, you can respond in a number of ways, ranging from deleting the file system and mount targets to notifying an administrator. Be aware that if you want to delete the unencrypted file system but want to retain the data, you should first create a new encrypted file system. Next, you should copy the data over to the new encrypted file system. After the data is copied over, you can delete the unencrypted file system.

Detecting Unencrypted File Systems
You can create an Amazon CloudWatch alarm to monitor CloudTrail logs for the CreateFileSystem event and trigger an alarm to notify an administrator if the file system that was created was unencrypted.

Creating a Metric Filter
To create a CloudWatch alarm that is triggered when an unencrypted Amazon EFS file system is created, follow this procedure. You must have an existing trail created that is sending CloudTrail logs to a CloudWatch Logs log group. For more information, see Sending Events to CloudWatch Logs in the AWS CloudTrail User Guide.15
1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/
2. In the navigation pane, choose Logs.
3. In the list of log groups, choose the log group that you created for CloudTrail log events.
4. Choose Create Metric Filter.
5. On the Define Logs Metric Filter page, choose Filter Pattern and then type the following:
{ ($.eventName = CreateFileSystem) && ($.responseElements.encrypted IS FALSE) }
6. Choose Assign Metric.
7. For Filter Name, type UnencryptedFileSystemCreated.
8. For Metric Namespace, type CloudTrailMetrics.
9. For Metric Name, type UnencryptedFileSystemCreatedEventCount.
10. Choose Show advanced metric settings.
11. For Metric Value, type 1.
12. Choose Create Filter.

Creating an Alarm
After you create the metric filter, follow this procedure to create an alarm.
1. On the Filters for Log_Group_Name page, next to the UnencryptedFileSystemCreated filter name, choose Create Alarm.
2. On the Create Alarm page, set the parameters shown in Figure 2.

Figure 2: Create a CloudWatch alarm

3. Choose Create Alarm.

Testing the Alarm for Unencrypted File System Created
You can test the alarm by creating an unencrypted file system as follows.
1. Open the Amazon EFS console at https://console.aws.amazon.com/efs
2. Choose Create File System.
3. From the VPC list, choose your default VPC.
4. Select the check boxes for all the Availability Zones. Be sure that they all have the default subnets, automatic IP addresses, and the default security groups chosen. These are your mount targets.
5. Choose Next Step.
6. Name your file system and keep Enable encryption unchecked to create an unencrypted file system.
7. Choose Next Step.
8. Choose Create File System.
Your trail logs the CreateFileSystem operation and delivers the event to your CloudWatch Logs log group. The event triggers your metric alarm, and CloudWatch Logs sends you a notification about the change.
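The alarm above reacts to new CreateFileSystem events as they occur. As a complementary detective control, you could also periodically scan the file systems that already exist in an account. The following sketch is an illustration using the AWS SDK for Java (the class and method names follow the SDK's generated EFS client, and the response action is left to you), not a prescribed implementation.

import com.amazonaws.services.elasticfilesystem.AmazonElasticFileSystem;
import com.amazonaws.services.elasticfilesystem.AmazonElasticFileSystemClientBuilder;
import com.amazonaws.services.elasticfilesystem.model.DescribeFileSystemsRequest;
import com.amazonaws.services.elasticfilesystem.model.DescribeFileSystemsResult;
import com.amazonaws.services.elasticfilesystem.model.FileSystemDescription;

public class FindUnencryptedFileSystems {
    public static void main(String[] args) {
        AmazonElasticFileSystem efs = AmazonElasticFileSystemClientBuilder.defaultClient();

        String marker = null;
        do {
            // Page through all file systems in the current account and Region.
            DescribeFileSystemsResult page =
                    efs.describeFileSystems(new DescribeFileSystemsRequest().withMarker(marker));
            for (FileSystemDescription fs : page.getFileSystems()) {
                if (!Boolean.TRUE.equals(fs.getEncrypted())) {
                    // Respond as your policy requires: notify an administrator, open a ticket,
                    // or plan a migration to a new encrypted file system.
                    System.out.println("Unencrypted file system found: " + fs.getFileSystemId());
                }
            }
            marker = page.getNextMarker();
        } while (marker != null);
    }
}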
Encryption of Data in Transit
You can mount a file system so all NFS traffic is encrypted in transit using Transport Layer Security 1.2 (TLS, formerly called Secure Sockets Layer [SSL]) with an industry-standard AES-256 cipher. TLS is a set of industry-standard cryptographic protocols used for encrypting information that is exchanged over the wire. AES-256 is a 256-bit encryption cipher used for data transmission in TLS. If your organization is subject to corporate or regulatory policies that require encryption of data and metadata in transit, we recommend setting up encryption of data in transit on every client accessing the file system.

Setting up Encryption of Data in Transit
The recommended method to set up encryption of data in transit is to download the EFS mount helper on each client. The EFS mount helper is an open source utility that AWS provides to simplify using EFS, including setting up encryption of data in transit. The mount helper uses the EFS recommended mount options by default.
1. Install the EFS mount helper.
• Amazon Linux: sudo yum install -y amazon-efs-utils
• Other Linux distributions: download from GitHub (https://github.com/aws/efs-utils) and install.
• Supported Linux distributions:
o Amazon Linux 2017.09+
o Amazon Linux 2+
o Debian 9+
o Red Hat Enterprise Linux / CentOS 7+
o Ubuntu 16.04+
• The amazon-efs-utils package automatically installs the following dependencies:
o NFS client (nfs-utils)
o Network relay (stunnel)
o Python
2. Mount the file system: sudo mount -t efs -o tls file-system-id efs-mount-point
• mount -t efs invokes the EFS mount helper.
• Using the DNS name of the file system or the IP address of a mount target is not supported when mounting using the EFS mount helper; use the file system ID instead.
• The EFS mount helper uses the AWS recommended mount options by default. Overriding these default mount options is not recommended, but we provide the flexibility to do so when the occasion arises. We recommend thoroughly testing any mount option overrides so you understand how these changes impact file system access and performance.
• Below are the default mount options used by the EFS mount helper:
o nfsvers=4.1
o rsize=1048576
o wsize=1048576
o hard
o timeo=600
o retrans=2
3. Use the /etc/fstab file to automatically remount your file system after any system restart.
• Add the following line to /etc/fstab:
file-system-id efs-mount-point efs _netdev,tls 0 0

Using Encryption of Data in Transit
If your organization is subject to corporate or regulatory policies that require encryption of data in transit, we recommend using encryption of data in transit on every client accessing the file system. Encryption and decryption is configured at the connection level and adds another layer of security.
Mounting the file system using the EFS mount helper sets up and maintains a TLS 1.2 tunnel between the client and the Amazon EFS service, and routes all NFS traffic over this encrypted tunnel. The certificate used to establish the encrypted TLS connection is signed by the Amazon Certificate Authority (CA) and trusted by most modern Linux distributions. The EFS mount helper also spawns a watchdog process to monitor all secure tunnels to each file system and ensure they are running. After using the EFS mount helper to establish encrypted connections to Amazon EFS, no other user input or configuration is required. Encryption is transparent to user connections and applications accessing the file system.
After successfully mounting and establishing an encrypted connection to an EFS file system using the EFS mount helper, the output of a mount command shows that the file system is mounted and that an encrypted tunnel has been established using localhost (127.0.0.1) as the network relay. See the sample output below.
127.0.0.1:/ on efs-mount-point type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=20059,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)

To map an efs-mount-point to an EFS file system, query the mount.log file in /var/log/amazon/efs and find the last successful mount operation. This can be done using a simple grep command like the one below.

grep -E "Successfully mounted.* efs-mount-point" /var/log/amazon/efs/mount.log | tail -1

The output of this grep command returns the DNS name of the mounted EFS file system. See the sample output below.

2018-03-15 07:03:42,363 - INFO - Successfully mounted file-system-id.efs.region.amazonaws.com at efs-mount-point

Conclusion
Amazon EFS file system data can be encrypted at rest and in transit. You can encrypt data at rest by using CMKs that you can control and manage using AWS KMS. Creating an encrypted file system is as simple as selecting a check box in the Amazon EFS file system creation wizard in the AWS Management Console, or adding a single parameter to the CreateFileSystem operation in the AWS CLI, AWS SDKs, or Amazon EFS API. Using an encrypted file system is also transparent to services, applications, and users, with minimal effect on the file system's performance.
You can encrypt data in transit by using the EFS mount helper to establish an encrypted TLS tunnel on each client, encrypting all NFS traffic between the client and the mounted EFS file system.
Encryption of both data at rest and in transit is available to you at no additional cost.

Contributors
The following individuals and organizations contributed to this document:
• Darryl S. Osborne, storage specialist solutions architect, AWS
• Joseph Travaglini, sr. product manager, Amazon EFS

Further Reading
For additional information, see the following:
• AWS KMS Cryptographic Details Whitepaper16
• Amazon EFS User Guide17

Document Revisions
Date: Description
April 2018: Added encryption of data in transit
September 2017: First publication

Notes
1 https://aws.amazon.com/efs/
2 https://aws.amazon.com/kms/
3 https://docs.aws.amazon.com/efs/latest/ug/API_CreateFileSystem.html
4 https://aws.amazon.com/tools/#sdk
5 https://aws.amazon.com/iam/
6 https://docs.aws.amazon.com/kms/latest/developerguide/control-access-overview.html
7 https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html
8 https://docs.aws.amazon.com/kms/latest/developerguide/control-access-overview.html#managing-access
9 https://d0.awsstatic.com/whitepapers/KMS-Cryptographic-Details.pdf
10 https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html
11 https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-external-accounts
12 https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys
13 https://docs.aws.amazon.com/cli/latest/userguide/installing.html
14 https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
15 https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html
16 https://aws.amazon.com/whitepapers/
17 https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
General
AWS_Security_Checklist
AWS Security Checklist
This checklist provides customer recommendations that align with the Well-Architected Framework Security Pillar.

Identity & Access Management
1. Secure your AWS account. Use AWS Organizations to manage your accounts, use the root user by exception with multi-factor authentication (MFA) enabled, and configure account contacts.
2. Rely on a centralized identity provider. Centralize identities using either AWS Single Sign-On or a third-party provider to avoid routinely creating IAM users or using long-term access keys; this approach makes it easier to manage multiple AWS accounts and federated applications.
3. Use multiple AWS accounts to separate workloads and workload stages such as production and non-production. Multiple AWS accounts allow you to separate data and resources and enable the use of Service Control Policies to implement guardrails. AWS Control Tower can help you easily set up and govern a multi-account AWS environment.
4. Store and use secrets securely. Where you cannot use temporary credentials, like tokens from AWS Security Token Service, store your secrets, like database passwords, using AWS Secrets Manager, which handles encryption, rotation, and access control.

Detection
1. Enable foundational services: AWS CloudTrail, Amazon GuardDuty, and AWS Security Hub. For all your AWS accounts, configure CloudTrail to log API activity, use GuardDuty for continuous monitoring, and use AWS Security Hub for a comprehensive view of your security posture.
2. Configure service and application-level logging. In addition to your application logs, enable logging at the service level, such as Amazon VPC Flow Logs and Amazon S3, CloudTrail, and Elastic Load Balancer access logging, to gain visibility into events. Configure logs to flow to a central account, and protect them from manipulation or deletion.
3. Configure monitoring and alerts, and investigate events. Enable AWS Config to track the history of resources, and Config Managed Rules to automatically alert or remediate on undesired changes. For all your sources of logs and events, from AWS CloudTrail to Amazon GuardDuty and your application logs, configure alerts for high-priority events and investigate.

Infrastructure Protection
1. Patch your operating system, applications, and code. Use AWS Systems Manager Patch Manager to automate the patching process of all systems and code for which you are responsible, including your OS, applications, and code dependencies.
2. Implement distributed denial-of-service (DDoS) protection for your internet-facing resources. Use Amazon CloudFront, AWS WAF, and AWS Shield to provide layer 7 and layer 3/layer 4 DDoS protection.
3. Control access using VPC Security Groups and subnet layers. Use security groups for controlling inbound and outbound traffic, and automatically apply rules for both security groups and WAFs using AWS Firewall Manager. Group different resources into different subnets to create routing layers; for example, database resources do not need a route to the internet.

Data Protection
1. Protect data at rest. Use AWS Key Management Service (KMS) to protect data at rest across a wide range of AWS services and your applications. Enable default encryption for Amazon EBS volumes and Amazon S3 buckets.
2. Encrypt data in transit. Enable encryption for all network traffic, including Transport Layer Security (TLS) for web-based network infrastructure you control, using AWS Certificate Manager to manage and provision certificates.
3. Use mechanisms to keep people away from data. Keep all users away from directly accessing
sensitive data and systems. For example, provide an Amazon QuickSight dashboard to business users instead of direct access to a database, and perform actions at a distance using AWS Systems Manager automation documents and Run Command.

Incident Response
1. Ensure you have an incident response (IR) plan. Begin your IR plan by building runbooks to respond to unexpected events in your workload. For details, see the AWS Security Incident Response Guide.
2. Make sure that someone is notified to take action on critical findings. Begin with GuardDuty findings. Turn on GuardDuty and ensure that someone with the ability to take action receives the notifications. Automatically creating trouble tickets is the best way to ensure that GuardDuty findings are integrated with your operational processes.
3. Practice responding to events. Simulate and practice incident response by running regular game days, incorporating the lessons learned into your incident management plans, and continuously improving them.
For more best practices, see the Security Pillar of the Well-Architected Framework and the Security Documentation.

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
General
Amdocs_Optima_Digital_Customer_Management_and_Commerce_Platform_in_the_AWS_Cloud
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmdocs Digital Brand Experience Platform in AWS Cloud First Published February 2018 Updated November 18 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlContents Introduction 1 BSS applications are mission critical workloads 2 Amdocs BSS portfolio 3 Amdocs Digital Brand Experience Suite overview 3 Functional capabilities 4 Functional architecture 8 Data management 11 Digital Brand Experience Suite deployment architecture 13 Technical architecture 13 Digital Brand Experience Suite SaaS model 19 AWS Well Architected Framework 21 Conclusion 24 Contributors 24 Further reading 25 Document revisions 25 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAbstract Amdocs Digital Brand Experience Suite is a digital customer management and commerce platform designed to rapidly and securely monetize any product or service Serving innovative communications operators utilities and other subscription based service providers Digital Brand Experience Suite ’s open platform has been available onpremises but is now also available on the AWS Cloud This whitepaper provides an architectural overview of how the Digital Brand Experience Suite business support systems (BSS) solution operates on the AWS Cloud The document is written for executive s architect s and development teams that want to deploy a business support solution for their consumer or enterprise business on the AWS Cloud This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 1 Introduction Amdocs provides the Amdocs Digital Brand Experience Suite: a digital customer management commerce and monetization software as a service ( SaaS ) solution designed specifically for the needs of digital brands and other small service providers who need to provide digital experience to their customers while being agile innovative and with rapid time to market The Amdocs solution helps these commun 
ications service provider s (CSPs) to focus on their business by simplifying their business support through prebuilt packages of business and technical processes spanning the full customer lifecycle: care commerce ordering and monetization Provided as a service the solution is ready to support simple models with minimal time to market including integrations to key external partners and an extensive set of application programming interface s (APIs ) More complex business models can be configured in the s ystem and integrations within bespoke ecosystems are supported through the open API architecture The enterprise market in particular involves unique challenges that require an industry proven solution Service providers focusing on the enterprise and sma ll and medium sized enterprise (SME) business segments can deliver a significant increase in revenue and market share However when trying to perform an enterprise business strategy many operators find they lack the required capability to support the continuous demand for their corporate services They find that their BSS platforms lack business flexibility and operational efficiency and are not cost effective Key challenges include : underperforming systems the high cost of managing legacy operation s and maintaining regulatory compliance Many companies need to adopt a pan Regional architecture to onboard additional countries Regions customer verticals and products This situation demands a significant change in both revenue and customer manageme nt systems as well as in the IT environment This whitepaper provides an overview of the Amdocs Digital Brand Experience platform and a reference architecture for deploying Amdocs on AWS This whitepaper also discusses the benefits of running the platform on AWS and various use cases By running Amdocs Digital Brand Experience on the AWS Cloud and especially delivered as SaaS the Amdocs platform can deliver significant required improvements to the operations and capabilities of customers in every indust ry while enabling future growth and expansion to new domains Customers can also benefit from the compliance and security credentials of the AWS Cloud instead of incurring an ongoing cost of audits related to storing customer data This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 2 BSS applications are mi ssion critical workloads BSS are the backbone of a service provider’s customer facing strategy BSS encompasses the spectrum from marketing shopping ordering charging taxation invoicing payments collection dunning and ultimately financial reporting There are four primary domains : product management order management revenue management and customer management Product management Product management supports the sellable entities or catalog of a provider From conception to sale to revenue recognition this is the toolset for managing services products pricing discounts and many other attributes of the product lifecycle Order management Order management is an extension of the sales process and encompasses four areas: order decomposition order orchestration order fallout and order status management Ordering may be synchronous where service is enabled in real time Or the actual service delivery may take days with complex installation processes It is incumbent on the BSS to accurat ely and 
efficiently process ing orders avoiding fallout s while providing status both to the service provider and the customer Revenue management Revenue management focuses on the financial aspects of the business both from the customer and service provi der perspective It includes pricing charging and discounting those feeds into the invoicing process and taxing The invoice in turn feeds the accounts receivable processes —payment collection and dunning —and becomes the foundation for revenue recognition reporting ( general ledger) C onsumer billing for consumer enterprise and wholesale services as well as prepaid and postpaid models are supported in the system Revenue management also include s fraud management and revenue assurance Customer management The relationship of the service provider to their customers is of critical importance From the initial contact through self care and mobile applications shopping online and to customer care i t is important to provide the multi channel exposure of a single customer view Complex customer models are supported through robust mechanisms of customer groups Enterprises are modeled through a combination of accounts This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 3 hierarchies groups and organiza tions —providing support for real world charg ing billing and reporting responsibilities Amdocs BSS portfolio Amdocs is a software and services vendor with nearly 40 years of expertise specifically focused on the communications and media industry It’s a trusted partner to the world’s leading communications and media companies serving more than 350 service providers in more than 85 countries Amdocs’ product lines encompass digital customer experience monetization network and service automation and mor e supporting more than 17 billion digital customer journeys every day Amdocs C ES21 is a 5G native integrated BSS operations support system (OSS ) suite It is a cloud native open and modular suite that supports many of the world’s top CSPs on their dig ital and 5G journeys The Amdocs Digital Brand Experience Suite is a SaaS solution that’s specifically built for the needs of digital brands and other small service providers It is a pre integrated suite with an extensive set of built in processes and con figuration templates to simplify commerce care ordering and monetization and empowering business users through “shift left” to a truly digital experience for the BSS itself As SaaS it provides unparalleled time to market and scalability while benefi tting from Amdocs ’ robust operations and a “pay as you grow” business model Amdocs Digital Brand Experience Suite overview Amdocs Digital Brand Experience Suite provides flexibility while implementing a high level of complexity It enables customers to capitalize on digital era opportunities by growing customer’s business with an open system that seamlessly interacts with ancillary app lication s It offers the freedom to address a div erse set of product and service markets as well as a range of end customer types Encompassing a set of established and progressive BSS products Amdocs Digital Brand Experience Suite represents proven functionality under a preconfigured industry standar d integration layer This version has been archived For the latest version of this document visit: 
https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 4 Configurability smart interoperability and consistent experience • Swift onboarding of the service provider onto the platform With the SaaS solution onboarding can be done immediately Complex business models and dedicated instances of Digital Brand Experience Suite for larger service providers take slightly longer • Timetomarket for new products services and bundles occurs in minutes instead of months • Simple table driven configuration doesn’t require codin g The data model is highly flexible without requiring software changes • Provides s upport for multiple lines of business Within a single instance or tenant Amdocs Digital Brand Experience Suite supports any number of li nes of business (mobile fixed line broadband cable finance and utilities) and uses a flexible catalog to offer converged services to a sophisticated market Flexible deployment • Multi tenancy capabilities allow for a “define once utilize many” strateg y as different tenants are hosted on a single hardware and software platform that is operated in one location CSPs can deploy Amdocs Digital Brand Experience Suite on AWS as a service or as a dedicated instance Support options • Amdocs offers support for subscription usage based and “billing as a service” models over multiple networks and protocols of any kind and across borders In addition Amdocs supports any service product and payment method as well as multiple currencies and languages Open and secure integration model • More than 500 o penstandard partner friendly pre integrated microservices use RESTful service methods • Secur ity and compliance is provided by both AWS Cloud and the Digital Brand Experience Suite architecture Functional capa bilities The Digital Brand Experience Suite comes with the following capabilities: This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 5 Digital channels • Responsive with multi modal web presentation layer – Multimodal user interfaces provide users with different ways of interacting with applications This has advantages both in providing interaction solutions with additional robustness in environments • Bespoke native mobile application – The goal of bespoke software or mobile apps is to create operational efficiency reduce cost improve retention and drive up revenue • Selfcare – Web interface enables customers to use the selfservice capability • Customer service representative (CSR) interfaces – The customer service interface includes tools and information for supporting the system admin users customers and transactions Business process foundation • Identity management – Authentication roles user management and single sign on • Security usage throttling service level agreements ( SLAs ) – Authorization metrics and SLA enforcement around exposed northbound APIs • Microservice based REST APIs – API framework to deliver business services through a standardized REST API model • Configurable service logic – Orchestration of underlying APIs to deliver business oriented functions enhanced flexibility and extensibility • Data mapping – Management of the Digital Brand Experience Suite data model and virtualization of 
• Commerce catalog – Rules matching products and services to customers. Rules can be based on account segment, hierarchy, geography, equipment, serviceability, or any number of other factors and defined business processes, serving both B2B and B2C customers. With optional intelligence capabilities, the rules can be extended to support marketing campaigns such as Next Best Offer/Next Best Action (NBO/NBA).
• Shopping cart – Product browsing and search, cart item management (including product options and features), and pricing.
• Quotation service – A view into what a bill would look like for a given order, including prices, discounts, and taxation.
• Messaging – Asynchronous message queuing technology with persistence, for internal event notification and synchronization, and routing to the relevant professional (system administrator, CSR, and so on).

Customer management layer capabilities

• Customer management – Definition of customer profiles, customer interactions, and customer hierarchies, supporting simple to extremely complex B2B hierarchies and B2C scenarios.
• Case management – Customer interaction mechanism which can initiate actions in the system and queue up issues for service provider personnel. Configurable rules determine actions and routing for a particular case.
• Inventory – Manages serialized logical inventory for association to billing products. Inventory can be categorized by type or line, with co-requisite rules defined in the catalog.
• Resource management – Manages dynamic lifecycle policy for all resources.

Revenue management

• Billing rules – Configurable management of rules related to the billing operation. This is the foundation for how charges are derived from a combination of price and customer service attributes.
• Event and order fulfillment – A workflow-driven process to provision and activate billing orders in the system. This involves instantiation of the relevant products to their respective customer databases.
• Usage and file processing – Integrity checks on the input event usage files before passing to rating.
• Rating engine – Offline and online rating engine, including file-based offline rating, typically for prepaid and postpaid subscribers. The rating engine can use multiple factors related to the subscriber, account, and service to calculate the price for the usage.
  o Offline rating engine – File-based offline rating, typically for postpaid subscribers.
  o Online rating engine – Real-time rating and promotional calculations based on network events.
• Rated usage management – Persistence and indexing of billed, unbilled, and non-billable usage and usage details.
• Bill preparer – The billing processor (BIP) identifies accounts within a particular bill cycle, gathers data for bill processing, calculates billable charges, and generates processed information for bill formatting.
• Bill-time discount – Calculates bill-time discounts based on total usage for the period, total charges, and applicable discount tiers.
• Bill-time taxation – Calculates appropriate taxes given the geography, account information, and installed tax packages.
• Invoice generator (IGEN) – Combines the processed bill information from the BIP with invoice formats from the invoice designer to produce formatted bills. The IGEN supports conditional logic in the templates and multi-language presentation formats.
• Accounts receivable (AR) balance management – Applies bill charges to an account's AR balances. Thresholds defined against the balance may trigger notifications and/or lifecycle state changes.
• Payments – Requests for payment, payment history, and payment profiles.
• Adjustments and refunds – Allow for charges to be disputed, adjusted, or fully refunded. A manager approval mechanism with workflow ensures that all adjustments have been reviewed and authorized.
• Journal (general ledger) feeds – Reporting function that maps all financially significant activities in the system to operator-defined general ledger codes. Journaling generates feed files on a regular basis, with the charges organized based on the specified codes and categories. These files are then imported into the operator's accounting systems.
• Collections – Driven process through which past-due bills launch various external notification and collection activities, ultimately leading to debt resolution or write-off. Interfaces are provided to restore account state upon successful collection action.
• Recharge – Balance allotments and related promotions launched by recharge actions.
• Balance management – Full lifecycle of cyclical authorization balances, updated in real time.
• Online promotions – Real-time bonus awards and discounts applied immediately to balances.
• Notifications – Threshold-based external notifications (for example, invoked in response to a low balance).

Order management

• Order management – Processing of ordered services and their elements prior to order fulfillment. Typically initiated at the end of the shopping experience, this can include editing or cancelling pending orders, or forcing pending orders to immediately activate workflow-driven processes configured to meet business needs.
• Order fulfillment – A workflow-driven process to provision and activate orders in the system. Configurable milestones define the workflow model for each service and may involve many steps along a route to service activation on third-party systems.
• Provisioning – Runs the provisioning processes of all ordered services on various networks, including Home Location Registers, unified communication platforms, electrical grids, media servers, Home Subscriber Servers, and others.
• Network protocol integration – Supports authentication, authorization, and accounting functionality for all types of online and offline charging, as well as major network protocols. Formats are provided for common event record types. Interfaces to the online charging system (OCS) support all the protocols involved in voice and data charging, especially 5G.

Functional architecture

Digital Brand Experience Suite architecture includes three layers: user experience, integration, and application. The following diagram illustrates the high-level architecture.

Digital Brand Experience Suite functional architecture (diagram)

This whitepaper focuses primarily on the integration and application layers, because these features are deployed in AWS. While the UI applications are downloaded from AWS, the actual UI runtime occurs client-side. The APIs of the integration layer support the Digital Brand Experience Suite user interfaces (UIs) as well as other third-party client integrations. These APIs expose the capabilities of the application layer as well as orchestrate the different applications to form higher-level business services. Integration layer capabilities are marked in the green box and application layer capabilities are marked in the blue box. Additional detailed capabilities can be reviewed in the following diagram.

Digital Brand Experience Suite functional capabilities (diagram)

Note that the OCS domain in the preceding diagram depicts a reference implementation; integration with an OCS (as well as the specific OCS used) is an optional aspect of the Digital Brand Experience Suite solution.

Integration layer capabilities

• Throttling and SLAs – Metrics and SLA reporting around the exposed northbound APIs.
• Identity management – Centralized authentication and authorization.
• Business logic and integration – Service-oriented APIs and their supporting capabilities.
• Commerce catalog – Definition and management of products related to the shopping experience. Includes eligibility aspects, references to marketing collateral, bundling constructions, and so forth.
• Commerce engine – Technical APIs to manage shopping carts and catalog browsing.
• Extensible business logic – Business rules which extend the core logic of the APIs. This also includes business process management to model flow-based scenarios such as case handling and post-checkout approval.
• Dynamic data storage – Persistence for objects that are required for Digital Brand Experience Suite capabilities but are not part of the existing and native application models. This includes things like consents, contacts, metadata for order supporting documentation, and assigned and applied product instances.

Application layer capabilities

• Billing catalog – Definition and management of products related to the billing operation. Products and their elements include rate plans, discount plans, recurring and non-recurring charges, and associated configuration. Product lifecycle allows for advance sales windows, sunsetting, and so forth. For other billing application capabilities, refer to the Revenue management section of this document.

Data management

The following diagram shows the main entities managed by Digital Brand Experience Suite, with the functional domains that are primarily responsible for each.

Digital Brand Experience Suite functional domains (diagram showing entities such as shopping carts, orders, balances, invoices, and business-process data mapped across the web UI, business logic and integration layer, and BSS application, with their primary data stores, including Amazon Aurora and Couchbase)
Benefits of deploying Digital Brand Experience Suite on AWS

With the increase of the subscriber base and the high demands of 5G, cost reduction becomes an essential factor in building a successful business model. CSPs that are running Digital Brand Experience Suite on AWS will pay only for the resources they use. With the "pay as you go" model, customers also can spin up, experiment with, and iterate BSS environments (testing, dev, and so forth) and pay based on consumption. An on-premises environment usually provides a limited set of environments to work with; provisioning additional environments can take a long time or might not be possible. With AWS, CSPs can create virtually many new environments in minutes, as required. In addition, CSPs can create a logical separation between projects, environments, and loosely coupled applications, thereby enabling each of their teams to work independently with the resources they need. Teams can subsequently converge in a common integration environment when they are ready. At the conclusion of a project, customers can shut down the environment and cease payment.

Customers often over-size on-premises environments for the initial phases of a project, but subsequently cannot cope with growth in later phases. With AWS, customers can scale their compute resources up or down at any time. Customers pay only for the individual services they need, for as long as they use them. In addition, customers can change instance sizes in minutes through the AWS Management Console, AWS API, or AWS Command Line Interface (AWS CLI), as sketched in the example at the end of this section.

Because of the exponential growth of data worldwide, and specifically in the telecom world, designing and deploying backup solutions has become more complicated. With AWS, customers have multiple options to set up a disaster recovery strategy, depending on the recovery point objective (RPO) and recovery time objective (RTO), using the expansive AWS Global Cloud Infrastructure.

The Amdocs Digital Brand Experience Suite platform offers rich product and service management capabilities, which can be integrated with AWS Cloud analytics services for use cases such as subscriber, customer, and usage analytics. Digital Brand Experience Suite capabilities can also be empowered by machine learning and artificial intelligence capabilities through AWS services.
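As noted above, compute capacity can be resized through the AWS API or CLI. The following is a minimal, illustrative sketch (not part of the Amdocs solution) of changing an EC2 instance type with boto3; the instance ID and target type are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical instance ID
NEW_TYPE = "m5.2xlarge"               # hypothetical target size

# An instance must be stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# Change the instance type, then start the instance again.
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={"Value": NEW_TYPE},
)
ec2.start_instances(InstanceIds=[INSTANCE_ID])
```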
Digital Brand Experience Suite deployment architecture

Although there are multiple options for deploying the Digital Brand Experience Suite into an AWS environment, the diagrams in this section primarily focus on deploying into a multi-tenant SaaS architecture. Where possible, common aspects of the architecture for non-SaaS deployments will be highlighted.

Technical architecture

Common deployment architecture

The following diagram depicts the main resources deployed for the Digital Brand Experience Suite. The application uses the same AWS services regardless of the nature of the cloud deployment (for example, SaaS vs. non-SaaS).

Digital Brand Experience Suite common cloud resources detail (diagram showing the VPC with its customers, BSS, and database subnets; EC2 node groups for BAL, BIL, ESB, BSS, and BP batch managed by Amazon EKS; Amazon Aurora bill and BSS databases and Couchbase on EC2; Amazon EFS; AWS Lambda functions for the web UI backend and payment gateway; Application Load Balancers; Amazon API Gateway; AWS PrivateLink endpoints for customers and for the Amdocs platform; and supporting services such as Amazon S3, Amazon CloudWatch, Amazon ECR, and AWS Systems Manager)

The Digital Brand Experience Suite uses an Amazon Virtual Private Cloud (VPC) that is divided into three subnets, which organize the access, compute, and storage resources needed for the Digital Brand Experience Suite. All of these subnets are private; access is handled by a demilitarized zone (DMZ), such as the inbound services VPC of the SaaS offering.

Customers subnet

The customers subnet provides access and load balancing capabilities into the VPC. This is the entry point from the DMZ (for example, the inbound services VPC, through AWS PrivateLink for the customers interface). As such, access here is focused on the services that the end users need for their Digital Brand Experience.

BSS subnet

The BSS subnet holds the primary computing resources. These comprise different Auto Scaling groups managed by Amazon Elastic Kubernetes Service (Amazon EKS).

• Business Access Layer (BAL) nodes – Used for API access, path-based routing, metrics, and throttling to support the Digital Brand Experience Suite APIs. These capabilities are provided by the APIMAN package. These nodes support inherent SLAs and enable customers to set throttling rules based on the number of requests per second for each method in the APIs.
• Enterprise Service Bus (ESB) nodes – Implement the Digital Brand Experience Suite SaaS APIs, which are organized into microservices based on functional areas (for example, account management, shopping cart, and invoicing). These APIs and their integration logic translate between the high-level, service-oriented requests received by the Digital Brand Experience Suite APIs and the low-level technical APIs needed to fulfill the requests across the various Digital Brand Experience Suite resources.
• Bill Processing (BP) batch nodes – Run the billing applications, which perform bill calculation, invoice generation, collections, and journal processing. These applications are task-based, meaning that they are initiated on a schedule and on a particular set of input data. For example, bill processing for cycle 15 will run on the determined day (for example, the fifteenth day of the month) for the subset of accounts who have selected the fifteenth day as their bill cycle date. By using native auto scaling, BP batch nodes dynamically scale Amazon Elastic Compute Cloud (Amazon EC2) instances based on configurable parameters (such as the number of customers, services, and products), which is one of the major benefits of running the application on AWS. With AWS Auto Scaling, BP batch applications always have the right resources at the right time.
• BSS nodes – Host the low-level service APIs, which expose the billing capabilities to the integration layer, for example, fetching the invoice details from processed bills or inquiring about a particular collections scenario.
• Business Integration Layer (BIL) nodes – Contain applications to support the middleware: the shopping cart application, Red Hat Decision Manager (RHDM), which is used to extend the BIL API business logic, and Red Hat Process Automation Manager (RHPAM), which is used for case handling and post-cart processing (for example, credit review).

The use of each of these different node groups depends highly on the traffic profiles of the specific operator; as a result, deploying these node groups into separate Auto Scaling groups allows for greater platform efficiency by scaling the specific node group accordingly.

AWS Fargate is used for BP batch, which comprises scheduled and task-based applications like the billing processor and invoice generator. Rather than port these applications, Fargate is used to containerize them while maintaining their established technology stack. An Amazon Elastic File System (Amazon EFS) instance is deployed within this subnet and is used by the various processes of the billing application (for example, usage files, which are shared between the different usage file rating processes).

As part of the overall migration of the Digital Brand Experience Suite solution to be more AWS native, several processes have already moved to use serverless computing resources. For example, the payment gateway and web UI backend are implemented through AWS Lambda functions for event-based handling. Serverless computing on AWS, such as AWS Lambda, includes automatic scaling, built-in high availability, and a pay-for-value billing model. AWS Lambda is an event-driven compute service that enables customers to run code in response to events from over 200 natively integrated AWS and SaaS sources, all without managing any servers.

Internal Amdocs operations and support users access the BSS subnet from the management VPC through PrivateLink for the Amdocs interfaces. PrivateLink provides private connectivity between VPCs, AWS services, and customers' on-premises networks without exposing their traffic to the public internet.
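As an illustration of the event-based handling pattern described above, the following is a minimal, hypothetical AWS Lambda handler in Python. It does not represent Amdocs' actual payment gateway code; the event fields, queue URL, and downstream call are placeholder assumptions.

```python
import json
import boto3

sqs = boto3.client("sqs")
RESULT_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/payment-results"  # hypothetical

def handler(event, context):
    """Triggered by an upstream event source (for example, API Gateway or SNS).

    Validates the incoming payment request and forwards the outcome to a
    result queue for asynchronous processing.
    """
    body = json.loads(event.get("body", "{}"))
    payment = {
        "accountId": body.get("accountId"),
        "amount": body.get("amount"),
        "currency": body.get("currency", "USD"),
    }

    # Placeholder for a call to an external payment provider.
    status = "ACCEPTED" if payment["amount"] else "REJECTED"

    sqs.send_message(
        QueueUrl=RESULT_QUEUE_URL,
        MessageBody=json.dumps({"payment": payment, "status": status}),
    )
    return {"statusCode": 200, "body": json.dumps({"status": status})}
```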
Database subnet

The database subnet holds the resources for the Digital Brand Experience Suite persistence layer (multiple database technologies) that are used across the Digital Brand Experience Suite SaaS solution. The BIL database and BSS database use Amazon Aurora databases for commerce (shopping cart) and billing, respectively. Database resources are only accessible from the BSS subnet. Not only does this secure the actual persisted data, but it decouples the storage technology from the external services and hides storage details, like database schemas, from the end users. This allows the solution to evolve over time and introduce and update storage technology while minimizing the impact on the rest of the solution and its users.

External services integration

Interface VPC endpoints are used to securely access various AWS services, such as Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), Amazon Elastic Container Registry (Amazon ECR), and AWS Systems Manager. VPC endpoints allow communication between instances and databases in customer VPCs and management services such as CloudWatch and Systems Manager, without imposing availability risks and bandwidth constraints on network traffic.

High availability

The following diagram depicts how Digital Brand Experience Suite can be deployed in a multiple Availability Zones (AZs) configuration to promote high availability.

Digital Brand Experience Suite high availability in AWS (diagram)

Digital Brand Experience Suite architecture on AWS is highly available. The solution is built across a minimum of two Availability Zones. All Availability Zones in an AWS Region are interconnected with high-bandwidth, low-latency networking. Availability Zones are physically separated by a meaningful distance, although all are within 100 km (60 miles) of each other. If one of the Availability Zones becomes unavailable, the application continues to stay available, because the architecture is highly available in all layers: databases use a Multi-AZ setup, and Kubernetes spreads the pods in a deployment across nodes and multiple Availability Zones, so the impact of an Availability Zone failure is mitigated.

Digital Brand Experience Suite architecture on AWS supports Cluster Autoscaling as well as Horizontal Pod Autoscaling, and it adjusts the size of the Amazon EKS cluster by adding or removing worker nodes in multiple Availability Zones. In addition, application components are stateless and based on containers, with Elastic Load Balancing that has native awareness of failure boundaries like Availability Zones, to keep applications available across a Region without requiring Global Server Load Balancing.

Scalability

The solution is fully scalable using Auto Scaling groups of various container types. This allows for more fine-grained scalability as the various compute needs change over time. Auto Scaling groups can be configured with different scaling models, either scaling up or down based on events, system measurements, or a preset schedule.
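As one illustration of the measurement-based scaling model mentioned above, the following minimal sketch attaches a target-tracking scaling policy to an Auto Scaling group with boto3; the group name and target value are hypothetical and not taken from the Amdocs deployment.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep average CPU utilization of the (hypothetical) BP batch group near 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="bp-batch-nodes",        # hypothetical group name
    PolicyName="bp-batch-target-cpu-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```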
Digital Brand Experience Suite architecture uses Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database built for the cloud. Amazon Aurora scales in many ways, including storage, instance, and read scaling. The application also uses Couchbase on Amazon EC2, set up in a way that makes it scalable.

Security

Access management

Access follows role-based access control through AWS Identity and Access Management (IAM). The solution has defined roles based on who needs access to what. As a best practice, customers could assign permissions at the IAM group or role level to access applications in the specific VPCs, and never grant privileges beyond the minimum required for a user or group to fulfill their job requirements. The list of roles and groups changes with each project.

Secure data at rest

Data at rest is encrypted at the storage volume level (using AWS built-in capabilities) as well as at the database level (on configurable PII fields). Digital Brand Experience Suite architecture uses AWS Key Management Service (AWS KMS) to create and control the encryption keys, which makes it easy for customers to create and manage cryptographic keys and control their use across a wide range of AWS services and applications. Encryption is applied by solution components and AWS services. Decryption is applied by each data consumer.

Secure data in transit

Web UI access is encrypted with SSL encryption (HTTPS). The solution API layer access is encrypted with SSL encryption (HTTPS). Additionally, the encryption keys are stored in AWS KMS, and the system credentials are securely stored in AWS Secrets Manager. Automated clearing house and credit card data are tokenized by the purchaser's payment gateway system, and the solution stores the credit card token only.

Digital Brand Experience Suite SaaS model

The following diagram provides a high-level network layout view identifying the three major VPCs configured.

Digital Brand Experience Suite SaaS overall view (diagram showing the inbound services VPC, the Digital Brand Experience SaaS VPC, and the management VPC, together with Amazon Route 53 public hosted zone, AWS WAF, AWS Shield Advanced, Amazon CloudFront download distribution, AWS PrivateLink for customers and for the Amdocs platform, and AWS Direct Connect to the Amdocs data center)

This diagram also addresses the two primary means of accessing the solution: end customer and user access by the inbound services VPC, and Amdocs operations access by the management VPC. Both methods can then access the common resources in the Digital Brand Experience Suite SaaS VPC. End customer and user access is secured by AWS Shield Advanced, to provide managed distributed denial of service (DDoS) protection, and AWS Web Application Firewall (AWS WAF), to protect the application from common web exploits. In addition, Amazon CloudFront is deployed in front of the Amazon S3 buckets used to host the web UI application client for download. This improves initial application download performance by placing the application closer to the user. This layout is more tailored to SaaS offerings because it provides two main access channels: individual tenant and global operations. Non-SaaS cloud offerings employ a different network architecture.
Inbound services VPC (SaaS offering)

The following diagram provides more detail on the inbound services VPC.

Digital Brand Experience Suite SaaS inbound services VPC detail (diagram showing the public DMZ subnet with an internet gateway, a Network Load Balancer, Amazon Route 53 public and private hosted zones, Amazon CloudFront download distribution, AWS WAF, AWS Shield Advanced, and AWS PrivateLink toward the SaaS VPC)

The public DMZ subnet is the approachable point for all users; it primarily provides authentication services so that further secured services can be accessed. To protect the solution from malicious attacks such as DDoS, AWS WAF and AWS Shield are deployed.

Management VPC (SaaS offering)

The following diagram provides more detail on the management VPC.

Digital Brand Experience Suite SaaS management VPC detail (diagram showing the private management subnet with Windows bastion instances, VPC endpoints, Amazon S3 access, an Amazon Route 53 private hosted zone, and AWS PrivateLink for the Amdocs platform)

The resources within the private management subnet provide access to Digital Brand Experience Suite SaaS for the operations engineers. Microsoft Windows instances in Amazon EC2 run as bastion instances in the private management VPC. Operations engineers can use the Remote Desktop Protocol to administer and access the compute resources inside the VPC remotely. PrivateLink is also used to connect services across accounts and VPCs without exposing the traffic to the public internet.

AWS Well-Architected Framework

The AWS Well-Architected Framework helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. The AWS Well-Architected Framework is based on five pillars:

• Operational excellence
• Security
• Reliability
• Performance efficiency
• Cost optimization

AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures and implement designs that can scale over time. The AWS Well-Architected Framework helped Amdocs to adopt best practices and to achieve an optimized architecture for their Digital Brand Experience Suite on AWS. The following is an overview of the five pillars of the AWS Well-Architected Framework with reference to the Digital Brand Experience Suite architecture on AWS.

Operational excellence

This pillar focuses on the ability to run and monitor systems to deliver business value and continually improve supporting processes and procedures.
Digital Brand Experience Suite architecture on AWS has the ability to support development and run workloads effectively. The application gains insight into the operational aspects by using CloudWatch to collect metrics, send alarms, monitor Amazon Aurora metrics, and use CloudWatch Container Insights from an Amazon EKS cluster. The application uses AWS Lambda to respond to operational events, automate changes, and continuously manage and improve processes to deliver business value. Customers can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.
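As a small illustration of the CloudWatch monitoring mentioned above, the following sketch creates an alarm on an Aurora cluster's CPU utilization with boto3. The cluster identifier, threshold, and SNS topic are hypothetical placeholders, not values from the Amdocs deployment.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="bss-aurora-cpu-high",                                           # hypothetical name
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBClusterIdentifier", "Value": "bss-db-cluster"}],   # hypothetical cluster
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],            # hypothetical topic
)
```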
Security

This pillar focuses on the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. Digital Brand Experience Suite architecture on AWS takes advantage of inherent prevention features such as:

• Amazon VPCs, to logically isolate environments per customer requirements
• Subnets, to logically isolate multiple layers in a VPC and control the communication between them
• Network access control lists and security groups, to control incoming and outgoing traffic

Digital Brand Experience Suite uses AWS KMS for security of data at rest, SSL encryption for data in transit, Secrets Manager for system credential management, and role-based access control through IAM for access management. Customers can find prescriptive guidance on implementation in the Security Pillar whitepaper.

Reliability

This pillar focuses on the ability of a system to recover from infrastructure or service failures, to dynamically acquire computing resources to meet demand, and to mitigate disruptions such as misconfigurations or transient network issues. Digital Brand Experience Suite quickly recovers from database failure by using Amazon Aurora, which spans multiple Availability Zones in an AWS Region, with each Availability Zone containing a copy of the cluster volume data. This functionality means that the database cluster can tolerate a failure of an Availability Zone without any loss of data. Digital Brand Experience Suite on AWS supports Cluster Autoscaling as well as Horizontal Pod Autoscaling, handling scalability and reliability of the application. Changes are made through automation using AWS CloudFormation. The architecture of Digital Brand Experience Suite on AWS encompasses the ability to perform its intended function correctly and consistently when it's expected to. This includes the ability to operate and test the workload through its total lifecycle. Customers can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.

Performance efficiency

This pillar deals with the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve. The architecture of Digital Brand Experience Suite on AWS ensures efficient usage of the compute, storage, and database resources to meet system requirements, and maintains that efficiency as demand changes and technologies evolve. Customers can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.

Cost optimization

This pillar deals with the ability to avoid or eliminate unneeded cost or suboptimal resources. Digital Brand Experience Suite on AWS uses Amazon Aurora PostgreSQL, which considerably reduces database costs. Amazon Aurora PostgreSQL is three times faster than standard PostgreSQL databases and provides the security, availability, and reliability of commercial databases at one-tenth the cost. Additionally, Digital Brand Experience Suite on AWS supports Cluster Autoscaling as well as Horizontal Pod Autoscaling, contributing to considerable cost reduction. The architecture of Digital Brand Experience Suite on AWS has the ability to run systems to deliver business value at the lowest price point. Customers can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.

Conclusion

Amdocs Digital Brand Experience Suite is a pre-integrated, complete digital customer management and commerce platform designed to rapidly and securely monetize any product or service. The richness of Amdocs Digital Brand Experience Suite's capabilities and flexibility—a strong BSS engine enabled by modern, digital, open source components such as JBoss Fuse, REST APIs, React, Node.js, and other advanced technologies—enables customers to enjoy the superior performance of a well-proven solution. Amdocs Digital Brand Experience Suite combines the effectiveness of a lean architecture and future readiness to provide customers the ability to step into the digital economy.

By deploying Amdocs Digital Brand Experience Suite in the AWS Cloud, customers can increase deployment velocity, reduce infrastructure cost significantly, and integrate with IoT, analytics, and machine learning services. Customers can further use the compliance benefits of the AWS Cloud for sensitive customer data. AWS is the cost-effective, secure, scalable, high-performing, and flexible option for deploying Amdocs Digital Brand Experience Suite BSS.

Contributors

Contributors to this document include:

• David Sell, Lead Software Architect, Amdocs Digital Brand Experience, Amdocs
• Shahar Dumai, Head of Marketing for Amdocs Digital Brand Experience, Amdocs
• Efrat Nir-Berger, Sr. Partner Solutions Architect, OSS/BSS, Amazon Web Services
• Visu Sontam, Sr. Partner Solutions Architect, OSS/BSS, Amazon Web Services
• Mounir Chennana, Solutions Architect, Amazon Web Services

Further reading

For additional information, see:

• 5G Network Evolution with AWS whitepaper
• Continuous Integration and Continuous Delivery for 5G Networks on AWS whitepaper
• Next-Generation Mobile Private Networks Powered by AWS whitepaper
• AWS Well-Architected Framework whitepaper
• Next-Generation OSS with AWS whitepaper

Document revisions

Date – Description
November 18, 2021 – Updated for technical accuracy
February 2018 – First publication
General
Overview_of_AWS_Security__Application_Services
Overview of AWS Security – Application Services
June 2016
(Please consult http://aws.amazon.com/security/ for the latest version of this paper.)

This paper has been archived. For the latest technical content, see https://docs.aws.amazon.com/security/

© 2016 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS' current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS' products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Application Services

Amazon Web Services offers a variety of managed services to use with your applications, including services that provide application streaming, queueing, push notification, email delivery, search, and transcoding.

Amazon CloudSearch Security

Amazon CloudSearch is a managed service in the cloud that makes it easy to set up, manage, and scale a search solution for your website. Amazon CloudSearch enables you to search large collections of data such as web pages, document files, forum posts, or product information. It enables you to quickly add search capabilities to your website without having to become a search expert or worry about hardware provisioning, setup, and maintenance. As your volume of data and traffic fluctuates, Amazon CloudSearch automatically scales to meet your needs.

An Amazon CloudSearch domain encapsulates a collection of data you want to search, the search instances that process your search requests, and a configuration that controls how your data is indexed and searched. You create a separate search domain for each collection of data you want to make searchable. For each domain, you configure indexing options that describe the fields you want to include in your index and how you want to use them, text options that define domain-specific stopwords, stems, and synonyms, rank expressions that you can use to customize how search results are ranked, and access policies that control access to the domain's document and search endpoints.

Access to your search domain's endpoints is restricted by IP address, so that only authorized hosts can submit documents and send search requests. IP address authorization is used only to control access to the document and search endpoints. All Amazon CloudSearch configuration requests must be authenticated using standard AWS authentication.

Amazon CloudSearch provides separate endpoints for accessing the configuration, search, and document services:

• You use the configuration service to create and manage your search domains. The region-specific configuration service endpoints are of the form cloudsearch.region.amazonaws.com, for example, cloudsearch.us-east-1.amazonaws.com. For a current list of supported regions, see Regions and Endpoints in the AWS General Reference.
• The document service endpoint is used to submit documents to the domain for indexing and is accessed through a domain-specific endpoint: http://doc-domainname-domainid.us-east-1.cloudsearch.amazonaws.com
• The search endpoint is used to submit search requests to the domain and is accessed through a domain-specific endpoint: http://search-domainname-domainid.us-east-1.cloudsearch.amazonaws.com

Note that if you do not have a static IP address, you must re-authorize your computer whenever your IP address changes. If your IP address is assigned dynamically, it is also likely that you're sharing that address with other computers on your network. This means that when you authorize the IP address, all computers that share it will be able to access your search domain's document service endpoint.

Like all AWS services, Amazon CloudSearch requires that every request made to its control API be authenticated, so only authenticated users can access and manage your CloudSearch domain. API requests are signed with an HMAC-SHA1 or HMAC-SHA256 signature calculated from the request and the user's AWS Secret Access Key. Additionally, the Amazon CloudSearch control API is accessible via SSL-encrypted endpoints. You can control access to Amazon CloudSearch management functions by creating users under your AWS account using AWS IAM, and controlling which CloudSearch operations these users have permission to perform.

Amazon Simple Queue Service (Amazon SQS) Security

Amazon SQS is a highly reliable, scalable message queuing service that enables asynchronous, message-based communication between distributed components of an application. The components can be computers or Amazon EC2 instances, or a combination of both. With Amazon SQS, you can send any number of messages to an Amazon SQS queue at any time from any component. The messages can be retrieved from the same component or a different one, right away or at a later time (within 14 days). Messages are highly durable; each message is persistently stored in highly available, highly reliable queues. Multiple processes can read/write from/to an Amazon SQS queue at the same time without interfering with each other.

Amazon SQS access is granted based on an AWS account or a user created with AWS IAM. Once authenticated, the AWS account has full access to all user operations. An AWS IAM user, however, only has access to the operations and queues for which they have been granted access via policy. By default, access to each individual queue is restricted to the AWS account that created it. However, you can allow other access to a queue, using either an SQS-generated policy or a policy you write.

Amazon SQS is accessible via SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2. Data stored within Amazon SQS is not encrypted by AWS; however, the user can encrypt data before it is uploaded to Amazon SQS, provided that the application utilizing the queue has a means to decrypt the message when retrieved. Encrypting messages before sending them to Amazon SQS helps protect against access to sensitive customer data by unauthorized persons, including AWS.
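The paragraph above describes encrypting message bodies before they are sent to SQS (at the time this paper was written, SQS did not offer server-side encryption). The following is a minimal sketch of that client-side pattern using boto3 and the third-party cryptography library; the queue URL is a hypothetical placeholder, and key distribution and management (for example, with AWS KMS) are out of scope here.

```python
import boto3
from cryptography.fernet import Fernet

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical

# In practice the key would be managed and shared securely (for example, via KMS),
# not generated ad hoc like this.
key = Fernet.generate_key()
cipher = Fernet(key)

sqs = boto3.client("sqs", region_name="us-east-1")

# Producer: encrypt the payload before it leaves the application.
token = cipher.encrypt(b"sensitive payload")
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=token.decode("utf-8"))

# Consumer: decrypt after retrieval, then delete the message from the queue.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    plaintext = cipher.decrypt(msg["Body"].encode("utf-8"))
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```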
Amazon Simple Notification Service (Amazon SNS) Security

Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications. Amazon SNS provides a simple web services interface that can be used to create topics that customers want to notify applications (or people) about, subscribe clients to these topics, publish messages, and have these messages delivered over the clients' protocol of choice (i.e., HTTP/HTTPS, email, etc.). Amazon SNS delivers notifications to clients using a "push" mechanism that eliminates the need to periodically check or "poll" for new information and updates. Amazon SNS can be leveraged to build highly reliable, event-driven workflows and messaging applications without the need for complex middleware and application management. The potential uses for Amazon SNS include monitoring applications, workflow systems, time-sensitive information updates, mobile applications, and many others.

Amazon SNS provides access control mechanisms so that topics and messages are secured against unauthorized access. Topic owners can set policies for a topic that restrict who can publish or subscribe to a topic. Additionally, topic owners can encrypt transmission by specifying that the delivery mechanism must be HTTPS.

Amazon SNS access is granted based on an AWS account or a user created with AWS IAM. Once authenticated, the AWS account has full access to all user operations. An AWS IAM user, however, only has access to the operations and topics for which they have been granted access via policy. By default, access to each individual topic is restricted to the AWS account that created it. However, you can allow other access to SNS, using either an SNS-generated policy or a policy you write.
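As an illustration of the SNS controls described above (delivering notifications over HTTPS), the following is a minimal boto3 sketch that creates a topic and subscribes an HTTPS endpoint; the topic name and endpoint URL are hypothetical.

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Create (or look up) a topic.
topic_arn = sns.create_topic(Name="example-alerts")["TopicArn"]

# Subscribe an HTTPS endpoint so notifications are delivered over TLS.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="https",
    Endpoint="https://alerts.example.com/sns-handler",  # hypothetical endpoint
)

# Publish a message to all confirmed subscribers of the topic.
sns.publish(TopicArn=topic_arn, Subject="Test", Message="Hello from SNS")
```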
Amazon Simple Workflow Service (Amazon SWF) Security

The Amazon Simple Workflow Service (SWF) makes it easy to build applications that coordinate work across distributed components. Using Amazon SWF, you can structure the various processing steps in an application as "tasks" that drive work in distributed applications, and Amazon SWF coordinates these tasks in a reliable and scalable manner. Amazon SWF manages task execution dependencies, scheduling, and concurrency based on a developer's application logic. The service stores tasks, dispatches them to application components, tracks their progress, and keeps their latest state. Amazon SWF provides simple API calls that can be executed from code written in any language and run on your EC2 instances or any of your machines located anywhere in the world that can access the Internet. Amazon SWF acts as a coordination hub with which your application hosts interact. You create desired workflows, with their associated tasks and any conditional logic you wish to apply, and store them with Amazon SWF.

Amazon SWF access is granted based on an AWS account or a user created with AWS IAM. All actors that participate in the execution of a workflow—deciders, activity workers, workflow administrators—must be IAM users under the AWS account that owns the Amazon SWF resources. You cannot grant users associated with other AWS accounts access to your Amazon SWF workflows. An AWS IAM user, however, only has access to the workflows and resources for which they have been granted access via policy.

Amazon Simple Email Service (Amazon SES) Security

Amazon Simple Email Service (SES) is an outbound-only email-sending service built on Amazon's reliable and scalable infrastructure. Amazon SES helps you maximize email deliverability and stay informed of the delivery status of your emails. Amazon SES integrates with other AWS services, making it easy to send emails from applications being hosted on services such as Amazon EC2.

Unfortunately, with other email systems it's possible for a spammer to falsify an email header and spoof the originating email address, so that it appears as though the email originated from a different source. To mitigate these problems, Amazon SES requires users to verify their email address or domain in order to confirm that they own it and to prevent others from using it. To verify a domain, Amazon SES requires the sender to publish a DNS record that Amazon SES supplies as proof of control over the domain. Amazon SES periodically reviews domain verification status and revokes verification in cases where it is no longer valid.

Amazon SES takes proactive steps to prevent questionable content from being sent, so that ISPs receive consistently high-quality email from our domains and therefore view Amazon SES as a trusted email origin. Below are some of the features that maximize deliverability and dependability for all of our senders:

• Amazon SES uses content-filtering technologies to help detect and block messages containing viruses or malware before they can be sent.
• Amazon SES maintains complaint feedback loops with major ISPs. Complaint feedback loops indicate which emails a recipient marked as spam. Amazon SES provides you access to these delivery metrics to help guide your sending strategy.
• Amazon SES uses a variety of techniques to measure the quality of each user's sending. These mechanisms help identify and disable attempts to use Amazon SES for unsolicited mail and detect other sending patterns that would harm Amazon SES's reputation with ISPs, mailbox providers, and anti-spam services.
• Amazon SES supports authentication mechanisms such as Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). When you authenticate an email, you provide evidence to ISPs that you own the domain. Amazon SES makes it easy for you to authenticate your emails. If you configure your account to use Easy DKIM, Amazon SES will DKIM-sign your emails on your behalf, so you can focus on other aspects of your email-sending strategy. To ensure optimal deliverability, we recommend that you authenticate your emails.

As with other AWS services, you use security credentials to verify who you are and whether you have permission to interact with Amazon SES. For information about which credentials to use, see Using Credentials with Amazon SES. Amazon SES also integrates with AWS IAM so that you can specify which Amazon SES API actions a user can perform.

If you choose to communicate with Amazon SES through its SMTP interface, you are required to encrypt your connection using TLS. Amazon SES supports two mechanisms for establishing a TLS-encrypted connection: STARTTLS and TLS Wrapper. If you choose to communicate with Amazon SES over HTTP, then all communication will be protected by TLS through Amazon SES's HTTPS endpoint. When delivering email to its final destination, Amazon SES encrypts the email content with opportunistic TLS, if supported by the receiver.
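A minimal sketch of sending a message over the SES HTTPS API with boto3, assuming the sender address or domain has already been verified as described above; the addresses shown are hypothetical.

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# The Source address (or its domain) must be verified in SES beforehand.
ses.send_email(
    Source="no-reply@example.com",                      # hypothetical verified sender
    Destination={"ToAddresses": ["user@example.org"]},  # hypothetical recipient
    Message={
        "Subject": {"Data": "Delivery status update"},
        "Body": {"Text": {"Data": "This message was sent through Amazon SES."}},
    },
)
```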
Amazon Elastic Transcoder Service Security

The Amazon Elastic Transcoder service simplifies and automates what is usually a complex process of converting media files from one format, size, or quality to another. The Elastic Transcoder service converts standard-definition (SD) or high-definition (HD) video files as well as audio files. It reads input from an Amazon S3 bucket, transcodes it, and writes the resulting file to another Amazon S3 bucket. You can use the same bucket for input and output, and the buckets can be in any AWS region. The Elastic Transcoder accepts input files in a wide variety of web, consumer, and professional formats. Output file types include the MP3, MP4, OGG, TS, WebM, HLS using MPEG-2 TS, and Smooth Streaming using fmp4 container types, storing H.264 or VP8 video and AAC, MP3, or Vorbis audio.

You'll start with one or more input files and create transcoding jobs in a type of workflow called a transcoding pipeline for each file. When you create the pipeline, you'll specify input and output buckets as well as an IAM role. Each job must reference a media conversion template called a transcoding preset, and will result in the generation of one or more output files. A preset tells the Elastic Transcoder what settings to use when processing a particular input file. You can specify many settings when you create a preset, including the sample rate, bit rate, resolution (output height and width), the number of reference frames and keyframes, a video bit rate, some thumbnail creation options, etc.

A best effort is made to start jobs in the order in which they're submitted, but this is not a hard guarantee and jobs typically finish out of order, since they are worked on in parallel and vary in complexity. You can pause and resume any of your pipelines if necessary.

Elastic Transcoder supports the use of SNS notifications when it starts and finishes each job, and when it needs to tell you that it has detected an error or warning condition. The SNS notification parameters are associated with each pipeline. It can also use the List Jobs By Status function to find all of the jobs with a given status (e.g., "Completed") or the Read Job function to retrieve detailed information about a particular job.

Like all other AWS services, Elastic Transcoder integrates with AWS Identity and Access Management (IAM), which allows you to control access to the service and to other AWS resources that Elastic Transcoder requires, including Amazon S3 buckets and Amazon SNS topics. By default, IAM users have no access to Elastic Transcoder or to the resources that it uses. If you want IAM users to be able to work with Elastic Transcoder, you must explicitly grant them permissions.

Amazon Elastic Transcoder requires every request made to its control API be authenticated, so only authenticated processes or users can create, modify, or delete their own Amazon Elastic Transcoder pipelines and presets. Requests are signed with an HMAC-SHA256 signature calculated from the request and a key derived from the user's secret key. Additionally, the Amazon Elastic Transcoder API is only accessible via SSL-encrypted endpoints.
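A minimal sketch of submitting a job to an existing pipeline through the authenticated API described above, using boto3; the pipeline ID, preset ID, and object keys are hypothetical placeholders.

```python
import boto3

et = boto3.client("elastictranscoder", region_name="us-east-1")

response = et.create_job(
    PipelineId="1111111111111-abcde1",          # hypothetical pipeline ID
    Input={"Key": "uploads/session1.mov"},      # object key in the pipeline's input bucket
    Outputs=[{
        "Key": "transcoded/session1-720p.mp4",  # object key in the output bucket
        "PresetId": "1351620000001-000010",     # hypothetical/system preset ID
    }],
)
print("Job ID:", response["Job"]["Id"])
```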
development of interactive streaming applications and client applications The SDK provides APIs that connect your customers’ devices directly to your application capture and encode audio and video stream content across the Internet in near real time decode content on client devices and return user inpu t to the application Because your application's processing occurs in the cloud it can scale to handle extremely large computational loads Amazon AppStream deploys streaming applications on Amazon EC2 When you add a streaming application through the AWS Management Console the service creates the AMI required to host your application and makes your application available to streaming clients The service scales your application as needed within the capacity limits you have set to meet demand Clients usi ng the Amazon AppStream SDK automatically connect to your streamed application In most cases you’ll want to ensure that the user running the client is authorized to use your application before letting him obtain a session ID We recommend that you use some sort of entitlement service which is a service that authenticates clients and authorizes their connection to your application In this case the entitlement service will also call into the Amazon AppStream REST API to create a new streaming session for the client After the entitlement service creates a new session it returns the session identifier to the authorized client as a single use entitlement URL The client then uses the entitlement URL to connect to the application Your entitlement service can be hosted on an Amazon EC2 instance or on AWS Elastic Beanstalk Amazon AppStream utilizes an AWS CloudFormation template that automates the process of deploying a GPU EC2 instance that has the AppStream Windows Application and Windows Client SDK libraries installed; is configured for SSH RDC or VPN access; and has an elastic IP address assigned to it By using this template to deploy your standalone streaming server all you need to do is up load your application to the server and run the command to launch it You can then use the Amazon AppStream Service Simulator tool to test your application in standalone mode before deploying it into production Amazon AppStream also utilizes the STX Protocol to manage the streaming of your application from AWS to local devices The Amazon AppStream STX Protocol is a proprietary protocol used to stream high quality application video over varying network conditions; it monitors ne twork Archived Page 9 of 9 conditions and automatically adapts the video stream to provide a low latency and high resolution experience to your customers It minimizes latency while syncing audio and video as well as capturing input from your customers to be sent back to the a pplication running in AWS Further Reading https://awsamazoncom/security/security resources/ Introduction to AWS Security Processes Overview of AWS Security Storage Services Overview of AWS Security Database Services Overview of AWS Security Compute S ervices Overview of AWS Security Application Services Overview of AWS Security Analytics Mobile and Application Services Overview of AWS Security – Network Services
General
Architecting_for_Genomic_Data_Security_and_Compliance_in_AWS
ArchivedArchitecting for Genomic Data Security and Compliance in AWS Working with ControlledAccess Datasets from dbGaP GWAS and other IndividualLevel Genomic Research Repositories Angel Pizarro Chris Whalley December 2014 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 2 of 17 Table of Contents Overview 3 Scope 3 Considerations for Genomic Data Privacy and Security in Human Research 3 AWS Approach to Shared Security Responsibilities 4 Architecting for Compliance with dbGaP Security Best Practices in AWS 5 Deployment Model 6 Data Location 6 Physical Server Access 7 Portable Storage Media 7 User Accounts Passwords and Access Control Lists 8 Internet Networking and Data Transfers 9 Data Encryption 11 File Systems and Storage Volumes 13 Operating Systems and Applications 14 Auditing Logging and Monitoring 15 Authorizing Access to Data 16 Cleaning Up Data and Retaining Results 17 Conclusion 17 ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 3 of 17 Overview Researchers who plan to work with genomic sequence data on Amazon Web Services (AWS) often have questions about security and compliance; specifically about how to meet guidelines and best practices set by government and grant funding agencies such as the National Institutes of Health In this whitepaper we review the current set of guidelines and discuss which services from AWS you can use to meet particular requirements and how to go about evaluating those services Scope This whitepaper focuses on common issues raised by Amazon Web Services (AWS) customers about security best practices for human genomic data and controlled access datasets such as those from National Institutes of Health (NIH) repositories like Database of Genotypes and Phenotypes (dbGaP) and genomewide association studies (GWAS) Our intention is to provide you with helpful guidance that you can use to address common privacy and security requirements However we caution you not to rely on this whitepaper as legal advice for your specific use of AWS We strongly encourage you to obtain appropriate compliance advice about your specific data privacy and security requirements as well as applicable laws relevant to your human research projects and datasets Considerations for Genomic Data Privacy and Security in Human Research Research involving individuallevel genotype and phenotype data and deidentified controlled access datasets continues to increase The data has grown so fast in volume and utility that the availability of adequate data processing storage and security technologies has become a critical constraint on genomic research T he global research community is recognizing the practical benefits of the AWS cloud and scientific investigators institutional signing officials IT directors ethics committees and data access committees must answer privacy and security questions as they evaluate the use of AWS in connection with individuallevel genomic data and other controlled access datasets Some common questions include: Are data protected on secure servers? Where are data located? How is access to data controlled? Are data protections appropriate for the Data Use Certification? 
These considerations are not new and are not cloudspecific Whether data reside in an investigator lab an institution al network an agencyhosted data repository or within the AWS cloud the essential considerations for human genomic data are the same You must correctly implement data protection and security controls in the system by first defining the system requirements and then architecting the system security controls to meet those requirements particularly the shared responsibilities amongst the parties who use and maintain the system ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 4 of 17 AWS Approach to Shared Security Responsibilities AWS delivers a robust web services platform with features that enable research teams around the world to create and control their own private area in the AWS cloud so they can quickly build install and use their data analysis applications and data stores without having to purchase or maintain the necessary hardware and facilities As a researcher you can create your private AWS environment yourself using a selfservice signup process that establishes a unique AWS account ID creates a root user account and account ID and provides you with access to the AWS Management Console and Application Programming Interfaces (APIs) allowing control and management of the private AWS environment Because AWS does not access or manage your private AWS environment or the data in it you retain responsibility and accountability for the configuration and security controls you implement in your AWS account This customer accountability for your private AWS environment is fundamental to understanding the respective roles of AWS and our customers in the context of data protections and security practices for human genomic data Figure 1 depicts the AWS Shared Responsibility Model Figure 1 Shared Responsibility Model In order to deliver and maintain the features available within every customer ’s private AWS environment AWS works vigorously to enhance the security features of the platform and ensure that the feature delivery operations are secure and of high quality AWS defines quality and security as confidentiality integrity and availability of our services and AWS seeks to provide researchers with visibility and assurance of our quality and security practices in four important ways ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 5 of 17 First AWS infrastructure is designed and managed in alignment with a set of internationally recognized security and quality accreditations standards and bestpractices including industry standards ISO 27001 ISO 9001 AT 801 and 101 (formerly SSAE 16) as well as government standards NIST FISMA and FedRAMP Independent third parties perform accreditation assessments of AWS These third parties are auditing experts in cloud computing environments and each brings a unique perspective from their compliance backgrounds in a wide range of industries including healthcare life sciences financial services government and defense and others Because each accreditation carries a unique audit schedule including continuous monitoring AWS security and quality controls are constantly audited and improved for the benefit of all AWS customers including those with dbGaP HIPAA and other health data protection requirements Second AWS provides transparency by making these ISO SOC FedRAMP and other compliance reports available to customers upon request 
Customers can use these reports to evaluate AWS for their particular needs You can request AWS compliance reports at https://awsamazoncom/compliance/contact and you can find more information on AWS compliance certifications customer case studies and alignment with best practices and standards at the AWS compliance website http://awsamazoncom/compliance/ Third as a controlled US subsidiary of Amazoncom Inc Amazon Web Services Inc participates in the Safe Harbor program developed by the US Department of Commerce the European Union and Switzerland respectively Amazoncom and its controlled US subsidiaries have certified that they adhere to the Safe Harbor Privacy Principles agreed upon by the US the EU and Switzerland respectively You can view the Safe Harbor certification for Amazoncom and its control led US subsidiaries on the US Department of Commerce’s Safe Harbor website The Safe Harbor Principles require Amazon and its controlled US subsidiaries to take reasonable precautions to protect the personal information that our customers give us in order to create their account This certification is an illustration of our dedication to security privacy and customer trust Lastly AWS respects the rights of our customers to have a choice in their use of the AWS platform The AWS Account Management Console and Customer Agreement are designed to ensure that every customer can stop using the AWS platform and export all their data at any time and for any reason This not only helps customers maintain control of their private AWS environment from creation to deletion but it also ensures that AWS must continuously work to earn and keep the trust of our customers Architecting for Compliance with dbGaP Security Best Practices in AWS A primary principle of the dbGaP security best practices is that researchers should download data to a secure computer or server and not to unsecured network drives or servers1 The remainder of the dbGaP security best practices can be broken into a set of three IT security control domains that you must address to ensure that you meet the primary principle: 1 http://wwwncbinlmnihgov/projects/gap/pdf/dbgap_2b_security_procedurespdf ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 6 of 17  Physical Security refers to both physical access to resources whether they are located in a data center or in your desk drawer and to remote administrative access to the underlying computational resources  Electronic Security refers to configuration and use of networks servers operating systems and applicationlevel resources that hold and analyze dbGaP data  Data Access Security refers to managing user authentication and authorization of access to the data how copies of the data are tracked and managed and having policies and processes in place to manage the data lifecycle Within each of these control domains are a number of control areas which are summarized in Table 1 Table 1 Summary of dbGaP Security Best Practices Control Domain Control Areas Physical Security Deployment Model Data Location Physical Server Access Portable Storage Media Electronic Security User Accounts Passwords and Access Control Lists Internet Networking and Data Transfers Data Encryption File Systems and Storage Volumes Operating Systems and Applications Auditing Logging And Monitoring Data Access Security Authorizing Access to Data Cleaning Up Data and Retaining Results The remainder of this paper focuses on the control areas involved in architecting for 
security and compliance in AWS Deployment Model A basic architectural consideration for dbGaP compliance in AWS is determining whether the system will run entirely on AWS or as a hybrid deployment with a mix of AWS and nonAWS resources This paper focus es on the control areas for the AWS resources If you are architecting for hybrid deployments you must also account for your nonAWS resources such as the local workstations you might download data to and from your AWS environment any institutional or external networks you connect to your AWS environment or any thirdparty applications you purchase and install in your AWS environment Data Location The AWS cloud is a globally available platform in which you can choose the geographic region in which your data is located AWS data centers are built in clusters in various global regions AWS calls these data center clusters Availability zones (AZs) As of December 2014 AWS ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 7 of 17 maintains 28 AZs organized into 11 regions globally As an AWS customer you can choose to use one region all regions or any combination of regions using builtin features available within the AWS Management Console AWS regions and Availability Zones ensure that if you have locationspecific requirements or regional data privacy policies you can establish and maintain your private AWS environment in the appropriate location You can choose to replicate and back up content in more than one region but you can be assured that AWS does not move customer data outside the region(s) you configure Physical Server Access Unlike traditional laboratory or institutional server systems where researchers install and control their applications and data directly on a specific physical server the applications and data in a private AWS account are decoupled from a specific physical server This decoupling occurs through the builtin features of the AWS Foundation Services layer (see Figure 1 Shared Responsibility Model ) and is a key attribute that differentiates the AWS cloud from traditional server systems or even traditional server virtualization Practically this means that every resource (virtual servers firewalls databases genomic data etc) within your private AWS environment is reduced to a single set of software files that are orchestrated by the Foundational Services layer across multiple physical servers Even if a physical server fails your private AWS resources and data maintain confidentiality integrity and availability This attribute of the AWS cloud also adds a significant measure of security because even if someone were to gain access to a single physical server they would not have access to all the files needed to recreate the genomic data within the your private AWS account AWS owns and operates its physical servers and network hardware in highlysecure state of theart data centers that are included in the scope of independent thirdparty security assessments of AWS for ISO 27001 Service Organization Controls 2 (SOC 2) NIST’s federal information system security standards and other security accreditations Physical access to AWS data centers and hardware is based on the least privilege principle and access is authorized only for essential personnel who have experience in cloud computing operating environments and who are required to maintain the physical environment When individuals are authorized to access a data center they are not given logical access to the servers within 
the data center When anyone with data center access no longer has a legitimate need for it access is immediately revoked even if they remain an employee of Amazon or Amazon Web Services Physical entry into AWS data centers is controlled at the building perimeter and ingress points by professional security staff who use video surveillance intrusion detection systems and other electronic means Authorized staff must pass twofactor authentication a minimum of two times to enter data center floors and all physical access to AWS data centers is logged monitored and audited routinely Portable Storage Media The decision to run entirely on AWS or in a hybrid deployment model has an impact on your system security plans for portable storage media Whenever data are downloaded to a portable device such as a laptop or smartphone the data should be encrypted and hardcopy printouts controlled When genomic data are stored or processed i n AWS customers can encrypt their ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 8 of 17 data but there is no portable storage media to consider because all AWS customer data resides on controlled storage media covered under AWS’s accredited security practices When controlled storage media reach the end of their useful life AWS procedures include a decommissioning and media sanitization process that is designed to prevent customer data from being exposed to unauthorized individuals AWS uses the techniques detailed in DoD 522022 M (“National Industrial Security Program Operating Manual” ) or NIST 800 88 (“Guidelines for Media Sanitization”) to destroy data as part of the decommissioning process All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industrystandard practices For more information see Overview of Security Processes 2 User Accounts Passwords and Access Control Lists Managing user access under dbGaP requirements relies on a principle of least privilege to ensure that individuals and/or processes are granted only the rights and permissions to perform their assigned tasks and functions but no more3 When you use AWS there are two types of user accounts that you must address :  Accounts with direct access to AWS resources and  Accounts at the operating system or application level Managing user accounts with direct access to AWS resources is centralized in a service called AWS Identity and Access Management (IAM) After you establish your root AWS account using the selfservice signup process you can use IAM to create and manage additional users and groups within your private AWS environment In adherence to the least privilege principle new users and groups have no permissions by default until you associate them with an IAM policy IAM policies allow access to AWS resources and support finegrained permissions allowing operationspecific access to AWS resources For example you can define an IAM policy that restricts an Amazon S3 bucket to readonly access by specific IAM users coming from specific IP addresses In addition to the users you define within your private AWS environment you can define IAM roles to grant temporary credentials for use by externally authenticated users or applications running on Amazon EC2 servers Within IAM you can assign users individual credentials such as passwords or access keys Multifactor authentication (MFA) provides an extra level of user account security by prompting users to enter an additional authentication code each time 
they log in to AWS dbGaP also requires that users not share their passwords and recommends that researchers communicate a written password policy to any users with permissions to controlled access data Additionally dbGaP recommends certain password complexity rules for file access IAM provides robust features to manage password complexity reuse and reset rules How you manage user accounts at the operating system or application level depends largely on which operating systems and applications you choose For example applications developed specifically for the AWS cloud might leverage IAM users and groups whereas you'll need to assess and plan the compatibility of thirdparty applications and operating systems with IAM on a case bycase basis You should always configure passwordenabled screen savers on any 2 http://mediaamazonwebservicescom/pdf/AWS_Security_Whitepaperpdf 3 http://wwwncbinlmnihgov/projects/gap/pdf/dbgap_2b_security_procedurespdf ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 9 of 17 local workstations that you use to access your private AWS environment and configure virtual server instances within the AWS cloud environment with OSlevel passwordenabled screen savers to provide an additional layer of protection More information on IAM is available in the IAM documentation and IAM Best Practices guide as well as on the MultiFactor Authentication page Internet Networking and Data Transfers The AWS cloud is a set of web services delivered over the Internet but data within each customer’s private AWS account is not exposed directly to the Internet unless you specifically configure your security features to all ow it This is a critical element of compliance with dbGaP security best practices and the AWS cloud has a number of builtin features that prevent direct Internet exposure of genomic data Processing genomic data in AWS typically involves the Amazon Elastic Compute Cloud (Amazon EC2) Amazon EC2 is a service you can use to create virtual server instances that run operating systems like Linux and Microsoft Windows When you create new Amazon EC2 instances for downloading and processing genomic data by default those instances are accessible only by authorized users within the private AWS account The instances are not discoverable or directly accessible on the Internet unless you configure them otherwise Additionally genomic data within an Amazon EC2 instance resides in the operating system ’s file directory which requires that you set OSspecific configurations before any data can be accessible outside of the instance When you need clusters of Amazon EC2 instances to process large volumes of data a Hadoop framework service called Amazon Elastic MapReduce (Amazon EMR) allows you to create multiple identical Amazon EC2 instances that follow the same basic rule of least privilege unless you change the configuration otherwise Storing genomic data in AWS typically involves object stores and file systems like Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Block Store (Amazon EBS) as well as database stores like Amazon Relational Database Service (Amazon RDS) Amazon Redshift Amazon DynamoDB and Amazon ElastiCache Like Amazon EC2 all of these storage and databases services default to least privilege access and are not discoverable or directly accessible from the Internet unless you configure them to be so Individual compute instances and storage volumes are the basic building blocks that researchers use to architect 
and build genomic data processing systems in AWS Individually these building blocks are private by default and networking them together within the AWS environment can provide additional layers of security and data protections Using Amazon Virtual Private Cloud (Amazon VPC) you can create private isolated networks within the AWS cloud where you retain complete control over the virtual network environment including definition of the IP address range creation of subnets and configuration of network route tables and network gateways Amazon VPC also offers stateless firewall capabilities through the use of Network Access Control Lists (NACLs) that control the source and destination network traffic endpoints and ports giving you robust security controls that are independent of the computational resources launched within Amazon VPC subnets In addition to the stateless firewalling capabilities of Amazon VPC NACLs Amazon EC2 instances and some services are launched within the context of AWS Security Groups Security groups define networklevel stateful firewall rules to protect computational resources at the Amazon EC2 instance or service ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 10 of 17 layer level Using security groups you can lock down compute storage or application services to strict subsets of resources running within an Amazon VPC subnet adhering to the principal of least privilege Figure 2 Protecting data from direct Internet access using Amazon VPC In addition to networking and securing the virtual infrastructure within the AWS cloud Amazon VPC provides several options for connecting to your AWS resources The first and simplest option is providing secure public endpoints to access resources such as SSH bastion servers A second option is to create a secure Virtual Private Network (VPN) connection that uses Internet Protocol Security (IPSec) by defining a virtual private gateway into the Amazon VPC You can use the connection to establish encrypted network connectivity over the Internet between an Amazon VPC and your institutional network Lastly research institutions can establish a dedicated and private network connection to AWS using AWS Direct Connect AWS Direct Connect lets you establish a dedicated highbandwidth (1 Gbps to 10 Gbps) network connection between your network and one of the AWS Direct Connect locations Using industry standard 8021q VLANs this dedicated connection can be partitioned into multiple virtual interfaces allowing you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space and private resources such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (Amazon VPC) using private IP space while maintaining network separation between the public and private environments You can reconfigure virtual interfaces at any time to meet your changing needs 1 1 2 2 3 3 dbGaP data in Amazon S3 bucket; accessible only by Amazon EC2 instance within VPC security group Amazon EC2 instance hosts Aspera Connect download software running within VPC security group Amazon VPC network configured with private subnet requiring SSH client VPN gateway or other encrypted connection Amazon S3 bucket w/ dbGaP data EC2 instance w / Aspera Connect ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 11 of 17 Using a combination of hosted and selfmanaged services you can take advantage of secure 
robust networking services within a VPC and secure connectivity with another trusted network To learn more about the finer details see our Amazon VPC whitepaper the Amazon VPC documentation and the Amazon VPC Connectivity Options Whitepaper Data Encrypti on Encrypting data intransit and at rest is one of the most common methods of securing controlled access datasets As an Internetbased service provider AWS understands that many institutional IT security policies consider the Internet to be an insecure communications medium and consequently AWS has invested considerable effort in the security and encryption features you need in order to use the AWS cloud platform for highly sensitive data including protected health information under HIPAA and controlled access genomic datasets from the National Institutes of Health (NIH) AWS uses encryption in three areas:  Service management traffic  Data within AWS services  Hardware security modules As an AWS customer you use the AWS Management Console to manage and configure your private environment Each time you use the AWS Management Console an SSL/TLS4 connection is made between your web browser and the console endpoints Service management traffic is encrypted data integrity is authenticated and the client browser authenticates the identity of the console service endpoint using an X509 certificate After this encrypted connection is established all subsequent HTTP traffic including data in transit over the Internet is protected within the SSL/TLS session Each AWS service is also enabled with application programming interfaces (APIs) that you can use to manage services either directly from applications or thirdparty tools or via Software Development Kits (SDK) or via AWS command line tools AWS APIs are web services over HTTPS and protect commands within an SSL/TLS encrypted session Within AWS there are several options for encrypting genomic data ranging from completely automated AWS encryption solutions (serverside) to manual clientside options Your decision to use a particular encryption model may be based on a variety of factors including the AWS service(s) being used your institutional policies your technical capability specific requirements of the data use certificate and other factors A s you architect your systems for controlled access datasets it’s important to identify each AWS service and encryption model you will use with the genomic data There are three different models for how you and/or AWS provide the encryption method and work with the key management infrastructure (KMI) as illustrated in Figure 3 4 Secure Sockets Layer (SSL)/Transport Layer Security (TLS) ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 12 of 17 Customer Managed AWS Managed Model A Researcher manages the encryption method and entire KMI Model B Researcher manages the encryption method; AWS provides storage component of KMI while researcher provides management layer of KMI Model C AWS manages the encryption method and the entire KMI Figure 2 Encryption Models in AWS In addition to the clientside and serverside encryption features builtin to many AWS services another common way to protect keys in a KMI is to use a dedicated storage and data processing device that performs cryptographic operations using keys on the devices These devices called hardware security modules (HSMs) typically provide tamper evidence or resistance to protect keys from unauthorized use For researchers who choose to use AWS encryption 
capabilities for your controlled access datasets the AWS CloudHSM service is another encryption option within your AWS environment giving you use of HSMs that are designed and validated to government standards (NIST FIPS 140 2) for secure key management If you want to manage the keys that control encryption of data in Amazon S3 and Amazon EBS volumes but don’t want to manage the needed KMI resources either within or external to AWS you can leverage the AWS Key Management Service (AWS KMS) AWS Key Management Service is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data and uses HSMs to protect the security of your keys AWS Key Management Service is integrated with other AWS services including Amazon EBS Amazon S3 and Amazon Redshift AWS Key Management Service is also integrated with AWS CloudTrail discussed later to provide you with logs of all key usage to help meet your regulatory and compliance needs AWS KMS also allows you to implement key creation rotation and usage policies AWS KMS is designed so that no one has access to your master keys The service is built on systems that are designed to protect your master keys with extensive hardening techniques such as never storing plaintext master keys on disk not persisting them in memory and limiting which systems can connect to the device All access to update software on the service is controlled by a multilevel approval process that is audited and reviewed by an independent group within Amazon KMI Encryption Method KMI Encryption Method KMI Encryption Method Key Storage Key Management Key Storage Key Management Key Storage Key Management ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 13 of 17 As mentioned in the Internet Network and Data Transfer section of this paper you can protect data transfers to and from your AWS environment to an external network with a number of encryptionready security features such as VPN For more information about encryption options within the AWS environment see Securing Data at Rest with Encryption as well as the AWS CloudHSM product details page To learn more about how AWS KMS works you can read the AWS Key Management Service whitepaper5 File Systems and Storage Volumes Analyzing and securing large datasets like whole genome sequences requires a variety of storage capabilities that allow you to make use of that data Within your private AWS account you can configure your storage services and security features to limit access to authorized users Additionally when research collaborators are authorized to access the data you can configure your access controls to safely share data between your private AWS account and your collaborator’s private AWS account When saving and securing data within your private AWS account you have several options Amazon Web Services offers two flexible and powerful storage options The first is Amazon Simple Storage Service (Amazon S3) a highly scalable webbased object store Amazon S3 provides HTTP/HTTPS REST endpoints to upload and download data objects in an Amazon S3 bucket Individual Amazon S3 objects can range from 1 byte to 5 terabytes Amazon S3 is designed for 9999% availability and 99999999999% object durability thus Amazon S3 provides a highly durable storage infrastructure designed for missioncritical and primary data storage The service redundantly stores data in multiple data centers within the Region you designate and Amazon S3 calculates checksums 
on all network traffic to detect corruption of data packets when storing or retrieving data. Unlike traditional systems, which can require laborious data verification and manual repair, Amazon S3 performs regular, systematic data integrity checks and is built to be automatically self-healing.

Amazon S3 provides a base level of security whereby, by default, only bucket and object owners have access to the Amazon S3 resources they create. In addition, you can write security policies to further restrict access to Amazon S3 objects. For example, dbGaP recommendations call for all data to be encrypted while the data are in flight. With an Amazon S3 bucket policy you can restrict an Amazon S3 bucket so that it only accepts requests using the secure HTTPS protocol, which fulfills this requirement. Amazon S3 bucket policies are best used to define broad permissions across sets of objects within a single bucket; the preceding examples of restricting the allowed protocols or source IP ranges are indicative of best practices. For data that need more variable permissions based on who is trying to access them, IAM user policies are more appropriate. As discussed previously, IAM enables organizations with multiple employees to create and manage multiple users under a single AWS account, and with IAM user policies you can grant these IAM users fine-grained control over your Amazon S3 bucket or the data objects contained within it.

Amazon S3 is a great tool for genomics analysis and is well suited to analytical applications that are purpose-built for the cloud. However, many legacy genomic algorithms and applications cannot work directly with files stored in an HTTP-based object store like Amazon S3 and instead need a traditional file system. In contrast to the Amazon S3 object-based storage approach, Amazon Elastic Block Store (Amazon EBS) provides network-attached storage volumes that can be formatted with traditional file systems. This means that a legacy application running in an Amazon EC2 instance can access genomic data in an Amazon EBS volume as if that data were stored locally on the Amazon EC2 instance.

5 https://d0.awsstatic.com/whitepapers/KMS-Cryptographic-Details.pdf

Additionally, Amazon EBS offers whole-volume encryption without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted Amazon EBS volume and attach it to a supported instance type, data stored at rest on the volume, disk I/O, and snapshots created from the volume are all encrypted. The encryption occurs on the servers that host Amazon EC2 instances, providing encryption of data in transit from Amazon EC2 instances to Amazon EBS storage. Amazon EBS encryption uses AWS Key Management Service (AWS KMS) Customer Master Keys (CMKs) when creating encrypted volumes and any snapshots created from your encrypted volumes. The first time you create an encrypted Amazon EBS volume in a region, a default CMK is created for you automatically. This key is used for Amazon EBS encryption unless you select a CMK that you created separately using AWS Key Management Service. Creating your own CMK gives you more flexibility, including the ability to create, rotate, and disable keys, define access controls, and audit the encryption keys used to protect your data. For more information, see the AWS Key Management Service Developer Guide.
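To make the two controls just described concrete, the sketch below uses the AWS SDK for Python (boto3) to attach a bucket policy that rejects any request not made over HTTPS, and to create an EBS volume encrypted with a customer-managed CMK. The bucket name, key ARN, Availability Zone, and volume size are placeholders, and the policy is a minimal illustration rather than a complete dbGaP-ready configuration.

```python
import json
import boto3

BUCKET = "example-dbgap-staging-bucket"  # placeholder bucket name
KMS_KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder CMK

s3 = boto3.client("s3")
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Deny every request that does not arrive over HTTPS (aws:SecureTransport == false),
#    which enforces encryption of data in flight for this bucket.
https_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(https_only_policy))

# 2. Create an EBS volume encrypted at rest with a customer-managed CMK;
#    snapshots taken from this volume inherit the same encryption.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,              # GiB; size to the working set of the analysis
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId=KMS_KEY_ID,
)
print("Created encrypted volume:", volume["VolumeId"])
```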
There are three options for Amazon EBS volumes:

• Magnetic volumes are backed by magnetic drives and are ideal for workloads where data are accessed infrequently and scenarios where the lowest storage cost is important.

• General Purpose (SSD) volumes are backed by solid-state drives (SSDs) and are suitable for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes.

• Provisioned IOPS (SSD) volumes are also backed by SSDs and are designed for applications with I/O-intensive workloads such as databases. Provisioned IOPS volumes offer storage with consistent, low-latency performance and support up to 30 IOPS per GB, which enables you to provision 4,000 IOPS on a volume as small as 134 GB. You can also achieve up to 128 MB/s of throughput per volume with as little as 500 provisioned IOPS. Additionally, you can stripe multiple volumes together to achieve up to 48,000 IOPS or 800 MB/s when attached to larger Amazon EC2 instances.

While general-purpose Amazon EBS volumes represent a great value in terms of performance and cost and can support a diverse set of genomics applications, you should choose which Amazon EBS volume type to use based on the particular algorithm you are going to run. A benefit of scalable, on-demand infrastructure is that you can provision a diverse set of resources, each tuned to a particular workload.

For more information on the security features available in Amazon S3, see the Access Control and Using Data Encryption topics in the Amazon S3 Developer Guide. For an overview of security on AWS, including Amazon S3, see Amazon Web Services: Overview of Security Processes. For more information about Amazon EBS security features, see Amazon EBS Encryption and Amazon Elastic Block Store (Amazon EBS).

Operating Systems and Applications

Recipients of controlled-access data need their operating systems and applications to follow predefined configuration standards. Operating systems should align with standards such as
detection software that regularly scans and detects potential data intrusions Within the AWS ecosystem you have the option to use builtin monitoring tools such as Amazon CloudWatch as well as a rich partner ecosystem of security and monitoring software specifically built for AWS cloud services The AWS Partner Network lists a variety of system integrators and software vendors that can help you meet security and compliance requirements For more information see the AWS Life Science Partner webpage6 Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS You can use Amazon CloudWatch to collect and track metrics collect and monitor log files and set alarms Amazon CloudWatch provides performance metrics on the individual resource level such as Amazon EC2 instance CPU load and network IO and sets up thresholds on these metrics to raise alarms when the threshold is passed For example you can set an alarm to detect unusual spikes in network traffic from an Amazon EC2 instance that may be an indication of a compromised system CloudWatch alarms can integrate with other AWS services to send the alerts simultaneous ly to multiple destinations Example methods and destinations might include a message queue in Amazon Simple Queuing Service (Amazon SQS) which is continuously monitored by watchdog processes that will automatically quarantine a system; a mobile text message to security and operations staff that need to react to immediate threats; an email to security and compliance teams who audit the event and take action as needed Within Amazon CloudWatch you can also define custom metrics and populate these with whatever information is useful even outside of a security and compliance requirement For instance an Amazon CloudWatch metric can monitor the size of a data ingest queue to trigger 6 http://awsamazoncom/partners/competencies/lifesciences/ ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 16 of 17 the scaling up (or down) of computational resources that process data to handle variable rates of data acquisition AWS CloudTrail and AWS Config are two services that enable you to monitor and audit all of the operations against th e AWS product API’s AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you The recorded information includes the identity of the API caller the time of the API call the source IP address of the API caller the request parameters and the response elements returned by the AWS service With AWS CloudTrail you can get a history of AWS API calls for your account including API calls made via the AWS Management Console AWS SDKs command line tools and hig herlevel AWS services (such as AWS CloudFormation) The AWS API call history produced by AWS CloudTrail enables security analysis resource change tracking and compliance auditing AWS Config builds upon the functionality of AWS CloudTrail and provides you with an AWS resource inventory configuration history and configuration change notifications to enable security and governance With AWS Config you can discover existing AWS resources export a complete inventory of your AWS resources with all configuration details and determine how a resource was configured at any point in time These capabilities enable compliance auditing security analysis resource change tracking and troubleshooting Lastly AWS has implemented various methods of external communication to support all customers in the 
event of security or operational issues that may impact our customers Mechanisms are in place to allow the customer support team to be notified of operational and security issues that impact each customer’s account The AWS incident management team employs industrystandard diagnostic procedures to drive resolution during businessimpacting events within the AWS cloud platform The operational systems that support the platform are extensively instrumented to monitor key operational metrics and alarms are configured to automatically notify operations and management personnel when early warning thresholds are cross ed on those key metrics Staff operators provide 24 x 7 x 365 coverage to detect incidents and to manage their impact and resolution An oncall schedule is used so that personnel are always available to respond to operational issues Authorizing Access to Data Researchers using AWS in connection with controlled access datasets must only allow authorized users to access the data Authorization is typically obtained either by approval from the Data Access Committee (DAC) or within the terms of the researcher’s existing Data Use Certification ( DUC) Once access is authorized you can grant that access in one or more ways depending on where the data reside and where the collaborator requiring access is located The scenarios below cover the situations that typically arise:  Provide the collaborator access within an AWS account via an IAM user (see User Accounts Passwords and Access Control Lists )  Provide the collaborator access to their own AWS accounts (see File Systems Storage Volumes and Databases )  Open access to the AWS environment to an external network (see Internet Networking and Data Transfers ) ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 17 of 17 Cleaning U p Data and Retaining Results Controlledaccess datasets for closed research projects should be deleted upon project close out and only encrypted copies of the minimum data needed to comply with institutional policies should be retained In AWS deletion and retention operations on data are under the complete control of a researcher You might opt to replicate archived data to one or more AWS regions for disaster recovery or highavailability purposes but you are in complete control of that process As it is for onpremises infrastructure data provenance7 is the sole responsibility of the researcher Through a combination of data encryption and other standard operating procedures such as resource monitoring and security audits you can comply with dbGaP security recommendations in AWS With respect to AWS storage services after Amazon S3 data objects or Amazon EBS volumes are deleted removal of the mapping from the public name to the object starts immediately and is generally processed across the distributed system within several seconds After the mapping is removed there is no remote access to the deleted object The underlying storage area is then reclaimed for use by the system Conclusion The AWS cloud platform provides a number of important benefits and advantages to genomic researchers and enables them to satisfy the NIH security best practices for controlled access datasets While AWS delivers these benefits and advantages through our services and features researchers are still responsible for properly building using and maintaining the private AWS environment to help ensure the confidentiality integrity and availability of the controlled access datasets they manage 
Using the practices in this whitepaper we encourage you to build a set of security policies and processes for your organization so you can deploy applications using controlled access data quickly and securely Notices © 2014 Amazon Web Services Inc or its affiliates All rights reserved This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS it s affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers 7 The process of tracing and recording the origins of data and its movement between databases
General
Lambda_Architecture_for_Batch_and_Stream_Processing
Lambda Architecture for Batch and Stream Processing October 2018 This paper has been archived For the latest technical content about Lambda architecture see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers Archived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its c ustomers Archived Contents Introduction 1 Overview 2 Data Ingestion 3 Data Transformation 4 Data Analysis 5 Visualization 6 Security 6 Getting Started 7 Conclusion 7 Contributors 7 Further Reading 8 Document Revisions 8 Archived Abstract Lambda architecture is a data processing design pattern to handle massive quantities of data and integrate batch and real time processing within a single framework (Lambda architecture is distinct from and should not be confused with the AWS Lambda comput e service ) This paper covers the building blocks of a unified architectural pattern that unifies stream (real time) and batch proces sing After reading this paper you should have a good idea of how to set up and deploy the components of a typical Lambda architecture on AWS This white paper is intended for Amazon Web Services (AWS) Partner Network (APN) members IT infrastructure decision makers and administrators ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 1 Introduction When processing large amounts of semi structured data there is usually a delay between the point when data is collected and its availability in reports and dashboards Often the delay results from the need to validate or at least identify granular data I n some cases however being able to react immediately to new data is more important than being 100 percent certain of the data’s validity The AWS services frequently used to analyze large volumes of data are Amazon EMR and Amazon Athena For ingesting and processing s tream or real time data AWS services like Amazon Kinesis Data Streams Amazon Kinesis Data Firehose Amazon Kinesis Data Analytics Spark Streaming and Spark SQL on top of an Amazon EMR cluster are widely used Amazon Simple Storage Servic e (Amazon S3) forms the backbone of such architectures providing the persistent object storage layer for the AWS compute service Lambda a rchitecture is an approach that mixes both batch and stream (real time) data processing and makes the combined data available for downstream analysis or viewing via a serving layer It is divided into three layers: the batch layer serving layer and speed layer Figure 1 shows the b atch layer (batch processing) serving layer (merged serving layer) and speed layer (stream processing) In Figure 1 data is sent both to the batch layer and to the speed layer (stream processing) In the batch layer new data is appended to the master data set It 
consists of a set of records containing information that cannot be derived from the existing data. It is an immutable, append-only dataset. This process is analogous to extract, transform, and load (ETL) processing. The results of the batch layer are called batch views and are stored in a persistent storage layer.

The serving layer indexes the batch views produced by the batch layer. It is a scalable data store that swaps in new batch views as they become available. Due to the latency of the batch layer, the results from the serving layer are out of date.

Figure 1: Lambda Architecture

The speed layer compensates for the high latency of updates to the serving layer from the batch layer. The speed layer processes data that has not yet been processed by the last batch of the batch layer and produces the real-time views that are always up to date. The speed layer is responsible for creating real-time views that are continuously discarded as data makes its way through the batch and serving layers.

Queries are resolved by merging the batch and real-time views. Recomputing data from scratch helps if the batch or real-time views become corrupted, because the main dataset is append-only and it is easy to restart and recover from an unstable state. The end user can always query the latest version of the data, which is available from the speed layer.

Overview

This section provides an overview of the various AWS services that form the building blocks for the batch, serving, and speed layers of Lambda architecture. Each of the layers can be built using various analytics, streaming, and storage services available on the AWS platform.

Figure 2: Lambda Architecture Building Blocks on AWS

The batch layer consists of the landing Amazon S3 bucket for storing all of the data (for example, clickstream, server, and device logs) that is dispatched from one or more data sources. The raw data in the landing bucket can be extracted and transformed into a batch view for analytics using AWS Glue, a fully managed ETL service on the AWS platform. Data analysis is performed using services like Amazon Athena, an interactive query service, or a managed Hadoop framework using Amazon EMR. Using Amazon QuickSight, customers can also perform visualization and one-time analysis.

The speed layer can be built by using the following three options available with Amazon Kinesis:

• Kinesis Data Streams and the Kinesis Client Library (KCL) – Data from the data source can be continuously captured and streamed in near real time using Kinesis Data Streams. With the Kinesis Client Library (KCL), you can build your own application that preprocesses the streaming data as it arrives and emits the data for generating incremental views and downstream analysis.

• Kinesis Data Firehose – As data is ingested in real time, customers can use Kinesis Data Firehose to easily batch and compress the data to generate incremental views. Kinesis Data Firehose also allows customers to execute custom transformation logic using AWS Lambda before delivering the incremental view to Amazon S3.

• Kinesis Data Analytics – This service provides the easiest way to process the data that is streaming through Kinesis Data Streams or Kinesis Data Firehose using SQL. This enables customers to gain actionable insight in near real time from the incremental stream before storing it in Amazon S3.
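The speed-layer options above assume a producer writing events into a Kinesis data stream. As a minimal illustration (not part of the reference architecture), the following boto3 sketch publishes clickstream-style JSON events; the stream name and record fields are placeholders, and a production producer would typically use the Kinesis Producer Library or the Kinesis Agent instead.

```python
import json
import time
import uuid
import boto3

STREAM_NAME = "clickstream-events"  # assumed stream; create it beforehand with the desired shard count

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_event(event: dict) -> None:
    """Write one event to the stream; the partition key spreads records across shards."""
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["session_id"],
    )

# Emit a few sample events; a real producer would batch records and handle retries.
for page in ("/home", "/search", "/checkout"):
    publish_event({
        "session_id": str(uuid.uuid4()),
        "page": page,
        "event_time": int(time.time() * 1000),
    })
```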
Finally, the serving layer can be implemented with Spark SQL on Amazon EMR to process the data in the Amazon S3 bucket from the batch layer, and Spark Streaming on an Amazon EMR cluster, which consumes data directly from Amazon Kinesis streams, to create a view of the entire dataset that can be aggregated, merged, or joined. The merged dataset can be written to Amazon S3 for further visualization. Both of these components are part of the same code base, which can be invoked as required, reducing the overhead of maintaining multiple code bases. The metadata (for example, table definitions and schemas) associated with the processed data is stored in the AWS Glue Data Catalog to make the data in the batch view immediately available for queries by downstream analytics services in the batch layer. Customers can use a Hadoop-based stream processing application for analytics, such as Spark Streaming on Amazon EMR.

Data Ingestion

The data ingestion step comprises data ingestion by both the speed and batch layers, usually in parallel. For the batch layer, historical data can be ingested at any desired interval. For the speed layer, fast-moving data must be captured as it is produced and streamed for analysis. The data is immutable and time-tagged or time-ordered. Examples of high-velocity data include log collection, website clickstream logging, social media streams, and IoT device event data. This fast data is captured and ingested as part of the speed layer using Amazon Kinesis Data Streams, the recommended service for ingesting streaming data into AWS. Kinesis offers key capabilities to cost-effectively process and durably store streaming data at any scale. Customers can use the Amazon Kinesis Agent, a pre-built application, to collect and send data to an Amazon Kinesis stream, or use the Amazon Kinesis Producer Library (KPL) as part of a custom application. For batch ingestion, customers can use AWS Glue or AWS Database Migration Service to read from source systems such as relational databases, data warehouses, and NoSQL databases.

Data Transformation

Data transformation is a key step in the Lambda architecture, where the data is manipulated to suit downstream analysis. The raw data ingested into the system in the previous step is usually not conducive to analytics as is. The transformation step involves data cleansing, which includes deduplication, incomplete-data management, and attribute standardization. It also involves changing the data structures where necessary, usually into an OLAP model, to facilitate easy querying of the data. AWS Glue, Amazon EMR, and Amazon S3 form the set of services that allow users to transform their data. Kinesis Data Analytics enables users to get a view into their data stream in real time, which makes downstream integration with batch data easy. Let's dive deeper into data transformation and look at the steps involved:

1. The data ingested via the batch mechanism is put into an S3 staging location. This data is a true copy of the source, with little to no transformation.

2. The AWS Glue Data Catalog is updated with the metadata of the new files. The Glue Data Catalog can integrate with Amazon Athena and Amazon EMR and forms a central metadata repository for the data.

3. An AWS Glue job is used to transform the data and store it in a new S3 location for integration with real-time data. AWS Glue provides many canned transformations, but if you need to write your own transformation logic, AWS Glue also supports custom scripts.

4. Users can easily query data on Amazon S3 using Amazon Athena (a minimal query sketch appears at the end of this section). This helps ensure that no unwanted data elements reach the downstream bucket. Getting a view of source data up front allows development of more targeted metrics; designing analytical applications without a view of the source data, or getting a very late view into it, can be risky. Because Amazon Athena uses a schema-on-read approach instead of schema-on-write, it allows users to query the data as is and eliminates that risk.

5. Amazon Athena integrates with Amazon QuickSight, which allows users to build reports and dashboards on the data.

6. For real-time ingestion, the data transformation is applied to a window of data as it passes through the stream and is analyzed iteratively as it arrives. Amazon Kinesis Data Streams, Kinesis Data Firehose, and Kinesis Data Analytics allow you to ingest, analyze, and deliver real-time data into storage platforms like Amazon S3 for integration with batch data. Kinesis Data Streams interfaces with Spark Streaming, which runs on an Amazon EMR cluster, for further manipulation. Kinesis Data Analytics allows you to run analytical queries on the data stream in real time, which gives you a view into the source data and lets you make sure it aligns with what is expected from the dataset.

By following the preceding steps, you can create a scalable data transformation platform on AWS. It is also important to note that AWS Glue, Amazon S3, Amazon Athena, and Amazon Kinesis are serverless services. By using these services in the transformation step of the Lambda architecture, you remove the overhead of maintaining servers and scaling them when the volume of data to transform increases.
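As a concrete illustration of step 4 above, the following sketch uses boto3 to run an ad hoc Athena query against a table registered in the Glue Data Catalog and print the result. The database, table, query, and output location are assumptions made for this example; substitute the names produced by your own crawler or Glue job.

```python
import time
import boto3

# Assumed names; replace with your Glue database/table and a bucket you own.
DATABASE = "clickstream_staging"
QUERY = "SELECT page, COUNT(*) AS views FROM events GROUP BY page ORDER BY views DESC LIMIT 10"
OUTPUT_LOCATION = "s3://example-athena-results/lambda-architecture/"

athena = boto3.client("athena", region_name="us-east-1")

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
)
query_id = execution["QueryExecutionId"]

# Poll until Athena finishes; production code would add a timeout and surface error details.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row holds the column headers
        page, views = (col.get("VarCharValue", "") for col in row["Data"])
        print(page, views)
else:
    print("Query finished with state:", state)
```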
easily query data on Amazon S3 using Amazon Athena This helps in making sure there are no unwanted data elements that get into the downstream bucket Getting a view of source data upfront allows development of more targeted metrics Designing analytical applications without a view of source data or getting a very late view into the source data could be risky Since Amazon Athena uses a schema onread approach instead of a schema onwrite it allows users to query data as is and eliminates the risk 5 Amazon Athena integrates with Amazon Quick Sight which allows users to build reports and dashboards on the data 6 For the real time ingestions the data transformation is applied on a window of data as it pass es through the steam and analyzed iteratively as it comes into the stream Amazon Kinesis Data Streams Kinesis Data Firehose and Kinesis Data Analytics allow you to ing est analyze and dump real time data into storage platforms like Amazon S3 for integration with batch data Kinesis Data Streams interfaces with Spark ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 5 streaming which is run on an Amazon EMR cluster for further manipulation Kinesis Data A nalytics allow s you to run analytical queries on the data stream in real time which allows you to get a view into the source data and make sure aligns with what is expected from the dataset By following the preceding steps you can create a scalable data transformatio n platform on AWS It is also important to note that Amazon Glue Amazon S3 Amazon Athena and Amazon Kinesis are serverless services By using these services in the transformation step of the Lambda architecture we can remove the overhead of maintaining servers and scaling them when the volume of data to transform increases Data Analysis In this phase you apply your query to analyze data in the three layers : • Batch Layer – The data source for batch analytics could be the raw master data set directly or the aggregated batch view from the serving layer The focus of this layer is to increase the accuracy of analysis by querying a comprehensive dataset across multiple or all dimensions and all available data sources • Speed Layer – The focus of the analysis in this layer is to analyze the incoming streaming data in near real time and to react immediately based on the analyzed result within accepted levels of accuracy • Serving Layer – In this layer the merged query is aimed at joining and analy zing the data from both the batch view from the batch layer and the incremental stream view from the speed layer This suggested architecture on the AWS platform includes Amazon Athena for the batch layer and Amazon Kinesis Data Analytics for the speed layer For the serving layer we recommend using Spark Streaming on an Amazon EMR cluster to consume the data fr om Amazon Kinesis Data S treams from the speed layer and using Spark SQL on an Amazon EMR cluster to consume data from Amazon S3 in the b atch layer Both of these components are part of the same code base which can be invoked as required thus reducing the overhead of maintaining multiple code bases The sample code that follows highlights using Spark SQL and Spark streaming to join data from both batch and speed layer s ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 6 Figure 2: Sample Code Visualization The final step in the Lambda architecture workflow is metrics visualization The visualization layer receives data from the batch stream and the combined 
serving layer The purpose of this layer is to provide a unified view of the analysis metrics that were derived from the data analysis step Batch Layer: The output of the analysis metrics in the batch layer is generated by Amazon Athena Amazon QuickSight integrates with Amazon Athena to generate dashboards that can be used for visualizations Customers also have a choice of using any other BI tool that supports JDBC/ODBC connectivity These tools can be connected to Amazon Athena to visualize batch layer metrics Stream Layer: Amazon Kinesis Data Analytics allows users to build custom analytical metrics that change based on real time streaming data Customers can use Kinesis Data A nalytics to build near realtime dashboards for metrics analyzed in the streaming layer Serving Layer: The combined dataset for batch and stream metrics are stored in the serving layer in an S3 bucket This unified view of the data is available for customers to download or connect to a reporting tool like Amazon QuickSight to create dashboards Security As part of the AWS Shared Responsibility M odel we recommend customers use the AWS security best practices and features to build a highly secure platform to run Lambda architecture on AWS Here are some points to keep in mind from a security perspective: • Encrypt end to end The architecture proposed here makes use of services that support encryption Make use of the native encryption features of the service whenever possible The server side encryption (SSE) is the least disruptive way to ArchivedAmazon Web Services – Lambda Architecture for Batch and Stream Processing on AWS Page 7 encrypt your data on AWS and allows you to integrate encryption features into your existing workflows without a lot of code changes • Follow the rule of minimal access when working with policies Identity and access management (IAM) policies can be made very granular to allow customers to create restrictive resource level policies This concept is also exte nded to S3 bucket policies Moreover customers can use S3 object level tags to allow or deny actions at the object level Make use of these capabilities to ensure the resources in AWS are used securely • When working with AWS services make use of IAM role instead of embedding AWS credentials • Have an optimal networking architecture in place by carefully considering the security groups a ccess control lists (ACL) and routing tables that exist in the Amazon Virtual Private Cloud (Amazon VPC ) Resources that do not need access to the internet should not be in a public subnet Resources that require only outbound internet access should make use of the n etwork address translation (NAT) gateway to allow outbound traffic Communication to Amazon S3 from within th e Amazon VPC should make use of the VPC endpoint for Amazon S3 or a AWS private link Getting Started Refer to the AWS Big Data blog post Unite Real Time and Batch Analytics Using the Big Data Lambda Architecture Without Servers! 
which provides a walkthrough of how you can use AWS services to build an end-to-end Lambda architecture.

Conclusion
The Lambda architecture described in this paper provides the building blocks of an architectural pattern that unifies stream (real-time) and batch processing within a single code base. Through the Spark Streaming and Spark SQL APIs, you implement your business logic once and then reuse that code in a batch ETL process as well as in real-time streaming processes. In this way, you can quickly implement a real-time layer to complement the batch processing layer. In the long term, this architecture reduces your maintenance overhead and the risk of errors resulting from duplicate code bases.

Contributors
The following individuals and organizations contributed to this document:
• Rajeev Srinivasan, Solutions Architect, Amazon Web Services
• Ujjwal Ratan, Solutions Architect, Amazon Web Services

Further Reading
For additional information, see the following:
• AWS Whitepapers
• Data Lakes and Analytics on AWS

Document Revisions
October 2018: Update
May 2015: First publication
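To make the single-code-base idea from the conclusion concrete, the following minimal sketch (not taken from the paper) shows one transformation function shared by a Spark SQL batch job and a Spark Structured Streaming job. The bucket names, paths, and column names are illustrative assumptions, and a file-based stream stands in for the Amazon Kinesis source you would use on an Amazon EMR cluster.

```python
"""Hedged sketch: one business-logic function reused by the batch and speed layers."""
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lambda-shared-logic").getOrCreate()

def enrich_clickstream(df: DataFrame) -> DataFrame:
    """Business logic shared by both layers: daily page-view counts."""
    return (df
            .withColumn("event_date", F.to_date("event_time"))
            .groupBy("event_date", "page")
            .agg(F.count("*").alias("views")))

# Batch layer: historical data already landed in S3 (for example by AWS Glue or AWS DMS).
batch_df = spark.read.json("s3://example-bucket/staging/clickstream/")
enrich_clickstream(batch_df).write.mode("overwrite") \
    .parquet("s3://example-bucket/batch-view/clickstream/")

# Speed layer: the same function applied to a stream. In a real deployment the
# source would be Amazon Kinesis via the EMR connector; a file stream keeps the
# sketch self-contained.
stream_df = (spark.readStream
             .schema(batch_df.schema)
             .json("s3://example-bucket/incoming/clickstream/"))
query = (enrich_clickstream(stream_df)
         .writeStream.outputMode("complete")
         .format("memory").queryName("speed_view")
         .start())
```

Because both layers call the same `enrich_clickstream` function, any change to the business logic is made once and applies to the batch view and the speed view alike.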
General
Extend_Your_IT_Infrastructure_with_Amazon_Virtual_Private_Cloud
ArchivedExtend Your IT Infrastructure with Amazon Virtual Private Cloud December 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/Archived © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Notices 2 Contents 3 Abstract 4 Introduction 1 Understanding Amazon Virtual Private Cloud 1 Different Levels of Network Isolation 2 Example Scenarios 7 Host a PCI Compliant E Commerce Website 7 Build a Development and Test Environment 8 Plan for Disaster Recovery and Business Continuity 10 Extend Your Data Center into the Cloud 10 Create Branch Office and Business Unit Networks 12 Best Practices for Using Amazon VPC 13 Automate the Deployment of Your Infrastructure 14 Use Multi AZ Deployments in VPC for High Availability 14 Use Security Groups and Network ACLs 15 Control Access with IAM Users and Policies 15 Use Amazon CloudWatch to Monitor the Health of Your VPC Instances and VPN Link 16 Conclusion 17 Further Reading 17 Document Revisions 18 Archived Abstract Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network you define This paper provides an overview of how you can connect an Amazon V PC to your existing IT infrastructure while meeting security and compliance requirements This allows you to access AWS resources as though they are a part of your existing networkArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 1 Introduction With Amazon Virtual Private Cloud (Amazon VPC) you can provision a private isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define With Amazon VPC you can define a virtual network topology that closely resembles a traditional network th at you might operate in your own data center You have complete control over your virtual networking environment including selection of your own IP v4 address range creation of subnets and configuration of route tables and network gateways For example with VPC you can: • Expand the capacity of existing on premises infrastructure • Launch a backup stack of your environment for disaster recovery purposes • Launch a Payment Card Industry Data Security Standard (PCI DSS) compliant website that accepts secure pa yments • Launch isolated development and testing environments • Serve virtual desktop applications within your corporate network In a traditional approach to these use cases you would need a lot of upfront investment to build your own data center provision the required hardware acquire the necessary security certifications hire 
system administrators and keep everything running With VPC on AWS you have little upfront investment and you can scale your infrastructure in or out as necessary You get all of the benefits of a secure environment at no extra cost; AWS security controls certifications accreditations and features me et the security criteria required by some of the most discerning and security conscious customers in large enterprise as well as governmental agencies For a full list of certifications and accreditations see the AWS Compliance Center This paper highlights common use cases and best practices for Amazon VPC and related services Understanding Amazon Virtual Private Cloud Amazon VPC is a secure private and isolated section of the AWS cloud where you can launch AWS resources in a virtual network topology that you define When you create a VPC you provide a set of private IP v4 addresses that you want instances in your VPC to use You specify this set of addresses in the form of a Classless Inter Domain ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 2 Routing (CIDR) block for example 10000/16 You can assign block sizes of between /28 (16 IP v4 addresses) and /16 (65536 IP v4 addresses) You can also add a set of IPv6 addresses to your VPC IPv6 addresses are allocated from an Amazon owned range of add resses and the VPC receives a /56 (more than 1021 IPv6 addresses) In Amazon VPC each Amazon Elastic Compute Cloud (Amazon EC2) instance has a default network interface that is assigned a primary private IP address on your Amazon VPC network You can cre ate and attach additional elastic network interfaces (ENI) to any Amazon EC2 instance in your VPC Each ENI has its own MAC address It can have multiple IPv6 or private IP v4 addresses and it can be assigned to a specific security group The total number of supported ENIs and private IP addresses per instance depends on the instance type The ENIs can be created in different subnets within the same Availability Zone a nd attached to a single instance to build for example a low cost management network or network and security appliances The secondary ENIs and private IP addresses can be moved within the same subnet to other instances for lowcost high availability sol utions To each private IP v4 address you can associate a public elastic IP v4 address to make the instance reachable from the Internet IPv6 addresses are the same whether inside the VPC or on the public Internet (if the subnet is public ) You can also con figure your Amazon EC2 instance to be assigned a public IPv4 address at launch Public IP v4 addresses are assigned to your instances from the Amazon pool of public IP v4 addresses; they are not associated with your account With support for multiple IPv6 addresses private IPv4 addresses and Elastic IP addresse s you can among other things use multiple SSL certificates on a single server and associate each certificate with a specific IP address There are some default limits on the number of compon ents you can deploy in your VPC as documented in Amazon VPC Limits To request an increase in any of these limits fill out the Amazon VPC Limits form Different Levels of Network Isolation You can set up your VPC subnets as public private or VPN only In order to set up a public subnet you have to configure its routing table so that traffic from that subnet to the Internet is routed through an Internet gateway associated with the VPC as shown in Figure 1 By assigning EIP addresses to instances in that subnet you can make them 
reachable from the Internet over IPv4 as well It is a best prac tice to restrict both ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 3 ingress and egress traffic for these instances by leveraging stateful security group rules for your instances You can also use network a ddress translation ( NAT ) gateways (for IPv4 traffic) and egress only gateways (for IPv6 traffic) on private subnets to enable them to reach Internet addresses without allowing inbound traffic Stateless network filtering can also be applied for each subnet by setting up network access control lists (ACLs) for the subnet Figure 1: Example of a VPC with a public subnet only For private subnets traffic to the Internet can be routed through a NAT gateway or NAT instance with a public EIP that resides in a public subnet This configuration allows your resources in the private subnet to connect outbound traffic to the Internet without allocating Elastic IP addresse s or accepting direct inbound conne ctions AWS provides a managed NAT gateway or you can use your own Amazon EC2 based NAT appliance Figure 2 shows an example of a VPC with both public and private subnets using an AWS NAT gateway ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 4 Figure 2: Example of a VPC with public and private subnets By attaching a virtual private gateway to your VPC you can create a VPN connection between your VPC and your own data center for IPv4 traffic as shown in Figure 3 The VPN connection uses industry standard IPsec tunnels (IKEv1 PSK with encryption using AES256 and HMAC SHA2 with various Diffie Hellman groups ) to mutually authenticate each gateway and to protect against eavesdropping or tampering while your data is in transit For redundancy each VPN connection has two tunnels with each tunnel using a unique virtual private gateway public IP v4 address ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 5 Figure 3: Example of a VPC isolated from the Internet and connected through VPN to a corporate data center You have two routin g options for setting up a VPN connection: dynamic routing using Border Gateway Protocol (BGP) or static routing For BGP you need the IP v4 address and the BGP autonomous system number (ASN) of the customer gateway before attaching it to a VPC Once you ha ve provided this information you can download a configuration template for a number of different VPN devices and configure both VPN tunnels For devices that do not support BGP you may set up one or more static routes back to your on premises network by providing the corresponding CIDR ranges when you configure your VPN connection You then configure static routes on your VPN customer gateway and on other internal network devices to route traffic to your VPC via the IPsec tunnel If you choose to have onl y a virtual private gateway with a connection to your on premises network you can route your Internet bound traffic over the VPN and control all egress traffic with your existing security policies and network controls You can also use AWS Direct Connect to establish a private logical connection from your on premises network directly to your Amazon VPC AWS Direct Connect provides a private high bandwidth network connection between your network and your VPC You can use multiple logical connection s to establish private connectivity to multiple VPCs while maintaining network isolation With AWS Direct Connect you can establish 1 Gbps or 10 Gbps dedicated network 
connections between AWS and any of the AWS Direct Connect locations A dedicated connection can be partitioned into multiple logical connections by using industry standard 8021Q VLANs In this way you can use the same connection to access public ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 6 resources such as objects stored in Amazon Simple Storage Service (Amazon S3) that use public ly accessible IPv4 and IPv6 address es and private resources such as Amazon EC2 instances that are running within a VPC using Amazon owned IPv6 space or private IPv4 space —all while maint aining network separation between the public and private environments You can choose a partner from the AWS Partner Network (APN) to integrate the AWS Direct Connect endpoint in an AWS Direc t Connect location with your remote networks Figure 4 shows a typical AWS Direct Connect setup Figure 4: Example of using VPC and AWS Direct Connect with a customer remote network Finally you may combine all of these diffe rent options in any combination that make the most sense for your business and security policies For example you could attach a VPC to your existing data center with a virtual private gateway and set up an addit ional public subnet to connect to other AWS services that do not run within the VPC such as Amazon S3 Amazon Simple Queue Service (Amazon SQS) or Amazon Simple Notification Service (Amazon SNS) In this situation you could also leverage IAM Roles for Amazon EC2 for accessing these services and configure IAM policies to only allow access from the Elastic IP address of the NAT server ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 7 Example Scenarios Becau se of the inherent flexibility of Amazon VPC you can design a virtual network topology that meets your business and IT security requirements for a variety of different use cases To understand the true potential of Amazon VPC let’s take a few of the most common use cases: • Host a PCI compliant e commerce website • Build a development and test environment • Plan for disaster recovery and business continuity • Extend your data center into the cloud • Create branch office and business unit networks Host a PCI Complia nt ECommerce Website Ecommerce websites often handle sensitive data such as credit card information user profiles and purchase history As such they require a Payment Card Industry Data Security Standard (PCI DSS) compliant infrastructure in order to protect sensitive customer data Because AWS is accredited as a Level 1 service provider under PCI DSS you can run your application on PCI compliant technology infrastructure for storing processing and transmitting credit card information in the cloud As a merchant you still have to manage your own PCI certification but by using an accredited infrastructure service provider you don’t need to put additional effort into PCI compliance at the infrastructure level For more information about PCI complia nce see the AWS Compliance Center For example you can create a VPC to host the customer database and manage the checkout process of your ecommerce website To offer high availability you set up private subnets in each Availability Zone within the same region and then deploy your customer and order management databases in each Availability Zone Your checkout servers will be in an Auto Sca ling group over several private subnets in different Availability Zones Those servers will be behind an elastic load balancer that spans public subnets across all 
used Availability Zones and the elastic load balancer can be protected by a n AWS w eb applic ation firewall (WAF) By combining VPC subnets network ACLs and security groups you have fine grained control over access to your AWS infrastructure You’ll be prepared for the main challenges —scalability security ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 8 elasticity and availability —for the most sensitive part of commerce websites Figure 5 shows an example of a n ECommerce architecture Figure 5: Example of a n ECommerce architecture Build a Development and Test Environment Software environments are in constant flu x with new versions features patches and updates Software changes must often be deployed rapidly with little time to carry out regression testing Your ideal test environment would be an exact replica of your production environment where you would ap ply your updates and then test them against a typical workload When the update or new version passes all tests you can roll it into production with greater confidence To build such a test environment in house you would have to provision a lot of hardwa re that would go unused most of the time Sometimes this unused hardware is subsequently repurposed leaving you without your test environment when you need it Amazon VPC can help you build an economical functional and isolated test environment that sim ulates your live production environment that can be launched when you need it and shut down when you’re finished testing You don’t have to buy expensive hardware; you are more flexible and agile when your environment changes; your test environment can tra nsparently interact within your on premises network by using LDAP messaging and monitoring; and you pay AWS only for what you actually ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 9 use This process can even be fully automated and integrated into your software development process Figure 6 shows an example of development test and production environment s within different VPCs Figure 6: Example of development test and production environment s The same logic applies to experimental applications When you are eval uating a new software package that you want to keep isolated from your production environment you can install it on a few Amazon EC2 instances inside your test environment within a VPC and then give access to a selected set of internal users If all goes well you can transition these images into production and terminate unneeded resources ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 10 Plan for Disaster Recovery and Business Continuity The consequences of a disaster affecting your data center can be devastating for your business if you are not prepared for such an event It is worth spending time devising a strategy to minimize the impact on your operations when these events happen Trad itional approaches to disaster recovery usually require labor intensive backups and expensive standby equipment Instead consider including Amazon VPC in your disaster recovery plan The elastic dynamic nature of AWS is ideal for disaster scenarios where there are sudden spikes in resource requirements Start by identifying the IT assets that are most critical to your business As in the test environment described previously in this paper you can automate the replication of your production environment to duplicate the functionality of your critical assets Using automated processes you can back up your 
production data to Amazon Elastic Block Store (Amazon EBS) volumes or Amazon S3 buckets Database contents can be continually replicated to your AWS infra structure using AWS Database Migration Service (AWS DMS) You can write declarative AWS CloudFormation templates to describe your VPC infrastructure stack which you can launch automatically in any AWS region or Availability Zone In the event of a disaste r you can quickly launch a replication of your environment in the VPC and then direct your business traffic to those servers If a disaster involves only the loss of data from your in house servers you can recover it from the Amazon EBS data volumes that you’ve been using as backup storage For more information read Using Amazon Web Services for Disaster Recovery which is available at the AWS Architecture Center Extend Your Data Center into the Cloud If you have invested in building your own data center you may be facing challenges to keep up with constantly changing capacity requirements Occasional spikes in demand may exceed your total capacity If your enterprise is successful even routine operations will eventually reach the capacity limits of your data center and you’ll have to decide how to extend that capacity Building a new data center is one way but it is expensive and slow and the risk of underprovisioning or overprovisioning is high In both of these cases Amazon VPC can help you by serving as an extension of your own data center ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 11 Amazon VPC allows you to specify your own IP address range so you can ext end your network into AWS in much the same way you would extend an existing network into a new physical data center or branch office VPN and AWS Direct Connect connectivity options allow these networks to be seamlessly and securely integrated to create a single corporate network capable of supporting your users and applications regardless of where they are physically located And just like a physical extension of a data center IT resources hosted in VPC will be able to leverage existing centralized IT systems like user authentication monitoring logging change management or deployment services without the need to change how users or systems administrators access or manage your applications External connectivity from this extended virtual data cente r is also completely up to you You may choose to direct all VPC traffic to traverse your existing network infrastructure to control which existing internal and external networks your Amazon EC2 instances can access This approach for example allows you to leverage all of your existing Internet based network controls for your entire network Figure 7 shows an example of a data center that has been extended into AWS Figure 7: Example of a data center extended into AWS that leverages a customer’s existing connection to the Internet Additionally you could also choose to leverage the extensive Internet connectivity of AWS to offload traffic from on premises firewalls and load balancers and selectively present IPv6 endpoints ev en if your on premises network only supports IPv4 You can deploy an AWS WAF to protect your infrastructure against attacks leverage an application load balancer in your VPC to direct traffic to a mix of AWS based and on premises resources using a VPN con nection to provide a seamless end user experience as shown in Figure 8 ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 12 Figure 8: Example of a data center 
extended into AWS that leverages multiple connections to the Internet Create Branch Office and Business Unit Networks If you have branch offices that require separate but interconnected local networks consider deploying separate VPCs for each branch office Applications can easily communicate with each other using VPC peering subject to VPC security group rules that you app ly The VPCs can even be in different AWS accounts and different regions which can help reduce latency enhance resource isolation and enable cost allocation controls If you need to limit network communication within or across subnets you can configure security groups or network ACL rules to define which instances are permitted to communicate with each other You could also use this same idea to group applications according to business unit functions Applications specific to particular business units c an be installed in separate VPCs one for each unit Figure 9 shows an example of using VPC s and VPN s for branch office scenarios ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 13 Figure 9: Example of using VPC and VPN for branch office scenarios The main advantages of using Amazon VPC over provisioning dedicated on premises hardware in a branch office are similar to those described elsewhere: you can elastically scale resources up down in and out to meet demand ensuring that you don’t underprovision or overprovision Adding capacity is easy: launch additional Amazon EC2 instances from your custom Amazon Machine Images (AMIs) When the time comes to decrease capacity simply terminate the unneeded instances manually or automatically using Auto Scaling policies Althou gh the operational tasks may be the same to keep assets running properly you won’t need dedicated remote staff and you’ll save money with the AWS pay asyouuse pricing model Best Practices for Using Amazon VPC When using Amazon VPC there are a few bes t practices you should follow : • Automate the deployment of your infrastructure • Use Multi AZ deployments in VPC for high availability • Use security groups and network ACLs • Control access with IAM users and policies • Use Amazon CloudWatch to monitor the health of your VPC instances and VPN link ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 14 Automate the Deployment of Your Infrastructure Managing your infrastructure manually is tedious error prone slow and expensive For example in the case of a disaster recovery your plan should include only a limited number of manual steps because they slow down the process Even in less critical use cases such as development and test environments we recommend that you ensure that your standby environment is an exact replica of the production environment Manually re plicating your production environment can be very challenging and it increases the risk of introducing or not discovering bugs related to dependencies in your deployment By automating the deployment with AWS CloudFormation you can describe your infrastructure in a declarative way by writing a template You can use the template to deploy predefined stacks within a very short time in any AWS region The template can fully a utomate creation of subnets routing information security groups provisioning of AWS resources —whatever you need By using AWS CloudFormation helper scripts you can use standard Amazon Machine Images (AMIs) that will upon startup of Amazon EC2 instance s install all of the software at the right version required for your deployment 
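As a hedged illustration of this declarative approach (not part of the original whitepaper), the following Python sketch uses boto3 to launch a minimal AWS CloudFormation stack that creates a VPC and one subnet. The stack name, CIDR ranges, and template contents are assumptions for the example.

```python
"""Hedged sketch: deploy a small, declaratively defined VPC stack with boto3."""
import boto3

TEMPLATE_BODY = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal VPC with one subnet (example only)
Resources:
  ExampleVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
  ExampleSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref ExampleVpc
      CidrBlock: 10.0.1.0/24
      MapPublicIpOnLaunch: true
Outputs:
  VpcId:
    Value: !Ref ExampleVpc
"""

cloudformation = boto3.client("cloudformation")

# Create the stack and wait until all declared resources are provisioned.
cloudformation.create_stack(
    StackName="example-vpc-stack",
    TemplateBody=TEMPLATE_BODY,
)
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="example-vpc-stack")

# Read back the stack outputs, such as the new VPC ID.
stacks = cloudformation.describe_stacks(StackName="example-vpc-stack")
print(stacks["Stacks"][0]["Outputs"])
```

A production template would also declare route tables, an Internet gateway or NAT gateway, and security groups so that the subnet behaves as intended, and the template itself would be version-controlled and tested like any other software artifact.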
Automated infrastructure deployment should be fully integrated into your processes You should treat your automation scripts like software that needs to be tested and maintai ned according to your standards and policies A continuous deployment methodology using services such as AWS CodePipeline to orchestrate the full process through build test and deploy phases can help make infrastructure deployment a regular and well tested business process Thoroughly tested automated processes are often faster cheaper more reliable and more secure than processes that rely on many manual steps Use Multi AZ Deployments in VPC for High Availability Architectures designed for high ava ilability typically distribute AWS resources redundantly across multiple Availability Zones within the same region If a service disruption occurs in one Availability Zone you can redirect traffic to the other Availability Zone to limit the impact of the disruption This general best practice also applies to architectures that include Amazon VPC ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 15 Although a VPC can span multiple Availability Zones each subnet within the VPC is restricted to a single Availability Zone In order to deploy a multi AZ Amazon Relational Database Service (Amazon RDS) instance for example you first have to configure VPC subnets in each Availability Zone within the region where the database instances will be launched Likewise Auto Scaling groups and elastic load balancers can span multiple Availability Zones by being deployed across VPC subnets that have been created for each zone Use Security Groups and Network ACLs Amazon VPC security groups allow you to control both ingress and egress traffic and you can define rules for a ll IP protocols and ports For a full overview of the features available with Amazon VPC security groups see Security Groups for Your VPC Amazon VPC security groups are stateful firewalls allowing return traffic for permitted TCP connections A network access control list ( ACL) is an additional layer of security that acts as a firewall to control traffic into and out of a subnet You can define access control rules for each of your subnets Although a VPC security group operates at the instance level a network ACL operates at the subnet level For a network ACL you can specify both allow and deny rules for both ingress and egress Network ACLs are stateless firewalls ; return traffic for TCP connections must be explicitly allowed on the TCP ephemeral ports (typically 32768 65535) As a best practice you should secure your infrastructure with multiple layers of defense By running your infrastructure in a VPC you can control which instances are exposed to the Internet in the first place and you can define both security groups and network ACLs to further protect your infrastructure at the infrastructure and subnet levels Additionally you should secure your i nstances with a firewall at the operating system level and follow other security best practices as outlined in AWS Security Resources Control Access with IAM Users and Policies With AWS Identity and Access Management (IAM) you can create and manage users in your AWS account A user can be either a person or an application that needs to interact with AWS With IAM you can centrally manage your users their security credentials such as access credentials and permissions that control which AWS ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 16 resources the users can access You 
typically create IAM users for users and use IAM roles for applications We recommend that you use IAM to implement a least privilege security strategy For exam ple you should not use a single AWS IAM user to manage all aspects of your AWS infrastructure Instead we recommend that you define user groups (or roles if using federated logins) for the different tasks that have to be performed on AWS and restrict each user to exactly the functionality he or she requires to perform that role For example you can create a network admin group of users in IAM and then give only that group the rights to create and modify the VPC For each user group define restrictive p olicies that grant each user access only to those services he or she needs Make sure that only authorized people in your organization have access to these users Use services such as Amazon GuardDuty to detect anomalous access patterns Implement strong a uthentication requirements such as minimum password length and complexity and consider multifactor authentication to reduce the risk of compromising your infrastructure For more information on how to define IAM users and policies see Controlling Access to Amazon VPC Resources Use Amazon CloudWatch to Monitor the Health of Your VPC Instances and VPN Link Just as you do with public Amazon EC2 instances you can use Amazo n CloudWatch to monitor the performance of the instances running inside your VPC Amazon CloudWatch provides visibility into resource utilization operational performance and overall demand patterns including CPU utilization disk reads and writes and n etwork traffic The information is displayed on the AWS Management Console and is also available through the Amazon CloudWatch API so you can integrate into your existing management tools You can also view the status of your VPN connections by using eithe r the AWS Management Console or making an API call The status of each VPN tunnel will include the state (up/down) of each VPN tunnel and the amount of traffic seen across the VPN tunnels ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 17 Conclusion Amazon VPC offers a wide range of tools that give you mo re control over your AWS infrastructure Within a VPC you can define your own network topology by defining subnets and routing tables and you can restrict access at the subnet level with network ACLs and at the resource level with VPC security groups Yo u can isolate your resources from the Internet and connect them to your own data center through a VPN You can assign elastic IP v4 and public IPv6 addresses to some instances and connect them to the public Internet through an Internet gateway while keeping the rest of your infrastructure in private subnets Amazon VPC makes it easier to protect your AWS resources while you keep the benefits of AWS with regard to flexibility scalability elasticity performance availability and the pay asyouuse pricing model Further Reading • Amazon VPC product page: https://awsamazoncom/vpc/ • Amazon VPC documentati on: https://awsamazoncom/documentation/vpc/ • AWS Direct Connect product page: https://awsamazoncom/directconnect/ • Getting started with AWS Direct Connect: https://awsamazoncom/directconnect/getting started/ • AWS Security Center: https://awsamazoncom/security/ • Ama zon VPC Connectivity Options: https://mediaamazonwebservicescom/AWS_Amazon_VPC_Connectivity_Opti onspdf • AWS VPN CloudHub: https://docsawsamazoncom/AmazonVPC/latest/UserGuide/VPN_CloudHub html • AWS Security Best Practices: 
https://aws.amazon.com/whitepapers/aws-security-best-practices/
• Architecting for the Cloud: Best Practices: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf

Document Revisions
December 2018: Added IPv6 features. Removed references to EC2-Classic. Added AWS DMS, AWS CodePipeline, and Amazon GuardDuty. Changed the multiple-subnet strategy to multiple VPCs, VPC peering, and CloudHub. Removed the recommendation to change credentials regularly (no longer NIST recommended); added password complexity and MFA.
December 2013: Major revision to reflect new functionality of Amazon VPC. Added new use cases for Amazon VPC. Added the section "Understanding Amazon Virtual Private Cloud". Added the section "Best Practices for Using Amazon VPC".
January 2010: First publication
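As a small companion to the monitoring best practice described earlier (checking VPN tunnel status through an API call), the hedged sketch below uses boto3 and the Amazon EC2 API to list the state of each VPN connection and its tunnels. The region is an assumption; no specific connection IDs are required.

```python
"""Hedged sketch: read VPN connection and tunnel status via the EC2 API."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_vpn_connections()
for connection in response["VpnConnections"]:
    print(f"VPN connection {connection['VpnConnectionId']} is {connection['State']}")
    # Each VPN connection has two tunnels; VgwTelemetry reports per-tunnel status.
    for tunnel in connection.get("VgwTelemetry", []):
        print(f"  tunnel {tunnel['OutsideIpAddress']}: {tunnel['Status']}")
```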
General
Building_Media__Entertainment_Predictive_Analytics_Solutions_on_AWS
Building Media & Entertainment Predictive Analytics Solutions on AWS First published December 2016 Updated March 30 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Overview of AWS Enabled M&E Workloads 1 Overview of the Predictive Analytics Process Flow 3 Common M&E Predictive Analytics Use Cases 6 Predictive Analytics Archi tecture on AWS 8 Data Sources and Data Ingestion 9 Data Store 13 Processing by Data Scientists 14 Prediction Processing and Serving 22 AWS Services and Benefits 23 Amazon S3 23 Amazon Kinesis 24 Amazon EMR 24 Amazon Machine Learning (Amazon ML) 25 AWS Data Pipeline 25 Amazon Elastic Compute Cloud (Amazon EC2) 25 Amazon CloudSearch 26 AWS Lambda 26 Amazon Relational Database Service (Amazon RDS) 26 Amazon DynamoD B 26 Conclusion 27 Contributors 27 Abstract This whitepaper is intended for data scientists data architects and data engineers who want to design and build Media and Entertainment ( M&E ) predictive analytics solutions on AWS Specifically this paper provide s an introduction to common cloud enabled M&E workloads and describes how a predictive analytics workload fits into the overall M&E workflows in the cloud The paper provide s an overview of the main phases for the predictive analytics business process as well as an overview of comm on M& E predictive analytics use case s Then the paper describes the technical reference architecture and tool options for implementing predictive analytics solutions on AWS Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 1 Introduction The world of Media and Entertainment (M&E) has shifted from treating custo mers as mass audiences to forming connection s with individuals This progression was enabled by unlocking insights from data generated through new distribution platforms and web and social networks M&E companies a re aggressivel y moving from a traditional mass broadcasting business model to an Over The Top (OTT) model where relevant data can be gathered In this new model they are embracing the challenge of acquiring enriching and retaining customers through big data and predictive analytic s solutions As cloud technology adoption becomes mainstream M&E companies are moving many analytics workload s to AWS to achieve ag ility scale lower cost rapid innovation and operational efficiency As these companies start their journey to the cloud they have questions about c ommon M&E use case s and how to design build and operate these solutions AWS provides many services i n the data and analytics space that are well suited for all M&E analytics workloads including traditional BI reporting real time analytics and predictive analytics In this paper we discuss the approach to architecture and tools We’ll cover design build and operate aspects of predictive 
analytics in subsequent papers Overview of AWS Enabled M&E Workloads M&E c ontent producers have traditionally relied heavily on systems located on premise s for production and post production workloads Content produ cers are increasingly looking into the AWS Cloud to run workloads This is d ue to the huge increase in the volume of content from new business models such as on demand and other online delivery as well as new content formats such as 4k and high dynamic r ange ( HDR ) M&E customers deliver live linear on demand and OTT content with the AWS Cloud AWS services also enable media partners to build solutions across M&E lines of business Examples include: • Managing digital assets • Publishing digital content • Automating media supply chains • Broadcast master control and play out Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 2 • Streamlining content distribution to licensees • Affiliates (business to business B2B) • Direct to consumer ( business to consumer B2C) channels • Solutions for content and customer analytics using real time data and machine learning Figure 1 is a diagram that shows a typical M&E workflow with a brief description of each area Figure 1 — Cloud enabled M&E workflow Acquisition — Workloads that capture and ingest media contents such as videos audio and images into AWS VFX & NLE — Visual Effects (VFX) and nonlinear editing system (NLE) workloads that allow editing of im ages for visual effects or nondestruc tive editing of video and audio source files DAM & Archive — Digital asset management (DAM) and archive solutions for the management of media assets Media Supply Chain — Workloads that manage the process to deliver digital asset s such as video or music from the point of origin to the destinat ion Publishing — Solutions for media contents publishing OTT — Systems that allow the delivery of aud io content and video content over the Internet Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 3 Playout & Distribution — Systems that support the transmission of media contents and channels into the broadcast network Analytics — Solutions that provide business intelligence and predictive analytics capabilities on M&E data Some typical domain questions to be answered by the analytics solutions are: How do I segment my customers for email campaign? What videos should I be promoting at the top of audiences OTT/VOD watchlists? Who is at risk of cancelling a subscription? What ads can I target mid roll to maximize audience engagement? What is the aggregate trending sentiment regarding titles brands prop erties and talents across social media and where is it headed? 
Overview of the Predictive Analytics Process Flow There are two main categories of analytics : business and predictive Business analytics focus on reporting metrics for historical and real time data Predictive analytics help predict future events and provide estimations by applying predictive modeling that is based on historical and real time data This paper will only cover predictive analytics A predictive analytics initiative involves man y phases and is a highly iterative process Figure 2 shows some of the main phases in a predictive analytics project with a brief description of each phase Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 4 Figure 2 — Cross industry standards process for data mining 1 Business Understanding — The main objective of this phase is to develop an understanding of the business goals and t hen t ranslate the goals into predictive analytics objectives For the M&E industry examples of business goals could include increasing con tent consumption by existing customers or understanding social sentiment toward contents and talents to assist with new content development The associated predictive analytics goals could also include personalized content recommendations and sentiment a nalysis of social data regarding contents and talents 2 Data Understanding — The goal of this phase is to consider the data required for predictive analytics Initial data collection exploration and quality assessment take place during this phase To dev elop high quality models the dataset needs to be relevant complete and large enough to support model training Model training is the process of training a machine learning model by providing a machine learning algorithm with training data to learn from Some relevant datasets for M&E use case s are customer information/profile data content viewing history data content rating data social engagement data customer behavioral data content subscription data and purchase data Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 5 3 Data Preparation — Data preparation is a critical step to ensure that highquality predictive models can be generated In this phase the data required for each modeling process is selected Data acquisition mechanisms need to be created Data is integrated formatted transformed and enriched for the modeling purpose Supervised machine learning algorithms require a labeled training dataset to generate predictive models A labeled training dataset has a target prediction variable and other dependent data attributes or features The quality of the training data is often considered more important than the machine learning algorithms for performance improvement 4 Modeling — In this phase the appropriate modeling techniques are selected for different modeling and business objec tives For example : o A clustering technique could be employed for customer segmentation o A binary classification technique could be used to analyze customer churn o A collaborative filtering technique could be applied to content recommendation The perform ance of the model can be evaluated and tweaked using technical measures such as Area Under Curve (AUC) for binary classification (Logistic Regression) Root Mean Square (RMSE) for collaborative filtering (Alternating Least Squares) and Sum ofSquared Error (SSE) for clustering (K Means) Based on the initial evaluation of the model result the model setting s can be revised and fine tuned by going back to the data preparation stage 5 
Model Evaluation — The generated models are formally evaluated in this phase not only in terms of technical measures but also in the context of the business success criteria set out during the business understanding phase If the model properly addresses the initial business objectives it can be approved and prepared for deployment 6 Deployment — In this phase the model is deployed into an environment to generate predictions for future events Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 6 Common M&E Predictive Analyt ics Use Cases To a certain extent some of the predictive analytics use case s for the M&E industry do not differ much from other industries The following are common use case s that apply to the M&E industry Customer segmentation — As the engagement betw een customers and M&E companies become s more direct across different channels and as more data is collected on those engagements appropriate segmentation of customers becomes increasingly important Customer relationship management (CRM) strategies incl uding customer acquisition customer development and customer retention greatly rely upon such segmentation Although customer segmentation can be achieved using basic business rules it can only efficiently handle a few attributes and dimensions A dat adriven segmentation with a predictive modeling approach is more objective and can handle more complex datasets and volumes Customer segmentation solution s can be implemented by leveraging clustering algorithms such as k means which is a type of unsup ervised learning algorithm A clustering algorithm is used to find natural clusters of customers based on a list of attributes from the raw customer data Content recommendation — One of the most widely adopted predictive analytics by M&E companies this type of analytics is an importan t technique to maintain customer engagement and increase content consumption Due to the huge volume of available content customers need to be guided to the content they might find most interesting Two comm on algorithms leveraged in recommendation solutions are content based filtering and collaborative filtering • Content based filtering is based on how similar a particular item is to other items based on usage and rating The model uses the content attribut es of items (categories tags descriptions and other data) to generate a matrix of each item to other items and calculates similarity based on the ratings provided Then the most similar items are listed together with a similarity score Items with the highest score are most similar Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 7 • Collaborative filtering is based on making predictions to find a specific item or user based on similarity with other items or users The filter applies weights based on peer user preferences The assumption is users who di splay similar profile or behavior have similar preferences for items More advanced recommendation solutions can leverage deep learning techniques for better performance One example of this would be using Recurrent Neural Networks (RNN) with collaborative filtering by predicting the sequence of items in previous streams such as past purchases Sentiment analysis — This is the process of categorizing words phrases and other contextual information into subjective feelings A common outcome fo r sentiment analysis is positive negative or neutral sentiment Impressions publicized by consumers can be a val uable source of insight into 
the opinions of broader audiences These insights when employed in real time can be used to significantly enhan ce audience engagement Insights can also be used with other analytic learnings such as customer segmentation to identify a positive match between an audience segment and associated content There are many tools to analyze and identify sentiment and many of them rely on linguistic analysis that is optimized for a specific context From a machine learning perspective one traditional approach is to consider sentiment analysis as a classification problem The sentiment of a document sentence or word is cl assified with positive negative or neutral labels In general the algorithm consists of tokenization of the text feature extraction and classification using different classifiers such as linear classifiers (eg Support Vector Machine Logistic Regre ssion) or probabilistic classifiers (eg Naïve Bayes Bayesian Network) However this traditional approach lacks recognition for the structur e and subtleties of written language A more advanced approach is to use deep learning algorithm s for sentiment analysis You don’t need to provide these models with predefined features as the model can learn sophisticated features from the dataset The words are represented in highly dimensional vectors and features are extracted by the neural netwo rk Examples of deep learning algorithms that can be used for sentiment analysis are Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) MXNet Tensorflow and Caffe are some deep learning frameworks that are well suited for RNN and CNN model training AWS makes it easy to get started with these frameworks by providing an Amazon Machine Image (AMI) that includes these frameworks preinstalled This AMI can be run on a large number of instance types including the P2 instances that provide general Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 8 purpose GPU processing for deep learning applications The Deep Learning AMI is available in the AWS Marketplace Churn prediction — This is the identification of customers who are at risk of no longer being customers Churn prediction helps to identify where to deploy retention resources most effectively The data used in churn prediction is generally user activity data related to a specif ic service or content offering This type of analysis is generally solved using a logistic regression with a binary classification The binary classification is designated as customer leave predicted or customer retention predicted Weightings and cutoff values can be used with predictive models to tweak the sensitivity of predictions to minimize false positives or false negatives to optimize for business objectives For example Amazon Machine Learning (Amazon ML) has an input for cutoff and sliders for precision recall false positive rate and accuracy Predictive Analytics Architecture on AWS AWS includes the components needed to enable pipelines for predictive analytics workflows There are many viable architectural patterns to effectively compute pr edictive analytics In this section we discuss some of the technology options for building predictive analytics architecture on AWS Figure 3 shows one such conceptual architecture Figure 3 — Conceptual reference architecture Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 9 Data Sources and Data Ingestion Data collection and ingestion is the first step and one of the most important technical architecture 
components to the overall predictive analytics architecture At a high level the main s ource data required for M&E analytics can be classified into the following categories • Dimension data — Provides structured labeling information to numeric measures Dimension data is mainly used for grouping filtering and labeling of information Exampl es of dimension data are customer master data demographics data transaction or subscription data content metadata and other reference data These are mostly structured data stored in relational databases such as CRM Master Data Management (MDM) or Digital Asset Management (DAM) databases • Social media d ata — Can be used for sentiment analysis Some of the main social data sources for M&E are Twitter YouTube and Facebook The data could encompass content ratings reviews social sharing tagging and bookmarking events • Event data — In OTT and online media examples of event data are audience engagement behaviors with st reaming videos such as web browsing patterns searchi ng events for content video play/watch/stop events and device data These are mostly real time click streaming data from website s mobile apps and OTT players • Other relevant data — Includes data from aggregators (Nielson comS core etc) advertising response data customer contacts and service case data There are two main modes of data ingestion into AWS : batch and streaming Batch Ingestion In this mode data is ingested as files (eg database extracts) following a specified schedule Data ingestion approaches include the following Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 10 • Third party applications — These applications have connector integration with Amazon Simple Storage Service (Amazon S3) object storage that can ingest data into Amazon S3 buckets The applications can either take source files or extract data from the source database directly and store them in Amazon S3 There are commercial products (eg Informatica Talend) and open source utilities ( eg Embulk ) that can extract data from databases and export the data into a n Amazon S3 bucket directly • Custom applications using AWS SDK/APIs — Custom applications can use AWS SDKs and the Amazon S3 application programming interface (API) to ingest data into target Amazon S3 buckets The SDKs and API also support multipart upload for faster data transfer to Amazon S3 buckets • AWS Data Pipeline — This service facilitates moving data between different sources including AWS services AWS Data Pipeline launch es a task runner that is a Linux based Amazon Elastic Compute Cloud (Amazon EC2) instance which can run scripts and commands to move data on a n event based or scheduled basis • Command line interface (CLI) — Amazon S3 also provides a CLI for interacting and ingesting data into Amazon S3 buckets • File synchronization utilities — Utilities such as rsynch and s3synch can keep source data directories in sync with Amazon S3 buckets as a way to move files from source locations to Amazon S3 buc kets Streaming Ingestion In this mode data is ingested in streams (eg clickstream data) Architecturally there must be a streaming store that accepts and stores streaming data at scale and in real time Additionally data collectors that collect dat a at the sources are needed to send data to the streaming store • Stream ing stores — There are various options for the streaming stores Amazon Kinesis Stream s and Amazon Kinesis Firehose are fully managed stream ing stores Streams and Firehose also provide SDKs and 
APIs for programmatic integration Alternatively open source platforms such as Kafka can be installed and configured on EC2 clusters to manage streaming data ingestion and storage Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 11 • Data collector s — These can be web mobile or OTT appl ications that send data directly to the streaming store or collector agents running next to the data sources (eg clickstream logs) that send data to the streaming store in real time There are several options for the data collectors Flume and Flentd are two open source data collectors that can collect log data and send data to streaming stores An Amazon Kinesis agent can be used as the data collector for Streams and Firehose One common practice is to ingest all the input data into staging Amazon S3 buckets or folders first perform further data processing and then store the data in target Amazon S3 buckets or folders Any data processing related to data quality (eg data completeness invalid data) should be handled at the sources when possible and is not discussed in this document During this stage the following data processing might be needed • Data transformatio n — This could be transformation of source data to the defined common standards For example breaking up a single name field into first name middle name and last name field s • Metadata extraction and persistence — Any metadata associated with input files s hould be extracted and stored in a persistent store This could include file name file or record size content description data source information and date or time information • Data enrichment — Raw data can be enha nced and refined with additional infor mation For example you can enrich source IP addresse s with geographic data • Table schema creation and maintenance — Once the data is processed into a target structure you can create the schemas for the target systems File Formats The various file formats have tradeoffs regarding compatibility storage efficiency read performance write performance and schema extensibility In the Hadoop ecosystem there are many variations of file based data stores The following are some of the more common ones i n use Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 12 • Comma Separated Values (CSV) — CSV typically the lowest common denominator of file formats excels at providing tremendous compatibility between platforms It’s a common format for going into and out of the Hadoop ecosystem This file type can be easily inspected and edited with a text editor which provides flexibility for ad hoc usage One drawback is poor support for compression so the files tend to take up more storage space than some other available formats You should also note that CSV sometimes has a header row with column names Avoid using this with machine learning tools because it inhibits the ability to arbitrarily split files • JavaScript Object Notation (JSON) — JSON is similar to CSV in that text editors can consume this format easily JSON records can be stored using a delimiter such as a newline character as a demarcation to split large data sets across multiple files However JSON files include some additional metadata whereas CSV files typically do not when used in Hadoop JSON files with one record should be avoided because this would generally result in too many small files • Apache Parquet — A columnar storage format that is integrated into much of the Hadoop ecosystem Parquet allows for compression schemes to 
be specified on a per column level This provides the flexibility to take advantage of compression in the right places without the penalty of wasted CPU cycles compressi ng and de compressing data that doesn’t need compressing Parquet is also flexible for encoding columns Selecting the right encoding mechanism is also important to maximize CPU utiliz ation when reading and writing data Because of the columnar format Parquet can b e very efficient when processing jobs that only require reading a subset of columns However this columnar format also comes with a write penalty if your processing includes writes • Apache Avro — Avro can be used as a file format or as an object format that is used within a file format such as Parquet Avro uses a binary data format requiring less space to represent the same data in a text format This results in lower processing demands in terms of I/O and memory Avro also has the advantage of being compressible further reducing the storage size and increasing disk read performance Avro includes schema data and data that is defined in JSON while still being persisted in a binary format The Avro data format is flexible and expressive allowin g for schema evolution and support for more complex data structures such as nested types Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 13 • Apache ORC — Another column based file format designed for high speed within Hadoop For flat data structures ORC has the advantage of being optimized for reads tha t use predicates in WHERE clauses in Hadoop ecosystem queries It also compresses quite efficiently with compression schemes such as Snappy Zlib or GZip • Sequence files — Hadoop often uses sequence files as temporary files during processing steps of a M apReduce job Sequence files are binary and can be compressed to improve performance and reduce required storage volume Sequence files are stored row based with sync ending markers enabling splitting However any edits will require the entire file to be rewritten Data Store For the data stor e portion of your solution you need storage for the data derived data lake schemas and a metadata data catalogue As part of that a critical decision to make is the type or types of data file formats you will pr ocess Many types of object models and storage formats are used for machine learning Common storage locations include databases and files From a storage perspective Amazon S3 is the preferred storage option for data science proces sing on AWS Amazon S3 provides highly durable storage and seamless integration with various data processing services and machine learning platforms on AWS Data Lake Schemas Data lake schema s are Apache HIVE tables that supp ort SQLlike data querying using Hadoop based query tools such as Apache HIVE Spark SQL and Presto Data lake schemas are based on the schema onread design which means table schemas can be created after the source data is already loaded into the data store A data lake schema uses a HIVE metastor e as the schema repository which can be accessed by different query engines In addition t he tables can be created and managed using the HIVE engine directly Metadata Data Catalogue A metadata data catalogue contain s information about the data in the data store It can be loosely categorized into three areas: technical operational and business Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 14 • Technical metadata refers to the forms and structure of the data In addition 
to data types technical metadata can also contain information about what data is valid and the data’s sensitivity • Operational metadata captures information such as the source of the data time of ingestion and what ingested the data Operat ional metadata can show data lineage movement and transformation • Business metadata provides labels and tags for data with business level attribute s to make it easier for someone to search and brows e data in the data store There are different options to process and store metadata on AWS One way is to trigger AWS Lambda functions by using Amazon S3 events to extract or derive metadata from the input files and store metadata in Amazon DynamoDB Processing by Data Scien tists When all relevant data is available in the data store data scientists can perform offline data exploration and model selection data preparation and model training and generation based on the defined business objectives The following solutions were selected because they are ideal for handling the large amount of data M&E use case s generate Interactive Data Exploration To develop the data understanding needed to support the modeling process data scientists often must explore the available datasets and determine their usefulness This is normally an interactive and iterative process and require s tools that can query data quickly across massive amount s of datasets It is also useful to be able to visualize the data with graphs charts and maps Table 1 provides a list of data exploration tools available on AWS followed by some specific examples that can be used to explore the data interactively Table 1: Data exploration tool options on AWS Query Style Query Engine User Interface Tools AWS Services SQL Presto AirPal JDBC/ODBC Clients Presto CLI EMR Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 15 Query Style Query Engine User Interface Tools AWS Services Spark SQL Zeppelin Spark Interactive Shell EMR Apache HIVE Apache HUE HIVE Interactive Shell EMR Programmatic R/SparkR (R) RStudio R Interactive Shell EMR Spark(PySpark Scala) Zeppelin Spark Interactive Shell EMR Presto on Amazon EMR The M&E datasets can be stored in Amazon S3 and are accessible as external HIVE tables An external Amazon RDS database can be deployed for the HIVE metastore data Presto running in an Amazon EMR cluster can be used to run interactive SQL queries against the data sets Presto supports ANSI SQL so you can run complex quer ies as well as aggregation against any dataset size from gigab ytes to petabytes Java Database Connectivity ( JDBC ) and Open Database Connectivity ( ODBC ) drivers support connections from data vis ualization tools such as Qlikview Tableau and Presto for rich data visualization Web tools such as AirPal provide an easy touse web front end to run Presto queries directly Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 16 Figure 4 — Data exploration with Presto Apache Zeppelin with Spark on EMR Another tool for data exploration is Apache Zeppelin notebook with Spark Spark is a general purpose cluster computing system It provides high level APIs fo r Java Python Scala and R Spark SQL an in memory SQL engine can integrate with HIVE external tables using HiveContext to query the da taset Zeppelin provides a fr iendly user interface to interact with Spark and visualize data using a range of charts and tables Spark SQL can also support JDBC/ODBC connectivity through a server running Thrift EMR Data Storage on S3 HIVE 
Metastore DB BI Tool JDBC/ODBC Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 17 Figure 5 — Data exploration with Zeppelin R/SparkR on EMR Some data scientists like to use R /RStudio as the tool for data exploration and analysis but feel constrained by the limitations of R such as single threaded execution and small data size support SparkR provides both the interactive environment rich statistical libraries and visualization of R Additionally SparkR provides the scalable fast distributed storage and processing capability of Spark SparkR uses DataF rame s as the data structure which is a distributed collection of data organized into named columns DataFrames can be constructed fro m wide array of data sources including HIVE tables EMR Data Storage on S3 HIVE Metastore DB Zeppelin Notebook Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 18 Figure 6 — Data exploration with Spark + R Training Data Preparation Data scientists will need to prepare training data to support supervised and unsupervised model training Data is formatted transformed and enriched for the modeling purpose As only the relevant data variable should be included in the model training feature selection is often performed to remove unneeded and irrelevant attributes that do not cont ribute to the accura cy of the predictive model Amazon ML provides feature transformation and feature selection capability that simplifies this process Labeled training dataset s can be stored in Amazon S3 for easy access by machine learning services and f ramework s Interactive Model Training To generate and select the right models for the target business use case s data scientists must perform interactive model training against the tr aining data Table 2 provides a list of use cases with potential product s that can you can use to create your solution followed by several example architectures for interactive model training EMR Data Storage on S3 HIVE Metastore DB Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 19 Table 2 — Machine learning options on AWS M&E Use Case ML Algorithms ML Software AWS Services Segmentation Clustering (eg k Means) Spark ML Mahout R EMR Recommendation Collaborative Filtering (eg Alternating Least Square) Spark ML Apache Mahout EMR Neural Network MXNet Amazon EC2/GPU Customer Churn Classification (eg Logistic Regression) Managed Service Amazon Machine Learning Spark ML Apache Mahout R EMR Sentiment Analysis Classification (eg Logistic Regression) Managed Service Amazon Machine Learning Classification (eg Support Vector Machines Naïve Bayes) Spark ML Mahout R EMR Neural Network MXNet Caffe Tensorflow Torch Theano Amazon EC2/GPU Amazon ML Architecture Amazon ML is a fully managed machine learning service that provides the quickest way to get started with model training Amazon ML can support long tail use case s such as churn and sentiment analysis where logistic regression (for classification) or linear regression (for the prediction of a numeric value) algorithms can be applied The followi ng are the main steps of model training using Amazon ML Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 20 1 Data source creation — Label training data is loaded directly from the Amazon S3 bucket where the data is stored A target column indicating the prediction field must be selected as part the data source creation 2 Feature processing — Certain variables can be 
transformed to improve the predictive power of the model.
3 ML model generation — After the data source is created, it can be used to train the machine learning model. Amazon ML automatically splits the labeled training set into a training set (70%) and an evaluation set (30%). Depending on the selected target column, Amazon ML automatically picks one of three algorithms (binary logistic regression, multinomial logistic regression, or linear regression) for the training.
4 Performance evaluation — Amazon ML provides model evaluation features for model performance assessment and allows for adjustment of the error tolerance threshold.
All trained models are stored and managed directly within the Amazon ML service and can be used for both batch and real-time prediction.
Spark ML/Spark MLlib on Amazon EMR Architecture
For the use cases that require other machine learning algorithms, such as clustering (for segmentation) and collaborative filtering (for recommendation), Amazon EMR provides cluster management support for running Spark ML. To use Spark ML and Spark MLlib for interactive data modeling, data scientists have two choices: they can use the Spark shell by SSH'ing onto the master node of the EMR cluster, or use the Zeppelin data science notebook running on the EMR cluster master node. Spark ML and Spark MLlib support a range of machine learning algorithms for classification, regression, collaborative filtering, clustering, decomposition, and optimization. Another key benefit of Spark is that the same engine can perform data extraction, model training, and interactive query. A data scientist will need to programmatically train the model using languages such as Java, Python, or Scala.
Spark ML provides a set of APIs for creating and tuning machine learning pipelines. The following are the main concepts to understand for pipelines:
• DataFrame — Spark ML uses a DataFrame from Spark SQL as an ML dataset. For example, a DataFrame can have different columns corresponding to different columns in the training dataset that is stored in Amazon S3.
• Transformer — An algorithm that can transform one DataFrame into another DataFrame. For instance, an ML model is a Transformer that transforms a DataFrame with features into a DataFrame with predictions.
• Estimator — An algorithm that can be fit on a DataFrame to produce a Transformer.
• Parameter — All Transformers and Estimators share a common API for specifying parameters.
• Pipeline — Chains multiple Transformers and Estimators to specify an ML workflow.
Spark ML provides two approaches for model selection: cross-validation and validation split. With cross-validation, the dataset is split into multiple folds that are used as separate training and test datasets; two-thirds of each fold are used for training and one-third for testing. This approach is a well-established method for choosing parameters and is more statistically sound than heuristic tuning by hand, but it can be very expensive because it cross-validates over a grid of parameters. With validation split, the dataset is split into a single training dataset and a single test dataset. This approach is less expensive, but when the training data is not sufficiently large it won't produce results that are as reliable as cross-validation.
Spark ML also supports exporting models in the Predictive Model Markup Language (PMML) format, and a trained model can be persisted into an Amazon S3 bucket using the model save function.
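To make these pipeline concepts concrete, the following is a minimal PySpark sketch of the kind of training job a data scientist might run on the EMR master node: it assembles features, fits a logistic regression classifier for a churn-style binary classification, selects parameters with a validation split as described above, and saves the best model to Amazon S3. It is not part of the original whitepaper; the S3 paths, feature and label column names, and parameter grid are illustrative assumptions.

```python
# Minimal sketch (assumptions: a labeled Parquet dataset in S3 with a "label"
# column and numeric feature columns "f1" and "f2"; bucket names are illustrative).
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit

spark = SparkSession.builder.appName("churn-training").getOrCreate()
training = spark.read.parquet("s3://example-bucket/training/churn/")

# Pipeline: assemble raw columns into a feature vector, then fit a classifier.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
pipeline = Pipeline(stages=[assembler, lr])

# Validation split (70% train / 30% validation), the cheaper of the two
# model-selection approaches discussed above.
grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1]).build()
tvs = TrainValidationSplit(estimator=pipeline,
                           estimatorParamMaps=grid,
                           evaluator=BinaryClassificationEvaluator(),
                           trainRatio=0.7)
model = tvs.fit(training)

# Persist the best pipeline model to S3 so other environments can load it later.
model.bestModel.write().overwrite().save("s3://example-bucket/models/churn/")
```

Swapping TrainValidationSplit for CrossValidator gives the more statistically sound cross-validation approach at a higher compute cost.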
The saved models can then be deployed into other environments and loaded to generate predictions.
Machine Learning on EC2/GPU/EMR Architectures
For use cases that require different machine learning frameworks that are not supported by Amazon ML or Amazon EMR, these frameworks can be installed and run on EC2 fleets. An AMI is available with preinstalled machine learning packages, including MXNet, CNTK, Caffe, TensorFlow, Theano, and Torch, and additional machine learning packages can be added easily to EC2 instances. Other machine learning frameworks can also be installed on Amazon EMR via bootstrap actions to take advantage of EMR cluster management; examples include Vowpal Wabbit, Skytree, and H2O.
Prediction Processing and Serving
One architecture pattern for serving predictions quickly using both historic and new data is the lambda architecture. The components of this architecture include a batch layer, a speed layer, and a serving layer, all working together to enable up-to-date predictions as new data flows into the system. Despite its name, this pattern is not related to the AWS Lambda service. The following is a brief description of each portion of the pattern shown in Figure 7.
• Event data — Event-level data is typically log data based on user activity. This could be data captured on websites, mobile devices, or social media activities. Amazon Mobile Analytics provides an easy way to capture user activity for mobile devices. The Amazon Kinesis Agent makes it easy to ingest log data such as web logs, and the Amazon Kinesis Producer Library (KPL) makes it easy to programmatically ingest data into a stream.
• Streaming — The streaming layer ingests data as it flows into the system. A popular choice for processing streams is Amazon Kinesis Streams because it is a managed service that minimizes administration and maintenance. Amazon Kinesis Firehose can be used as a stream that stores all the records to a data lake such as an Amazon S3 bucket.
Figure 7 — Lambda architecture components (event data, streaming, speed layer, batch layer, serving layer, and data lake)
are described in the following sections During the predictive analytics process work flow different resources are needed throughout different parts of the lifecycle AWS services work well in this scenario because resources can run on demand and y ou pay only for the services you consume Once you stop using them there are no additional costs or terminat ion fees Amazon S3 In the context of machine learning Amazon S3 is an excellent choice for storing training and evaluation data Reasons for this choice include its provision of highly parallelized low latency access that it can store vast amounts of structure d and unstructured data and is low cost Amazon S3 is also integrated into a useful ecosystem of tools and other services extending the functionality of Amazon S3 for ingestion and processing of new data For example Amazon Kinesis Firehose can be used to capture streaming data AWS Lambda event Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 24 based triggers enable serverless compute processing when data arrives in an Amazon S3 bucket Amazon ML uses Amazon S3 as input for training and evaluation dataset s as well as for batch predictions Amazon EMR with its ecosystem of machine learning tools also benefits from using Amazon S3 buckets for storage By using Amazon S3 EMR clusters can decouple storage and compute which has the advantage of scaling eac h independently It also facilitates using transient clusters or multiple clusters for reading the same data at the same time Amazon Kinesis Amazon Kinesis is a platform for streaming data on AWS offering powerful services to make it easy to load and analyze streaming data The Amazon suite of services also provid es the ability for you to build custom streaming data applications for specialized needs One such use case is applying machine learning to stream ing data There are three Amazon Kinesis services that fit different needs : • Amazon Kinesis Firehose accepts streaming data and persists the data to persistent storage including Amazon S3 Amazon Redshift and Amazon Elasticsearch Service • Amazon Kinesis Analytics lets you gain insights from streaming data in real time using standard SQL Analytics also include advanced functions such as the Random Cut Forest which calculates anomalies on streaming datasets • Amazon Kinesis Streams is a streaming service that can be used to create custom streaming applications or integrate into other applications such as Spark Streaming in Amazon EMR for real time Machine Learning Library (MLlib) workloads Amazon EMR Amazon EMR simplifies big data processing providing a managed Hadoop framework This approach makes it easy fast and cost effective for you to distribute and process vast amounts of data across dynamically scalable Amazon EC2 instances You can also run o ther popular distributed frameworks such as Apache Spark and Presto in Amazon EMR and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB The large ecosystem of Hadoop based machine learning tools can be used in Amazon EMR Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 25 Amazon Machine Learning (Amazon ML) Amazon ML is a service that makes it easy for developers of all skill levels to use machine learning technology Amazon ML provides visualization tools and wizards that guide you through the process of creating machine learning models without having to learn complex machine learning algorithms and technology Once your models are ready 
Amazon ML makes it easy to obtain predictions for your application using simple APIs without having to implement custom prediction generation code or manage any infrastructure Amazon ML is based on the same proven highly scalable machine learning technology used for years by Amazon’s internal data scientist community The service uses powerful algorithms to create machine learning models by finding patterns in your existing data Then Amazon ML uses these models to process new data and generate predictions for your application Amazon ML is highly scalable and can generate billion s of predictions daily and serve those predictions in real time and at high throughput With Amazon ML there is no upfront hardware or software investment and you pay as you go so you can start small and scale as your application grows AWS Data Pipeli ne AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services as well as on premise s data sources at specified intervals With Data Pipeline you can regularly access your data where it’s stored trans form and process it at scale and efficiently transfer the results to AWS services such as Amazon S3 Amazon RDS Amazon DynamoDB and Amazon EMR Data Pipeline helps you easily create complex data processing workloads that are fault tolerant repeatable and highly available You don’t have to worry about ensuring resource availability managing intertask dependencies retrying transient failures or timeouts in individual tasks or creating a failure notification system Data Pipeline also enables you to m ove and process data that was previously locked up in on premise s data silos unlocking new predictive analytics workloads Amazon Elastic Compute Cloud (Amazon EC2) Amazon EC2 is a simple yet powerful compute service that provid es complete control of server instances that can be used to run many machine learning packages The EC2 instance type options include a wide variety of options to Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 26 meet the various needs of machine learni ng packages These include compute optimized instances with relatively more CPU cores memory optimized instances for packages that use lots of RAM and massively powerful GPU optimized instances for packages that can take advantage of GPU processing power Amazon CloudSearch Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost effective to set up manage and scale a search solution for your website or application In the context of predictive analytics architecture CloudSearch can be used to serve prediction outputs for the various use cases AWS Lambda AWS Lambda lets you run code without provisioning or managing servers With Lambda you can run code for virtually any type of application or backend service all with zero administration In the predictive analytics architecture Lambda can be used for tasks such a s data processing triggered by events machine learning batch job scheduling or as the back end for microservices to serve prediction results Amazon Relational Database Service (Amazon RDS) Amazon RDS makes it e asy to set up operate and scale a relational database in the cloud It provides cost efficient and resizable capacity while managing time consuming database administration tasks freeing you up to focus on your applications and business In the predicti ve analytics architecture Amazon RDS can be used as the data store for HIVE metastore s and as the database 
for servicing prediction results Amazon DynamoDB Amazon DynamoDB is a fast and flexible NoSQL dat abase service ideal for any applications that need consistent single digit millisecond latency at any scale It is a fully managed cloud database and supports both document and key value store models In the predictive analytics architecture DynamoDB ca n be used to store data processing status or metadata or as a database to serve prediction results Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 27 Conclusion In this paper we provided an overview of the common Media and Entertainment (M&E) predictive analytics use case s We presented an architectur e that uses a broad set of services and capabilities of the AWS Cloud to enable both the data scientist workflow and the predictive analytics generation workflow in production Contributors The following individuals and organizations contributed to this do cument: • David Ping solutions architect Amazon Web Services • Chris Marshall solutions architect Amazon Web Services Document revisions Date Description March 30 2021 Reviewed for technical accuracy February 24 2017 Corrected broken links added links to libraries and incorporated minor text updates throughout December 2016 First publication
General
Introduction_to_AWS_Security_by_Design
1 of 14 Introduction to AWS Security by Design A Solution to Automate Security Compliance and Auditing in AWS November 2015 Amazon Web Services – Introduction Secure by Design November 2015 2 of 14 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Amazon Web Services – Introduction Secure by Design November 2015 3 of 14 Contents Abstract 4 Introduction 5 Security in the AWS Environment 5 Security by Design: Overview 6 Security by Design Approach 6 Impact of Security by Design 8 SbD Approach Details 9 SbD: How to Get Started 12 4 of 14 Abstract Security by Design (SbD) is a security assurance approach that enables customers to formalize AWS account design automate security controls and streamline auditing This whitepaper discusses the concepts of Security by Design provides a fourphase approach for security and compliance at scale across multiple industries points to the resources available to AWS customers to implement security into the AWS environment and describes how to validate controls are operating 5 of 14 Introduction Security by Design (SbD) is a security assurance approach that enables customers to formalize AWS account design automate security controls and streamline auditing It is a systematic approach to ensure security; instead of relying on auditing security retroactively SbD provides you with the ability to build security control in throughout the AWS IT management process SbD encompasses a fourphase approach for security and compliance at scale across multiple industries standards and security criteria AWS SbD is about designing security and compliance capabilities for all phases of security by designing everything within the AWS customer environment: the permissions the logging the use of approved machine images the trust relationships the changes made enforcing encryption and more SbD enables customers to automate the frontend structure of an AWS account to make security and compliance reliably coded into the account Security in the AWS Environment The AWS infrastructure has been designed to provide the highest availability while putting strong safeguards in place regarding customer privacy and segregation When deploying systems in the AWS Cloud AWS and its customers share the security responsibilities AWS manages the underlying infrastructure while your responsibility is to secure the IT resources deployed in AWS AWS allows you to formalize the application of security controls in the customer platform simplifying system use for administrators and allowing for a simpler and more secure audit of your AWS environment There are two aspects of AWS security: Security of the AWS environment The AWS account itself has configurations and features you can use to build in security 
Identities logging functions encryption functions and rules around how the systems are used and networked are all part of the AWS environment you manage Security of hosts and applications The operating systems databases stored on disks and the applications customers manage need security protections as well This is up to the AWS customer to manage Security process tools Amazon Web Services – Introduction Secure by Design November 2015 6 of 14 and techniques which customers use today within their onpremise environments also exist within AWS The Security by Design approach here applies primarily to the AWS environment The centralized access visibility and transparency of operating with the AWS cloud provides for increased capability for designing endtoend security for all services data and applications in AWS Security by Design: Overview SbD allows customers to automate the fundamental structure to reliably code security and compliance of the AWS environment making it easier to render noncompliance for IT controls a thing of the past By creating a secure and repeatable approach to the cloud infrastructure approach to security; customers can capture secure and control specific infrastructure control elements These elements enable deployment of security compliant processes for IT elements such as predefining and constraining the design of AWS Identify and Access Management (IAM) AWS Key Management Services (KMS) and AWS CloudTrail SbD follows the same general concept as Quality by Design or QbD Quality by Design is a concept first outlined by quality expert Joseph M Juran in Juran on Quality by Design Designing for quality and innovation is one of the three universal processes of the Juran Trilogy in which Juran describes what is required to achieve breakthroughs in new products services and processes The general shift in manufacturing companies moving to a QbD approach is to ensure quality is built into the manufacturing process moving away from using postproduction quality checks as the primary way in which quality is controlled As with QbD concepts Security by Design can also be planned executed and maintained through system design as a reliable way to ensure realtime scalable and reliable security throughout the lifespan of a technology deployment in AWS Relying on the audit function to fix present issues around security is not reliable or scalable Security by Design Approach SbD outlines the inheritances the automation of baseline controls the operationalization and audit of implemented security controls for AWS infrastructure operating systems services and applications running in AWS This Amazon Web Services – Introduction Secure by Design November 2015 7 of 14 standardized automated and repeatable architectures can be deployed for common use cases security standards and audit requirements across multiple industries and workloads We recommend building in security and compliance into your AWS account by following a basic fourphase approach: • Phase 1 – Understand your requirements Outline your policies and then document the controls you inherit from AWS document the controls you own and operate in your AWS environment and decide on what security rules you want to enforce in your AWS IT environment • Phase 2 – Build a “secure environment” that fits your requirements and implementation Define the configuration you require in the form of AWS configuration values such as encryption requirements (forcing server side encryption for S3 objects) permissions to resources (which roles apply to 
certain environments) which compute images are authorized (based on hardened images of servers you have authorized) and what kind of logging needs to be enabled (such as enforcing the use of CloudTrail on all resources for which it is available) Since AWS provides a mature set of configuration options (with new services being regularly released) we provide some templates for you to leverage for your own environment These security templates (in the form of AWS CloudFormation Templates) provide a more comprehensive rule set that can be systematically enforced We have developed templates that provide security rules that conform to multiple security frameworks and leading practices These prepackaged industry template solutions are provided to customers as a suite of templates or as stand alone templates based on specific security domains (eg access control security services network security etc) More help to create this “secure environment” is available from AWS experienced architects AWS Professional Services and partner IT transformation leaders These teams can work alongside your staff and audit teams to focus on high quality secure customer environments in support of thirdparty audits • Phase 3 – Enforce the use of the templates Enable Service Catalog and enforce the use of your template in the catalog This is the step which enforces the use of your “secure environment” in new Amazon Web Services – Introduction Secure by Design November 2015 8 of 14 environments that are being created and prevents anyone from creating an environment that doesn’t adhere to your “secure environment” standard rules or constraints This effectively operationalizes the remaining customer account security configurations of controls in preparation for audit readiness • Phase 4 – Perform validation activities Deploying AWS through Service Catalog and the “secure environment” templates creates an auditready environment The rules you defined in your template can be used as an audit guide AWS Config allows you to capture the current state of any environment which can then be compared with your “secure environment” standard rules This provides audit evidence gathering capabilities through secure “read access” permissions along with unique scripts which enable audit automation for evidence collection Customers will be able to convert traditional manual administrative controls to technically enforced controls with the assurance that if designed and scoped properly the controls are operating 100% at any point in time versus traditional audit sampling methods or pointintime reviews This technical audit can be augmented by preaudit guidance; support and training for customer auditors to ensure audit personnel understand the unique audit automation capabilities which the AWS cloud provides Impact of Security by Design SbD Architecture is meant to achieve the following: • Creating forcing functions that cannot be overridden by the users without modification rights • Establishing reliable operation of controls • Enabling continuous and realtime auditing • The technical scripting your governance policy The result is an automated environment enabling the customer’s security assurance governance security and compliance capabilities Customers can now get reliable implementation of what was previously written in policies standards and regulations Customers can create enforceable security and compliance which in turn creates a functional reliable governance model for AWS customer environments Amazon Web Services – Introduction 
Secure by Design November 2015 9 of 14 SbD Approach Details Phase 1 – Understand Your Requirements Start by performing a security control rationalization effort You can create a security Controls Implementation Matrix (CIM) that will identify inherency from existing AWS certifications accreditations and reports as well as identify the shared customer architecture optimized controls which should be implemented in any AWS environment regardless of security requirements The result of this phase will provide a customer specific map (eg AWS Control Framework) which will provide customers with a security recipe for building security and compliance at scale across AWS services CIM works to map features and resources to specific security controls requirements Security compliance and audit personnel can leverage these documents as a reference to make certifying and accrediting of systems in AWS more efficient The matrix outlines control implementation reference architecture and evidence examples which meet the security control “risk mitigation” for the AWS customer environment Figure 1: NIST SP 80053 rev 4 control security control matrix • Security Services Provided (Inherency) Customers can reference and inherit security control elements from AWS based on their industry and the AWS associated certification attestation and/or report (eg PCI FedRAMP ISO etc) The inheritance of controls can vary based on certifications and reports provided by AWS • Cross Service Security (Shared) Cross service security controls are those which both AWS and the customer implement within the host operating system and the guest operating systems These controls include technical operational and administrative (eg IAM Security Groups Configuration Management etc) controls which in some case can be partially inherited (eg Fault Amazon Web Services – Introduction Secure by Design November 2015 10 of 14 Tolerance) Example: AWS builds its data centers in multiple geographic regions as well as across multiple Availability Zones within each region offering maximum resiliency against system outages Customers should leverage this capability by architecting across separate Availability Zones in order to meet their own fault tolerance requirements • Service Specific Security (Customer) Customer controls may be based on the system and services they deploy in AWS These customer controls may also be able to leverage several cross service controls such as IAM Security Groups and defined configuration management processes • Optimized IAM Network and Operating Systems (OS) Controls These controls are security control implementations or security enhancements an organization should deploy based on leading security practices industry requirements and/or security standards These controls typically cross multiple standards and service and can be scripted as part of a defined “secure environment” through the use of AWS CloudFormation templates and Service Catalog Phase 2 – Build a “Secure Environment” This enables you to connect the dots on the wide range of security and audit services and features we offer and provide security compliance and auditing personnel a straightforward way to configure an environment for security and compliance based on “least privileges” across the AWS customer environment This helps align the services in a way that will make your environment secure and auditable real time verses within point in time or period in time • Access Management Create groups and roles like developers testers or administrators and provide 
them with their own unique credentials for accessing AWS cloud resources through the use of groups and roles.
• Network Segmentation Set up subnets in the cloud to separate environments that should remain isolated from one another, for example, to separate your development environment from your production environment, and then configure network ACLs to control how traffic is routed between them. Customers can also set up separate management environments to ensure security integrity, using a bastion host to limit direct access to production resources.
• Resource Constraints & Monitoring Establish hardened guest OS images and services for Amazon Elastic Compute Cloud (Amazon EC2) instances with the latest security patches; perform backups of your data; and install antivirus and intrusion detection tools. Deploy monitoring, logging, and notification alarms.
• Data Encryption Encrypt your data or objects when they're stored in the cloud, either automatically on the cloud side or on the client side before you upload them.
Phase 3 – Enforce the Use of Templates
After creating a "secure environment," you need to enforce its use in AWS. You do this by enforcing AWS Service Catalog. Once you enforce Service Catalog, everyone with access to the account must create their environment using the CloudFormation templates you created, and every time anyone uses the environment, all of the "secure environment" standard rules and constraints are applied. This effectively operationalizes the remaining customer account security configurations of controls and prepares you for audit readiness.
Phase 4 – Perform Validation Activities
The goal of this phase is to ensure AWS customers can support an independent audit based on public, generally accepted auditing standards. Auditing standards provide a measure of audit quality and the objectives to be achieved when auditing a system built within an AWS customer environment.
AWS provides tooling to detect whether there are actual instances of noncompliance. AWS Config gives you the point-in-time current settings of your architecture. You can also leverage AWS Config Rules, a service that allows you to use your secure environment as the authoritative criteria for a sweeping check of controls across the environment. You'll be able to detect who isn't encrypting, who is opening ports to the internet, and who has databases outside a production VPC. Any measurable characteristic of any AWS resource in the AWS environment can be checked.
The ability to do a sweeping audit is especially valuable if you are working on an AWS account for which you did not first establish and enforce a secure environment. This allows you to check the entire account, no matter how it was created, and audit it against your secure environment standard. With AWS Config Rules you can also continually monitor the account, and the console will show you at any time which IT resources are and aren't in compliance. In addition, you will know if a user was out of compliance even if only for a brief period of time. This makes point-in-time and period-in-time audits extremely effective; a minimal scripted example of such a check appears at the end of this phase.
Since auditing procedures differ across industry verticals, AWS customers should review the audit guidance provided for their industry vertical. If possible, engage audit organizations that are "cloud-aware" and understand the unique audit automation capabilities that AWS provides.
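Before handing evidence to an auditor, much of this Phase 4 checking can be scripted with the AWS SDKs. The following is a minimal boto3 sketch, not part of the original whitepaper, that lists the resources an AWS Config rule currently reports as noncompliant; the rule name is an illustrative assumption and would be whatever rule encodes your "secure environment" standard.

```python
# Minimal sketch (assumption: an AWS Config rule named
# "s3-bucket-server-side-encryption-enabled" is already deployed in the account).
import boto3

config = boto3.client("config")

response = config.get_compliance_details_by_config_rule(
    ConfigRuleName="s3-bucket-server-side-encryption-enabled",
    ComplianceTypes=["NON_COMPLIANT"],
)

# Each evaluation result identifies one resource that currently violates the rule.
for result in response["EvaluationResults"]:
    resource = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
    print(resource["ResourceType"], resource["ResourceId"])
```

Run on a schedule or wired to notifications, a script like this turns the sweeping checks described above into repeatable audit evidence.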
Work with your auditor to determine if they have experience with auditing AWS resources; if they do not AWS provides several training options to address how to audit AWS services through an instructorled eighthour class including handson labs For more information please contact: awsaudittraining@amazoncom Additionally AWS provides several audit evidence gathering capabilities through secure read access along with unique API (Application Programming Interface) scripts which enable audit automation for evidence collection This provides auditors the ability to perform 100% audit testing (versus testing with a sampling methodology) SbD: How to Get Started Here are some starter resources for you to get you and your teams ramped up: • Take the selfpaced training on “Auditing your AWS Architecture” This will allow for hands on exposure to the features and interfaces of AWS in particular the configuration options that are available to auditors and security control owners • Request more information about how SbD can help email: awssecuritybydesign@amazoncom • Be familiar with additional relevant resources available to you: o Amazon Web Services: Overview of Security Processes o Introduction to Auditing the Use of AWS Whitepaper o Federal Financial Institutions Examination Council (FFIEC) Audit Guide Amazon Web Services – Introduction Secure by Design November 2015 13 of 14 o SEC Cybersecurity Initiative Audit Guide Further Reading • AWS Compliance Center: http://awsamazoncom/compliance • AWS Security by Design: http://awsamazoncom/compliance/securitybydesign • AWS Security Center: http://awsamazoncom/security • FedRAMP FAQ: http://awsamazoncom/compliance/fedramp • Risk and Compliance Whitepaper: https://d0awsstaticcom/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaperpdf • Security Best Practices Whitepaper: https://d0awsstaticcom/whitepapers/awssecuritybestpracticespdf • AWS Products Overview: http://awsamazoncom/products/ • AWS Sales and Business Development: https://awsamazoncom/compliance/contact/ • Government and Education on AWS https://awsamazoncom/governmenteducation/ • AWS Professional Services https://awsamazoncom/professionalservices
General
AWS_WellArchitected_Framework__Security_Pillar
ArchivedSecurity Pillar AWS Well Architected Framework July 2020 This paper has been archived The latest version is now available at: https://docsawsamazoncom/wellarchitected/latest/securitypillar/welcomehtmlArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Security 2 Design Principles 2 Definition 3 Operating Your Workload Securely 3 AWS Account Management and Separation 5 Identity and Access Management 7 Identity Management 7 Permissions Management 11 Detection 15 Configure 15 Investigate 18 Infrastructure Protect ion 19 Protecting Networks 20 Protecting Compute 23 Data Protection 27 Data Classification 27 Protecting Dat a at Rest 29 Protecting Data in Transit 32 Incident Response 34 Design Goals of Cloud Response 34 Educate 35 Prepare 36 Simulate 38 Iterate 39 Conclusion 40 ArchivedContributors 40 Further Reading 41 Document Revisions 41 ArchivedAbstract The focus of this paper is the security pillar of the WellArchitected Framework It provides guidance to help you apply best practices current recommendations in the design delivery and maintenance of secure AWS workloads ArchivedAmazon Web Services Security Pillar 1 Introduction The AWS Well Architected Framework helps you understand trade offs for decisions you make while building workloads on AWS By using the Framework you will learn current architectural best practices for designing and operating reliable secure efficient and cost effective workloads in the cloud It provides a way fo r you to consistently measure your workload against best practices and identify areas for improvement We believe that having well architected workload s greatly increases the likelihood of business success The framework is based on five pillars: • Operation al Excellence • Security • Reliability • Performance Efficiency • Cost Optimization This paper focuses on the security pillar This will help you meet your business and regulatory requirements by following current AWS recommendations It’s intended for those in technology roles such as chief technology officers (CTOs) chief information security officers (CSOs/CISOs) architects developers and operations team members After reading this paper you will understand AWS current recommendations and strategies to use when designing cloud architectures with security in mind This paper doesn ’t provide implementation details or architectural patterns but does include references to appropriate resources for this information By adopting the practices in this paper you can build architectures that protect your data and systems control access and respond automatically to security events ArchivedAmazon Web Services Security P illar 2 Security The security pillar describes how to take advantage of cloud technologies to protect data systems and assets in a way that can 
improve your security posture This paper provides in depth best practice guidance for architecting secure workloads on AWS Design Principles In the cloud there are a number of principles that can help you strengthen your workload security: • Implement a strong identity foundation: Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources Centralize identity management and aim to eliminate reliance on long term static credentials • Enable traceability: Monitor alert and audit act ions and changes to your environment in real time Integrate log and metric collection with systems to automatically investigate and take action • Apply security at all layers: Apply a defense in depth approach with multiple security controls Apply to all layers (for example edge of network VPC load balancing every instance and compute service operating system application and code) • Automate security best practices: Automated software based security mechanisms improve your ability to securely scale more rapidly and cost effectively Create secure architectures including the implementation of controls that are defined and managed as code in version controlled templates • Protect data in transit and at rest : Classify your data into sensitivity le vels and use mechanisms such as encryption tokenization and access control where appropriate • Keep people away from data: Use mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data This reduces the risk of mishandling or modification and human error when handling sensitive data • Prepare for security events: Prepare for an incident by having incident management and investigation policy and processes that align to your organizational requirements Run incident response simulations and use tools with automation to increase your speed for detection investigatio n and recovery ArchivedAmazon Web Services Security Pillar 3 Definition Security in the cloud is composed of five areas: 1 Identity and access management 2 Detection 3 Infrastructure protection 4 Data protection 5 Incident response Security and Compliance is a shared responsibility between AWS and you the customer This shared model can help re duce your operational burden You should carefully examine the services you choose as your responsibilities vary depending on the services used the integration of those services into your IT environment and applicable laws and regulations The nature of this shared responsibility also provides the flexibility and control that permits the deployment Operating Your Workload Securely To operate your workload securely you must apply overarching best practices to every area of security Take requirements and processes that you have defined in operational excellence at an organizational and workload level and apply them to all areas Staying up to date with AWS and industry recommendations and threat intelligence helps you evolve your threat model and control objectives Automating security processes testing and validation allow you to scale your security operations Identify and prioritize risks using a threat model: Use a threat model to identify and maintain an up todate register of potential threats Prioritize your threats and adapt your security controls to prevent detect and respond Revisit and maintain this in the context of the evolving security landscape Identify and validate control objectives: Based on yo ur compliance requirements and risks 
identified from your threat model derive and validate the control objectives and controls that you need to apply to your workload Ongoing validation of control objectives and controls help you measure the effectivenes s of risk mitigation Keep up to date with security threats: Recognize attack vectors by staying up to date with the latest security threats to help you define and implement appropriate controls ArchivedAmazon Web Services Security Pillar 4 Keep up to date with security recommendations : Stay up to date with both AWS and industry security recommendations to evolve the security posture of your workload Evaluate and implement new security services and features regularly: Evaluate and implement security services and features from AWS and APN Partners that allow you to evolve the security posture of your workload Automate testing and validation of security controls in pipelines: Establish secure baselines and templates for security mechanisms that are tested and validated as part of your build pipelines and processes Use tools and automation to test and validate all security controls continuously For example scan items such as machine images and infrastructure as code templates for security vulnerabilities irregularities and drift from an established baseline at each stage Reducing the number of security misconfigurations introduced into a production environment is critical —the more quality control and reduction of defects you can perform in the build process the better Design continuous integration and continuous deployment (CI/CD) pipelines to test for security issues whenever possible CI/CD pipelines offer the opportunity to enhance security at each stage of build and delivery CI/CD security tooling must also be kept updated to mitigate evolving threats Resources Refer to the following resources to learn more about operating your workload securely Videos • Security Best Practices the Well Architected Way • Enable AWS adoption at scale with automation and governance • AWS Security Hub: Manage Security Alerts & Automate Compliance • Automate your security on AWS Documentation • Overview of Security Processes • Security Bulletins • Security Blog • What's New with AWS • AWS Security Audit Guidelines ArchivedAmazon Web Services Security Pillar 5 • Set Up a CI/CD Pipeline on AWS AWS Account Management and Separation We recommend that you organize workloads in separate accounts and group accounts based on function compliance requirements or a common set of controls rather than mirroring your organization’s reporting structure In AWS accounts are a hard boundary zero trust container for your resources For example account level separation is strongly recommended for isolating production workloads from development and test workloads Separate workloads using accounts: Start with security and infrastructure in mind to enable your organization to set common guardrails as your workloads grow This approach provides b oundaries and controls between workloads Account level separation is strongly recommended for isolating production environments from development and test environments or providing a strong logical boundary between workloads that process data of different sensitivity levels as defined by external compliance requirements (such as PCI DSS or HIPAA) and workloads that don’t Secure AWS accounts: There are a number of aspects to securing your AWS accounts including the securing of and not using the root user and keeping the contact information up to date You can use AWS Organizations 
to centrally ma nage and govern your accounts as you grow and scale your workloads AWS Organizations helps you manage accounts set controls and configure services across your accounts Manage accounts centrally : AWS Organizations automates AWS account creation and management and control of those accounts after they are created When you create an account through AWS Organizations it is important to consider the email address you use as this will be the root user that allows the password to be reset Organizations allows you to group accounts into organizational units (OUs) which can represent different environments based on the workload’s requirements and purpose Set controls centrally : Control what your AWS accounts can do by only allowing specific services Regions and service actions at the appropriate level AWS Organi zations allows you to use service control policies (SCPs) to apply permission guardrails at the organization organizational unit or account level which apply to all AWS Identity and Access Management (IAM) users and roles For example you can apply an SCP that restricts users from launching resources in Regions that you have not explicitly allow ed AWS Control Tower offers a simplified way to set up and govern multiple accounts It automates the setu p of accounts in your AWS Organization ArchivedAmazon Web Services Security Pillar 6 automates provisioning applies guardrails (which include prevention and detection ) and provides you with a dashboard for vis ibility Configure services and resources centrally : AWS Organizations helps you configure AWS services that apply to all of your accounts For example you can configure central logging of all actions performed across your organization using AWS CloudTrail and prevent member account s from disabling logging You can also centrally aggregate data for rules that you’ve defined using AWS Config enabling you to audit your workloads for compliance and react quickly to changes AWS CloudFormation StackSets allow you to centrally manage AWS CloudFormation stacks across accounts and OUs in your organization This allows you to automatically provision a new account to meet your security requirements Resources Refer to the following resources to learn mo re about AWS recommendations for deploying and managing multiple AWS accounts Videos • Managing and governing multi account AWS environments using AWS Organizations • AXA: Scaling adoption with a Global Landing Zone • Using AWS Control Tower to Govern Multi Account AWS Environments Documentation • Establishing your best practice AWS environment • AWS Organizations • AWS Control Tower • Working with AWS CloudFormation StackSet s • How to use service control policies to set permission guardrails across accounts in your AWS Organization Hands on • Lab: AWS Account and Root User ArchivedAmazon Web Services Security Pillar 7 Identity and Access Management To use AWS services you must grant your users and applications access to resources in your AWS accounts As you run more workloads on AWS you need robust identity management and permissions in place to ensure that the right people have access to the righ t resources under the right conditions AWS offers a large selection of capabilities to help you manage your human and machine identities and their permissions The best practices for these capabilities fall into two main areas : • Identity management • Permiss ions management Identity Management There are two types of identities you need to manage when approaching operating secure AWS 
workloads • Human Identities : The administrators developers operators and consumers of your applications require an identity to access your AWS environments and applications These can be members of your organization or external users with whom you collaborate and who interact with your AWS resources via a web browser client application mobile app or interactive command line tools • Machine Identities : Your workload applications operational tools and components require an identity to make requests to AWS services for example to read data These identities include machines running in your AWS environment such as Amazon EC2 instances or AWS Lambda functions You can also manage machine identities for external parties who need access Additionally you might also have machines outside of AWS that need access to your AWS environment Rely on a centralized identity provider: For workforce identities rely on an identity provider that enables you to manage identities in a centralized place This makes it easier to manage access across multiple applications and services because you are creat ing manag ing and revok ing access from a single location For example if someone leaves your organization you can revoke access for all applications and services (including AWS ) from one location This reduces the need for multiple credentials and provides an opportunity to integrate with existing human resources (HR) processes ArchivedAmazon Web Services Security Pillar 8 For federation with individual AWS accounts you can use centralized identities for AWS with a SAML 20 based provider with AWS IAM You can use any provider —whether hosted by you in AWS external to AWS or supplied by the AWS Partner Network (APN) —that is compatible with the SAML 20 protocol You can use federation between your AWS account and your chosen provider to grant a user or application access to call AWS API operations by using a SAML assertion to get temporary security credentials Web based single sign on is also supported allowing users to sign in to the AWS Management Console from your sign i n portal For federation to multiple accounts in your AWS Organization you can configure your identity source in AWS Single Sign On (AWS SSO) and specify where your users and groups are stored Once configured your identity provider is your source of truth and information can be synchronized using the System for Cross domain Identity Management (SC IM) v20 protocol You can then look up users or groups and grant them single sign on access to AWS accounts cloud applications or both AWS SSO integrates with AWS Organizations which enables you to configure your identity provider once and then grant access to existing and new accounts managed in your organization AWS SSO provides you with a default store which you can use to manage your users and groups If yo u choose to use the AWS SSO store create your users and groups and assign their level of access to your AWS accounts and applications keeping in mind the best practice of least privilege Alternatively you can choose to Connect to Your External Identity Provider using SAML 20 or Connec t to Your Microsoft AD Directory using AWS Directory Service Once configured you can sign into the AWS Management Console command line interface or the AWS mobile app by authenticating through your central identity provider For managing end users or consumers of your workloads such as a mobile app you can use Amazon Cognito It provides authentication authorization and user management for your web and mobile apps Your 
users can sign in directly with a user name and password or through a third party such as Amazon Apple Facebook or Google Leverage user groups and attributes: As the number of users you manage grows you will need to determine ways to organize them so that you can manage them at scale Place users with common security requirements in groups defined by your identity provider and put mechanisms in place to ensure that user attributes that may be used for access control ( for example department or location) are correct and updated Use these groups and attributes to control access rather than individual users This allows you to manage access centrally by changing a user’s group membership or ArchivedAmazon Web Services Security Pillar 9 attributes once with a permission set rather than updating many individual policies when a user’s access needs change You can use AWS SSO to manage user groups and attributes AWS SSO supports most commonly used attributes whether they are entered manually during user creation or automatically provi sioned using a synchronization engine such as defined in the System for Cross Domain Identity Management (SCIM) specification Use strong sign in mechanisms: Enforce minimum password length and educate your users to avoid common or reused passwords Enfo rce multi factor authentication (MFA) with software or hardware mechanisms to provide an additional layer of verification For example when using AWS SSO as the identity source configure the “context aware” or “always on” setting for MFA and allow users to enroll their own MFA devices to accelerate adoption When using an external identity provider (IdP) configure your IdP for MFA Use temporary credentials: Require ide ntities to dynamically acquire temporary credentials For workforce identities use AWS SSO or federation with IAM to access AWS accounts For machine ident ities such as EC2 instances or Lambda functions require the use of IAM roles instead of IAM users with long term access keys For human identities using the AWS Management Console require users to acquire temporary credentials and federate into AWS Yo u can do this using the AWS SSO user portal or configuring federation with IAM For users requ iring CLI access ensure that they use AWS CLI v2 which supports di rect integration with AWS Single Sign On (AWS SSO) Users can create CLI profiles that are linked to AWS SSO accounts and roles The CLI automatically retrieves AWS credentials from AWS SSO and refreshes them on your behalf This eliminates the need to cop y and paste temporary AWS credentials from the AWS SSO console For SDK users should rely on AWS STS to assume roles to receive temporary credentials In certain cases temporary credentials might not be practical You should be aware of the risks of stor ing access keys rotate these often and require MFA as a condition when possible For cases where you need to grant consumers access to your AWS resource s use Amazon Cognito identity pools and assign them a set of temporary limited privilege credentials to access your AWS resources The permissions for each us er are controlled through IAM roles that you create You can define rules to choose the role for each user based on claims in the user's ID token You can define a default role for authenticated users You can also define a separate IAM role with limited permissions for guest users who are not authenticated ArchivedAmazon Web Services Security Pillar 10 For machine identities you should rely on IAM roles to grant access to AWS For EC2 instances you can use 
roles for Amazon EC2. You can attach an IAM role to your EC2 instance to enable applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. For accessing EC2 instances using keys or passwords, AWS Systems Manager is a more secure way to access and manage your instances, using a pre-installed agent and no stored secret. Additionally, other AWS services, such as AWS Lambda, enable you to configure an IAM service role to grant the service permissions to perform AWS actions using temporary credentials.

Audit and rotate credentials periodically: Periodic validation, preferably through an automated tool, is necessary to verify that the correct controls are enforced. For human identities, you should require users to change their passwords periodically and retire access keys in favor of temporary credentials. We also recommend that you continuously monitor MFA settings in your identity provider; you can set up AWS Config Rules to monitor these settings. For machine identities, you should rely on temporary credentials using IAM roles. For situations where this is not possible, frequent auditing and rotation of access keys is necessary.

Store and use secrets securely: For credentials that are not IAM-related, such as database logins, use a service that is designed to handle the management of secrets, such as AWS Secrets Manager. AWS Secrets Manager makes it easy to manage, rotate, and securely store encrypted secrets for supported services. Calls to access the secrets are logged in CloudTrail for auditing purposes, and IAM permissions can grant least privilege access to them. A short retrieval sketch follows.
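For example, an application can fetch a database credential from Secrets Manager at startup rather than reading it from a configuration file. The following is a minimal sketch, assuming a hypothetical secret named prod/app/database that stores a JSON credential; adapt the name and parsing to your own secret.

```python
import json

import boto3

# Hypothetical secret name; replace with the name or ARN of your own secret.
SECRET_ID = "prod/app/database"


def get_db_credentials(secret_id: str = SECRET_ID) -> dict:
    """Fetch a database credential from AWS Secrets Manager at runtime.

    The calling identity (for example, an EC2 instance profile or a Lambda
    execution role) only needs secretsmanager:GetSecretValue on this secret,
    so no long-term password is stored with the application.
    """
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    # Secrets stored as JSON strings parse directly into a dictionary.
    return json.loads(response["SecretString"])


if __name__ == "__main__":
    credentials = get_db_credentials()
    print("Connecting as user:", credentials.get("username"))
```

Because access is granted through the caller's IAM role rather than an embedded password, rotating the secret does not require redeploying the application.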
Resources
Refer to the following resources to learn more about AWS best practices for protecting your AWS credentials.
Videos
• Mastering identity at every layer of the cake
• Managing user permissions at scale with AWS SSO
• Best Practices for Managing, Retrieving, & Rotating Secrets at Scale
Documentation
• The AWS Account Root User
• AWS Account Root User Credentials vs IAM User Credentials
• IAM Best Practices
• Setting an Account Password Policy for IAM Users
• Getting Started with AWS Secrets Manager
• Using Instance Profiles
• Temporary Security Credentials
• Identity Providers and Federation

Permissions Management
Manage permissions to control access for the people and machine identities that require access to AWS and your workloads. Permissions control who can access what, and under what conditions. Set permissions for specific human and machine identities to grant access to specific service actions on specific resources. Additionally, specify conditions that must be true for access to be granted. For example, you can allow developers to create new Lambda functions, but only in a specific Region. When managing your AWS environments at scale, adhere to the following best practices to ensure that identities only have the access they need and nothing more.

Define permission guardrails for your organization: As you grow and manage additional workloads in AWS, you should separate these workloads using accounts and manage those accounts using AWS Organizations. We recommend that you establish common permission guardrails that restrict access for all identities in your organization. For example, you can restrict access to specific AWS Regions, or prevent your team from deleting common resources, such as an IAM role used by your central security team. You can get started by implementing example service control policies, such as preventing users from disabling key services. You can use AWS Organizations to group accounts and set common controls on each group of accounts. To set these common controls, use services integrated with AWS Organizations; specifically, you can use service control policies (SCPs) to restrict access for a group of accounts. SCPs use the IAM policy language and enable you to establish controls that all IAM principals (users and roles) adhere to. You can restrict access to specific service actions and resources, and add specific conditions, to meet the access control needs of your organization. If necessary, you can define exceptions to your guardrails; for example, you can restrict service actions for all IAM entities in the account except for a specific administrator role. A minimal Region guardrail sketch follows.
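As an illustration, the sketch below uses boto3 to create a Region guardrail as an SCP and attach it to an organizational unit. The OU ID, policy name, and allowed Regions are placeholder assumptions, and a production guardrail would typically also exempt global services (for example, IAM and CloudFront) through NotAction; treat this as a starting point rather than a complete policy.

```python
import json

import boto3

# Placeholder values: replace with your own organizational unit ID and Regions.
TARGET_OU_ID = "ou-example-11111111"
ALLOWED_REGIONS = ["eu-central-1", "eu-west-1"]

# Deny any request made outside the allowed Regions for every IAM principal in
# the targeted accounts. SCPs never grant access; they only set the maximum
# permissions that identity-based policies in member accounts can use.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ALLOWED_REGIONS}
            },
        }
    ],
}

organizations = boto3.client("organizations")

policy = organizations.create_policy(
    Name="allowed-regions-guardrail",
    Description="Deny requests outside approved Regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=TARGET_OU_ID,
)
```

Because an SCP only limits what permissions are available, identities in the affected accounts still need IAM policies that grant them access.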
Grant least privilege access: Establishing a principle of least privilege ensures that identities are only permitted to perform the minimal set of functions necessary to fulfill a specific task, while balancing usability and efficiency. Operating on this principle limits unintended access and helps ensure that you can audit who has access to which resources. In AWS, identities have no permissions by default, with the exception of the root user, which should only be used for a few specific tasks. You use policies to explicitly grant permissions; policies are attached to IAM entities (such as an IAM role used by federated identities or machines) or to resources (for example, S3 buckets). When you create and attach a policy, you can specify the service actions, resources, and conditions that must be true for AWS to allow access. AWS supports a variety of conditions to help you scope down access. For example, by using the PrincipalOrgID condition key, the identifier of your AWS Organization is verified, so access can be limited to principals within your AWS Organization. You can also control requests that AWS services make on your behalf, such as AWS CloudFormation creating an AWS Lambda function, by using the CalledVia condition key. This enables you to set granular permissions for your human and machine identities across AWS. AWS also has capabilities that enable you to scale your permissions management and adhere to least privilege.

Permissions Boundaries: You can use permissions boundaries to set the maximum permissions that an administrator can set. This enables you to delegate the ability to create and manage permissions to developers, such as the creation of an IAM role, but limit the permissions they can grant so that they cannot escalate their privilege using what they have created.

Attribute-based access control (ABAC): AWS enables you to grant permissions based on attributes, which in AWS are called tags. Tags can be attached to IAM principals (users or roles) and to AWS resources. Using IAM policies, administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal. For example, as an administrator you can use a single IAM policy that grants developers in your organization access to AWS resources that match the developers' project tags. As the team of developers adds resources to projects, permissions are automatically applied based on attributes; as a result, no policy update is required for each new resource. A small policy sketch follows.
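The sketch below creates one such tag-based policy with boto3. The policy name, the Project tag key, and the choice of EC2 start/stop actions are illustrative assumptions; the essential ABAC pattern is the comparison between the resource tag and the ${aws:PrincipalTag/Project} variable in the condition.

```python
import json

import boto3

# Allow principals to stop or start only the EC2 instances whose Project tag
# matches their own Project principal tag. Adjust the actions and tag key to
# fit your own tagging model.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "StartStopInstancesInOwnProject",
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Project": "${aws:PrincipalTag/Project}"
                }
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="abac-project-ec2-start-stop",
    PolicyDocument=json.dumps(abac_policy),
)
```

For the comparison to work, both the calling principal and the instances must actually carry the Project tag, so pair a policy like this with tag governance, for example tag policies in AWS Organizations.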
Analyze public and cross-account access: In AWS, you can grant access to resources in another account. You grant direct cross-account access using policies attached to resources (for example, S3 bucket policies), or by allowing an identity to assume an IAM role in another account. When using resource policies, you want to ensure that you grant access to identities in your organization and are intentional about when you make a resource public. Making a resource public should be done sparingly, because this action allows anyone to access the resource. IAM Access Analyzer uses mathematical methods (that is, provable security) to identify all access paths to a resource from outside of its account. It reviews resource policies continuously and reports findings of public and cross-account access to make it easy for you to analyze potentially broad access.

Share resources securely: As you manage workloads using separate accounts, there will be cases where you need to share resources between those accounts. We recommend that you share resources using AWS Resource Access Manager (AWS RAM). This service enables you to easily and securely share AWS resources within your AWS Organization and organizational units. Using AWS RAM, access to shared resources is automatically granted or revoked as accounts are moved in and out of the Organization or organizational unit with which they are shared. This helps you ensure that resources are only shared with the accounts that you intend.

Reduce permissions continuously: Sometimes, when teams and projects are just getting started, you might choose to grant broad access to inspire innovation and agility. We recommend that you evaluate access continuously and restrict access to only the permissions required, to achieve least privilege. AWS provides access analysis capabilities to help you identify unused access. To help you identify unused users and roles, AWS analyzes access activity and provides access key and role last-used information. You can use the last-accessed timestamp to identify unused users and roles and remove them. Moreover, you can review service and action last-accessed information to identify and tighten permissions for specific users and roles. For example, you can use last-accessed information to identify the specific S3 actions that your application role requires and restrict access to only those. These features are available in the console and programmatically, so you can incorporate them into your infrastructure workflows and automated tools.

Establish an emergency access process: You should have a process that allows emergency access to your workload, in particular your AWS accounts, in the unlikely event of an automated process or pipeline issue. This process could include a combination of different capabilities, for example an emergency AWS cross-account role for access, or a specific process for administrators to follow to validate and approve an emergency request.

Resources
Refer to the following resources to learn more about current AWS best practices for fine-grained authorization.
Videos
• Become an IAM Policy Master in 60 Minutes or Less
• Separation of Duties, Least Privilege, Delegation, & CI/CD
Documentation
• Grant least privilege
• Working with Policies
• Delegating Permissions to Administer IAM Users, Groups and Credentials
• IAM Access Analyzer
• Remove unnecessary credentials
• Assuming a role in the CLI with MFA
• Permissions Boundaries
• Attribute-based access control (ABAC)
Hands on
• Lab: IAM Permission Boundaries Delegating Role Creation
• Lab: IAM Tag Based Access Control for EC2
• Lab: Lambda Cross Account IAM Role Assumption

Detection
Detection enables you to identify a potential security misconfiguration, threat, or unexpected behavior. It's an essential part of the security lifecycle and can be used to support a quality process, a legal or compliance obligation, and threat identification and response efforts. There are different types of detection mechanisms; for example, logs from your workload can be analyzed for exploits that are being used. You should regularly review the detection mechanisms related to your workload to ensure that you are meeting internal and external policies and requirements. Automated alerting and notifications should be based on defined conditions to enable your teams or tools to investigate. These mechanisms are important reactive factors that can help your organization identify and understand the scope of anomalous activity.

In AWS, there are a number of approaches you can use when addressing detective mechanisms. The following sections describe how to use these approaches:
• Configure
• Investigate

Configure
Configure service and application logging: A foundational practice is to establish a set of detection mechanisms at the account level. This base set of mechanisms is aimed at recording and detecting a wide range of actions on all resources in your account. They allow you to build out a comprehensive detective capability, with options that include automated remediation and partner integrations to add functionality. In AWS, services in this base set include the following (a short enablement sketch follows the list):
• AWS CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.
• AWS Config monitors and records your AWS resource configurations and allows you to automate the evaluation and remediation against desired configurations.
• Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads.
• AWS Security Hub provides a single place that aggregates, organizes, and prioritizes your security alerts or findings from multiple AWS services and optional third-party products, to give you a comprehensive view of security alerts and compliance status.
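As a sketch of part of this baseline in a single account and Region, the following assumes that CloudTrail and AWS Config are already handled elsewhere (for example, through AWS Control Tower) and turns on GuardDuty and Security Hub with boto3. The Region is a placeholder; in a multi-account organization you would normally enable these services through a delegated administrator account rather than running a script in each account.

```python
import boto3
from botocore.exceptions import ClientError

REGION = "eu-west-1"  # illustrative Region

guardduty = boto3.client("guardduty", region_name=REGION)
securityhub = boto3.client("securityhub", region_name=REGION)

# GuardDuty: create a detector only if none exists in this Region yet.
if not guardduty.list_detectors()["DetectorIds"]:
    detector_id = guardduty.create_detector(Enable=True)["DetectorId"]
    print("GuardDuty detector created:", detector_id)
else:
    print("GuardDuty already enabled")

# Security Hub: enabling it a second time raises an error, so tolerate that case.
try:
    securityhub.enable_security_hub()
    print("Security Hub enabled")
except ClientError as error:
    print("Security Hub not enabled by this run:", error.response["Error"]["Code"])
```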
Building on the foundation at the account level, many core AWS services, for example Amazon Virtual Private Cloud (VPC), provide service-level logging features. VPC Flow Logs enable you to capture information about the IP traffic going to and from network interfaces; this can provide valuable insight into connectivity history and trigger automated actions based on anomalous behavior.

For EC2 instances and application-based logging that doesn't originate from AWS services, logs can be stored and analyzed using Amazon CloudWatch Logs. An agent collects the logs from the operating system and the applications that are running, and automatically stores them. Once the logs are available in CloudWatch Logs, you can process them in real time or dive into analysis using Insights. Equally important to collecting and aggregating logs is the ability to extract meaningful insight from the great volumes of log and event data generated by complex architectures; see the Monitoring section of the Reliability Pillar whitepaper for more detail.

Logs can themselves contain data that is considered sensitive, either when application data has erroneously found its way into log files that the CloudWatch Logs agent is capturing, or when cross-region logging is configured for log aggregation and there are legislative considerations about shipping certain kinds of information across borders. One approach is to use Lambda functions, triggered on events when logs are delivered, to filter and redact log data before forwarding it into a central logging location, such as an S3 bucket. The unredacted logs can be retained in a local bucket until a "reasonable time" has passed (as determined by legislation and your legal team), at which point an S3 lifecycle rule can automatically delete them. Logs can further be protected in Amazon S3 by using S3 Object Lock, where you can store objects using a write-once-read-many (WORM) model.

Analyze logs, findings, and metrics centrally: Security operations teams rely on the collection of logs and the use of search tools to discover potential events of interest, which might indicate unauthorized activity or unintentional change. However, simply analyzing collected data and manually processing information is insufficient to keep up with the volume of information flowing from complex architectures. Analysis and reporting alone don't facilitate the assignment of the right resources to work an event in a timely fashion.

A best practice for building a mature security operations team is to deeply integrate the flow of security events and findings into a notification and workflow system, such as a ticketing system, a bug/issue system, or another security information and event management (SIEM) system. This takes the workflow out of email and static reports and allows you to route, escalate, and manage events or findings. Many organizations are also integrating security alerts into their chat/collaboration and developer productivity platforms. For organizations embarking on automation, an API-driven, low-latency ticketing system offers considerable flexibility when planning "what to automate first."

This best practice applies not only to security events generated from log messages depicting user activity or network events, but also to changes detected in the infrastructure itself. The ability to detect change, determine whether a change was appropriate, and then route that information to the correct remediation workflow is essential in maintaining and validating a secure architecture, especially for changes whose undesirability is sufficiently subtle that their execution cannot currently be prevented with a combination of IAM and Organizations configuration.

GuardDuty and Security Hub provide aggregation, deduplication, and analysis mechanisms for log records that are also made available to you via other AWS services. Specifically, GuardDuty ingests, aggregates, and analyzes information from the VPC DNS service, along with information that you can otherwise see via CloudTrail and VPC Flow Logs. Security Hub can ingest, aggregate, and analyze output from GuardDuty, AWS Config, Amazon Inspector, Macie, AWS Firewall Manager, a significant number of third-party security products available in the AWS Marketplace, and, if built accordingly, your own code. Both GuardDuty and Security Hub have a master-member model that can aggregate findings and insights across multiple accounts, and Security Hub is often used by customers who have an on-premises SIEM as an AWS-side log and alert preprocessor and aggregator, from which they can then ingest findings via Amazon EventBridge and a Lambda-based processor and forwarder.

Resources
Refer to the following resources to learn more about current AWS recommendations for capturing and analyzing logs.
Videos
• Threat management in the cloud: Amazon GuardDuty & AWS Security Hub
• Centrally Monitoring Resource Configuration & Compliance
Documentation
• Setting up Amazon GuardDuty
• AWS Security Hub
• Getting started: Amazon CloudWatch Logs
• Amazon EventBridge
• Configuring Athena to analyze CloudTrail logs
• Amazon CloudWatch
• AWS Config
• Creating a trail in CloudTrail
• Centralize logging solution
Hands on
• Lab: Enable Security Hub
• Lab: Automated Deployment of Detective Controls
• Lab: Amazon GuardDuty hands on

Investigate
Implement actionable security events: For each detective mechanism you have, you should also have a process, in the form of a runbook or playbook, to investigate. For example, when you enable Amazon GuardDuty, it generates different findings. You should have a runbook entry for each finding type; for example, if a trojan is discovered, your runbook has simple instructions that direct someone to investigate and remediate.

Automate response to events: In AWS, routing events of interest and information about potentially unexpected changes into an automated workflow can be achieved using Amazon EventBridge. This service provides a scalable rules engine designed to broker both native AWS event formats (such as CloudTrail events) and custom events that you can generate from your applications. Amazon EventBridge also allows you to route events to a workflow system for those building incident response systems (Step Functions), to a central security account, or to a bucket for further analysis. A minimal sketch of such a rule follows.
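For example, the sketch below creates an EventBridge rule that forwards high-severity GuardDuty findings to an SNS topic that the security team subscribes to. The topic ARN, rule name, and severity threshold are placeholder assumptions, and the topic's resource policy must allow events.amazonaws.com to publish to it.

```python
import json

import boto3

# Hypothetical SNS topic subscribed to by the security team.
ALERT_TOPIC_ARN = "arn:aws:sns:eu-west-1:111122223333:security-alerts"
RULE_NAME = "guardduty-high-severity-findings"

events = boto3.client("events", region_name="eu-west-1")

# Match GuardDuty findings with severity 7 (high) or above.
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 7]}]},
}

events.put_rule(
    Name=RULE_NAME,
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
    Description="Route high-severity GuardDuty findings to the security team",
)

events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "notify-security-team", "Arn": ALERT_TOPIC_ARN}],
)
```

The same pattern can route findings to Step Functions, a Lambda function, or a central security account's event bus instead of SNS.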
Detecting change and routing this information to the correct workflow can also be accomplished using AWS Config rules. AWS Config detects changes to in-scope services (though with higher latency than Amazon EventBridge) and generates events that can be parsed using AWS Config rules for rollback, enforcement of compliance policy, and forwarding of information to systems such as change management platforms and operational ticketing systems. As well as writing your own Lambda functions to respond to AWS Config events, you can also take advantage of the AWS Config Rules Development Kit and a library of open source AWS Config Rules.

Resources
Refer to the following resources to learn more about current AWS best practices for integrating auditing controls with notification and workflow.
Videos
• Amazon Detective
• Remediating Amazon GuardDuty and AWS Security Hub Findings
• Best Practices for Managing Security Operations on AWS
• Achieving Continuous Compliance using AWS Config
Documentation
• Amazon Detective
• Amazon EventBridge
• AWS Config Rules
• AWS Config Rules Repository (open source)
• AWS Config Rules Development Kit
Hands on
• Solution: Real-Time Insights on AWS Account Activity
• Solution: Centralized Logging

Infrastructure Protection
Infrastructure protection encompasses control methodologies, such as defense in depth, that are necessary to meet best practices and organizational or regulatory obligations. Use of these methodologies is critical for successful, ongoing operations in the cloud.

Infrastructure protection is a key part of an information security program. It ensures that systems and services within your workload are protected against unintended and unauthorized access and potential vulnerabilities. For example, you'll define trust boundaries (for example, network and account boundaries), system security configuration and maintenance (for example, hardening, minimization, and patching), operating system authentication
and authorizations (for example users keys and access levels ) and other appropriate policy enforcement points (for example web application firewalls and/or API gateways) In AWS there are a number of approaches to infrastructure protection The following sections describe how to use these approaches: • Protecting networks • Protecting compute Protecting Networks The careful planning and management of your network design forms the foundation of how you provide isolation and boundaries for resources within your workload Because many resources in your workload operate in a VPC and inherit the security properties it’s critical that the design is supported with inspection and protection mechanisms backed by automation Likewise for workloads that operate outside a VPC using purely edge services and/or serverless the b est practices apply in a more simplified approach Refer to the AWS Well Architected Serverless Applications Lens for specific guidance on serverless secur ity Create network layers: Components such as EC2 instances RDS database clusters and Lambda functions that share reachability requirements can be segmented into layers formed by subnets For example a n RDS database cluster in a VPC with no need for in ternet access should be placed in subnets with no route to or from the internet This layered approach for the control s mitigate s the impact of a single layer misconfiguration which could allow unintended access For AWS Lambda you can run your functions in your VPC to take advance of VPCbased controls For network connectivity that can include thousands of VPCs AWS accounts and on premises networks you should use AWS Transit Gateway It acts as a hub that controls how traffic is routed among all the connected networks which act like spokes Traffic ArchivedAmazon Web Services Security Pillar 21 between an Amazon VPC and AWS Transit Gateway remains on the AWS private network which reduces external threat vectors such as distributed denial of service (DDoS) attacks and common exploits such as SQL injection cross site scripting cross site request forgery or abuse of broken authentication code AWS Transit Gateway interregion peering also encrypts inter region traffic with no single point of failure or bandwidth bottleneck Control traffic a t all layers: When architecting your network topology you should examine the connectivity requirements of each component For example if a component requires internet accessib ility (inbound and outbound) connectivity to VPCs edge services and external data centers A VPC allows you to define your network topology that spans an AWS Region with a private IPv4 address range that you set or an IPv6 address range AWS selects You should a pply multiple controls with a defense in depth approach for both in bound and outbound traffic including the use of security groups (stateful inspection firewall) Network ACLs subnets and route tables Within a VPC you can create subnets in an Availability Zone Each subnet can have an associated route table that defin es routing rules for managing the paths that traffic takes within the subnet You can define an internet routable subnet by having a route that goes to an internet or NAT gateway attached to the VPC or through another VPC When an instance RDS database or other service is launched within a VPC it has its own security group per network interface This firewall is outside the operating system layer and can be used to define rules for allowed inbound and outbound traffic You can also define relationships between 
security groups For example instances within a database tier security group only accept traffic from instances within the application tier by reference to the security groups applied to the instances involved Unless you are using non TCP proto cols it should n’t be necessary to have an EC2 instance directly accessible by the internet (even with ports restricted by security groups) without a load balancer or CloudFront This helps protect it from unintended access through an operating system or application issue A subnet can also have a network ACL attached to it which acts as a stateless firewall You should configure the network ACL to narrow the scope of traffic allowed between layers note that you need to define both inbound and outbound rules While some AWS services require components to access the internet to make API calls (this being where AWS API endp oints are located ) others use endpoints within your VPCs Many AWS services including Amazon S3 and DynamoDB support VPC endpoints and this technology has been general ized in AWS PrivateLink For VPC ArchivedAmazon Web Services Security Pillar 22 assets that need to make outbound connections to the internet these can be made outbound only (one way) through an AWS managed NAT gateway outbound only internet gateway or web proxies that you create and manage Impleme nt inspection and protection: Inspect and filter your traffic at each layer For components transacting over HTTP based protocols a web application firewall can help protect from common attacks AWS WAF is a web a pplication firewall that lets you monitor and block HTTP(s) requests that match your configurable rules that are forwarded to an Amazon API Gateway API Amazon CloudFront or an Application Load Balancer To get started with AWS WAF you can use AWS Managed Rules in combination with your own or use existing partner integrations For managing AWS WAF AWS Shield Advanced protections and Amazon VPC security groups across AWS Organizations you can use AWS Firewall Manager It allows you to centrally configure and manage firewall rules across your accounts and applications mak ing it easier to scale enforcement of common rules It also enables you to rapidly respond to attacks using AWS Shield Advanced or solutions that can automatically block unwanted requests to your web applications Automate network protection: Automate protection mechanisms to provide a self defending network based on threat intelligence and anomaly detection For example intrusion detection and prevention tools that can adapt to current threats and reduce their impact A web application firewall is an example of where you can automate network protection for example by using the AWS WAF Security Automations solution (https://githubcom/awslabs/aws wafsecurity automations ) to automatically b lock requests originating from IP addresses associated with known threat actors Resources Refer to the following resources to learn more about AWS best practices for protecting networks Video • AWS Transit Gatew ay reference architectures for many VPCs • Application Acceleration and Protection with Amazon CloudFront AWS WAF and AWS Shield • DDoS Attack Detection at Scale ArchivedAmazon Web Services Security Pillar 23 Docume ntation • Amazon VPC Documentation • Getting started with AWS WAF • Network Access Control Lists • Security Groups for Your VPC • Recommended Network ACL Rules for Your VPC • AWS Firewall Manager • AWS PrivateLink • VPC Endpoints • Amazon Inspector Hands on • Lab: Automated Deployment of VPC • Lab: Automated 
Deployment of Web Application Firewall Protecting Compute Perform vulnerability management : Frequently scan and patch for vulnerabilities in your code dependencies and in your infrastructure to help protect against new threats Using a build and deployment pipeline you can automate many parts of vulnerability management : • Using thirdparty st atic code analysis tools to identify common security issues such as unchecked function input bounds as well as more recent CVEs You can use Amazon CodeGuru for languages supported • Using thirdparty depend ency checking tools to determine whether libraries your code links against are the latest versions are themselves free of CVEs and have licensing conditions that meet your software policy requirements ArchivedAmazon Web Services Security Pillar 24 • Using Amazon Inspector you can perform configurati on assessments against your instances for known common vulnerabilities and exposures (CVEs) assess against security benchmarks and fully automate the notification of defects Amazon Inspector runs on production instances or in a build pipeline and it notifies developers and engineers when findings are present You can access findings programmatically and direct your team to backlogs and bug tracking systems EC2 Image Builder can be used to maintain s erver images (AMIs) with automated patching AWS provided security policy enforcement and other customizations • When using containers implement ECR Image Scanning in your build pipeline and on a regular basis against your image repository to look for CVEs in your containers • While Amazon Inspector and other tools are effective at identifying configurations and any CVEs that are present other methods are required to test your workload at the application level Fuzzing is a well known method of finding bugs using automation to inject malformed data into input fields and other areas of your application A number o f these functions can be performed using AWS services products in the AWS Marketplace or open source tooling Reduce attack surface: Reduce your attack surface by hardening operating systems minimizing components libraries and externally consumable se rvices in use To reduce your attack surface you need a threat model to identify the entry points and potential threats that could be encountered A common practice in reducing attack surface is to start at reducing unused components whether they are operating system p ackages applications etc (for EC2 based workloads) or external software modules in your code (for all workloads) Many hardening and security configuration guides exist for common operating systems and server software for example from the Center for Internet Security that you can use as a starting point and iterate Enable people to perform actions at a distance: Removing the ability for interactive access reduces the risk of human error and the potential for manual configuration or management For example use a change management workflow to manage EC2 instances using tools such as AWS Systems Manager instead of allowing direct access or via a bastion host AWS Systems Manager can automate a variety of maint enance and deployment tasks using features including automation workflows documents (playbooks) and the run command AWS CloudFormation stacks build from pipelines ArchivedAmazon Web Services Security Pillar 25 and can automate your infrastructure deployment and management tasks without using the AWS Management Console or APIs directly Implement managed services: Implement services that manage 
resources such as Amazon RDS AWS Lambda and Amazon ECS to reduce your security maintenance tasks as part of the shared responsibility model For example Amazon RDS helps you set up operate and scale a relational database automates administration tasks such as hardware provisioning database setup patching and backups This means you have mo re free time to focus on securing your application in other ways described in the AWS Well Architected Framework AWS Lambda lets you run code without provisioning or managing servers so you only need to focus on the connectivity invocation and security at the code level –not the infrastructure or operating system Validate software integrity : Implement mechanisms (eg code signing) to validate that the software code and libraries used in the workload are from trusted sources and have not been tampered with For example you should verify the code signing certificate of binaries and scripts to confirm the author and ensure it has not been tampered with since created by the author Additionally a checksum of software that you download compared to that of the checksum from the provider can help ensure it has not been tampered with Automate compute protection: Automate your protective compute mechanisms including vulnerability management reduction in attack surface and management of resources The au tomation will help you invest time in securing other aspects of your workload and reduce the risk of human error Resources Refer to the following resources to learn more about AWS best practices for protecting compute Video • Security best practices for the Amazon EC2 instance metadata service • Securing Your Block Storage on AWS • Securing Serverless and Container Services • Running high security workloads on Amazon EKS • Architecting Security through Policy Guardrails in Amazon EKS ArchivedAmazon Web Services Security Pillar 26 Documentation • Security Overview of AWS Lambda • Security in Amazon EC2 • AWS Systems Manager • Amazon Inspector • Writing your own AWS Systems Manager documents • Replacing a Bastion Host with Amazon EC2 Systems Manager Hands on • Lab: Automated Deployment of EC2 Web Application ArchivedAmazon Web Services Security Pillar 27 Data Protection Before architecting any workload foundational practices that influence security should be in place For example data classification provides a way to categorize data based on levels of sensitivity and encryption protects data by way of render ing it unintelligible to unauthorized access These methods are important because they support objectives such as preventing mishandling or complying with regulatory obligations In AWS there are a number of different approaches you can use when addressin g data protection The following section describes how to use these approaches: • Data classification • Protecting data at rest • Protecting data in transit Data Classification Data classification provides a way to categorize organizational data based on critica lity and sensitivity in order to help you determine appropriate protecti on and retention controls Identify the data within your workload : You need to understand the type and classiciation of data your workload is processing the associated business proce sses data owner applicable legal and compliance requirements where it’s stored and the resulting controls that are needed to be enforced This may include classifications to indicate if the data is intended to be publicly available if the data is inte rnal use only such as customer personally identifiable 
information (PII) or if the data is for more restricted access such as intellectual property legally privileged or marked sensititve and more By carefully managing an appropriate data classificatio n system along with each workload’s level of protection requirements you can map the controls and level of access/protection appropriate for the data For example public content is available for anyone to access but important content is encrypted and s tored in a protected manner that requires authorized access to a key for decrypting the content Define data protection controls: By using resource tags separate AWS accounts per sensitivity (and potentially also per caveat / enclave / community of intere st) IAM policies Organizations SCPs AWS KMS and AWS CloudHSM you can define and implement your policies for data classification and protection with encryption For example if you have a project with S3 buckets that contain highly critical data or EC2 ArchivedAmazon Web Services Security Pillar 28 instances that process confidential data they can be tagged with a “Project=ABC” tag Only your immediate team knows what the project code means and it provides a way to use attribute based access control You can define levels of access to the AWS KMS encryption keys through key policies and grants to ensure that only appropriate services have access to the sensitive content through a secure mechanism If you are making authorization decisions based on tags you should make sure that the permissions on the tags are defined appropriately using tag policies in AWS Organizations Define data lifecycle management: Your defined lifecycle strategy should be based on sensitivity level as well as legal and organization requirements Aspects including the duration for which you retain data data destruction processes data access management data transformation and data sharing should be considered When choosing a data classification methodology balance usability versus access You should also accommodate the mu ltiple levels of access and nuances for implementing a secure but still usable approach for each level Always use a defense in depth approach and reduce human access to data and mechanisms for transforming deleting or copying data For example requir e users to strongly authenticate to an application and give the application rather than the users the requisite access permission to perform “action at a distance” In addition ensure that users come from a trusted network path and require access to th e decryption keys Use tools such as dashboards and automated reporting to give users information from the data rather than giving them direct access to the data Automate identification and classification: Automat ing the identification and classificatio n of data can help you implement the correct controls Using automation for this instead of direct access from a person reduce s the risk of human error and exposure You should evaluate using a tool such as Amazon Macie that uses machine learning to automatically discover classify and protect sensitive data in AWS Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved Resources Refer to the following resources to learn more about data classification Documentation • Data Classification Whitepaper • Tagging Your Amazon EC2 Resources ArchivedAmazon Web Services Security Pillar 29 • Amazon S3 Object Tagging 
Protecting Data at Rest Data at rest represents any data that you persist in non volatile storage for any duration in your workload This includes block stor age object storage databases archives IoT devices and any other storage medium on which data is persisted Protecting your data at rest reduces the risk of unauthorized access when encryption and appropriate access controls are implemented Encryptio n and tokenization are two important but distinct data protection schemes Tokenization is a process that allows you to define a token to represent an otherwise sensitive piece of information (for example a token to represent a customer’s credit card numb er) A token must be meaningless on its own and must not be derived from the data it is tokenizing –therefore a cryptographic digest is not usable as a token By carefully planning your tokenization approach you can provide additional protection for your content and you can ensure that you meet your compliance requirements For example you can reduce the compliance scope of a credit card processing system if you leverage a token instead of a credit card number Encryption is a way of transforming content in a manner that makes it unreadable without a secret key necessary to decrypt the content back into plaintext Both tokenization and encryption can be used to secure and protect information as appropriate Further masking is a techni que that allows part of a piece of data to be redacted to a point where the remaining data is not considered sensitive For example PCIDSS allows the last four digits of a card number to be retained outside the compliance scope boundary for indexing Implement secure key management: By defining an encryption approach that includes the storage rotation and access control of keys you can help provide protection for your content against unauthorized users and against unnecessary exposure to authorized use rs AWS KMS helps you manage encryption keys and integrates with many AWS services This service provides durable secure and redundant storage for your master keys You can define your key aliases as well as key level policies The policies help you define key administrators as well as key users Additionally AWS CloudHSM is a cloud based hardware security module (HSM) that enables you to easily generate and use your own encryption keys in the AWS Cloud It helps you meet corporate contractual and regulatory compliance requirements for data security by using FIPS 140 2 Level 3 validated HSMs ArchivedAmazon Web Services Security Pillar 30 Enforce encryption at rest: You should ensure that the only way to store data is by using encr yption AWS KMS integrates seamlessly with many AWS services to make it easier for you to encrypt all your data at rest For example in Amazon S3 you can set default encry ption on a bucket so that all new objects are automatically encrypted Additionally Amazon EC2 supports the enforcement of encryption by setting a default encryption option for an entire Region Enforce access control: Different controls including access (using least privilege ) backups (see Reliability whitepaper) isolation and versioning can all help protect your data at rest Access to your data should be audited using detective mechanisms covered earlier in this paper including CloudTrail and service level log such as S3 access logs You should inventory what data is publicly accessible and plan for how you can reduce the amount of d ata available over time Amazon S3 Glacier Vault Lock and S3 Object Lock are capabilities providing 
mandatory access control —once a vault policy is locked with the compliance option not even the root user can change it until the lock expires The mechanis m meets the Books and Records Management requirements of the SEC CFTC and FINRA For more details see this whitepaper Audit the use of encryption keys: Ensure that you understand and audit the use of encryption keys to validate that the access control mechanisms on the keys are appropriately implemented For example any AWS service using an AWS KMS key logs each use in A WS CloudTrail You can then query AWS CloudTrail by using a tool such as Amazon CloudWatch Insights to ensure that all uses of your keys are valid Use mechanisms to keep people away from data: Keep all users away from directly accessing sensitive data and systems under normal operational circumstances For example use a change management workflow to manage EC2 instances using tools instead of allowing direct access or a bastion host This can be achieved using AWS Systems Manager Automation which uses automation documents that contain steps you use to perform tasks These documents can be stored in source control be peer reviewed before running and tested thoroughly to minimize risk compared to shell access Business users could have a dashboard instead of direct access to a data store to run q ueries Where CI/CD pipelines are not used determine which controls and processes are required to adequately provide a normally disabled break glass access mechanism Automate data at rest protection: Use automated tools to validate and enforce data at rest controls continuously for example verify that there are only encrypted storage resources You can automate validation that all EBS volumes are encrypted using AWS Config Rules AWS Security Hub can also verify a number of different controls through ArchivedAmazon Web Services Security Pillar 31 automated check s against security standards Additionally your AWS Config Rules can automatically remediate noncompliant resources Resources Refer to the following resources to learn more about AWS best practices for protecting data at rest Video • How Encryption Works in AWS • Securing Your Block Storage on AWS • Achieving security goals with AWS CloudHSM • Best Practices for Implementing AWS Key Management Service • A Deep Dive into AWS Encryption Services Documentation • Protecting Amazon S3 Data Using Encryption • Amazon EBS Encryption • Encrypting Amazon RDS Resources • Protecting Data Using Encryption • How AWS services use AWS KMS • Amazon EBS Encryption • AWS Key Management Service • AWS CloudHSM • AWS KMS Cryptographic Details Whitepaper • Using Key Policies in AWS KMS • Using Bucket Policies and User Policies • AWS Crypto Tools ArchivedAmazon Web Services Security Pillar 32 Protecting Data in Transit Data in transit is any data that is sent from one system to another This includes communication between resources within your workload as well as communicati on between other services and your end users By providing the appropriate level of protection for your data in transit you protect the confidentiality and integrity of your workload’s data Implement secure key and certificate management: Store encrypti on keys and certificates securely and rotate them at appropriate time intervals with strict access control The best way to accomplish this is to use a managed service such as AWS Certificate Manage r (ACM) It lets you easily provision manage and deploy public and private Transport Layer Security (TLS) certificates for use with AWS 
Resources

Refer to the following resources to learn more about AWS best practices for protecting data at rest.

Videos
• How Encryption Works in AWS
• Securing Your Block Storage on AWS
• Achieving security goals with AWS CloudHSM
• Best Practices for Implementing AWS Key Management Service
• A Deep Dive into AWS Encryption Services

Documentation
• Protecting Amazon S3 Data Using Encryption
• Amazon EBS Encryption
• Encrypting Amazon RDS Resources
• Protecting Data Using Encryption
• How AWS services use AWS KMS
• AWS Key Management Service
• AWS CloudHSM
• AWS KMS Cryptographic Details Whitepaper
• Using Key Policies in AWS KMS
• Using Bucket Policies and User Policies
• AWS Crypto Tools

Protecting Data in Transit

Data in transit is any data that is sent from one system to another. This includes communication between resources within your workload as well as communication between other services and your end users. By providing the appropriate level of protection for your data in transit, you protect the confidentiality and integrity of your workload's data.

Implement secure key and certificate management: Store encryption keys and certificates securely and rotate them at appropriate time intervals with strict access control. The best way to accomplish this is to use a managed service such as AWS Certificate Manager (ACM). It lets you easily provision, manage, and deploy public and private Transport Layer Security (TLS) certificates for use with AWS services and your internal connected resources. TLS certificates are used to secure network communications and establish the identity of websites over the internet, as well as resources on private networks. ACM integrates with AWS resources such as Elastic Load Balancers, Amazon CloudFront distributions, and APIs on API Gateway, and also handles automatic certificate renewals. If you use ACM to deploy a private root CA, both certificates and private keys can be provided by it for use in EC2 instances, containers, and so on.

Enforce encryption in transit: Enforce your defined encryption requirements based on appropriate standards and recommendations to help you meet your organizational, legal, and compliance requirements. AWS services provide HTTPS endpoints using TLS for communication, thus providing encryption in transit when communicating with the AWS APIs. Insecure protocols, such as HTTP, can be audited and blocked in a VPC through the use of security groups. HTTP requests can also be automatically redirected to HTTPS in Amazon CloudFront or on an Application Load Balancer. You have full control over your computing resources to implement encryption in transit across your services. Additionally, you can use VPN connectivity into your VPC from an external network to facilitate encryption of traffic. Third-party solutions are available in the AWS Marketplace if you have special requirements.
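For example, the redirect behavior mentioned above can be configured on an Application Load Balancer by giving the HTTP listener a default redirect action. The following boto3 sketch assumes you already have a load balancer ARN and an HTTPS listener on port 443; the ARN shown is a placeholder.

    import boto3

    elbv2 = boto3.client("elbv2")

    LOAD_BALANCER_ARN = "arn:aws:elasticloadbalancing:region:111122223333:loadbalancer/app/example/abc123"  # placeholder

    # Listen on port 80 only to redirect every request to HTTPS on port 443.
    elbv2.create_listener(
        LoadBalancerArn=LOAD_BALANCER_ARN,
        Protocol="HTTP",
        Port=80,
        DefaultActions=[
            {
                "Type": "redirect",
                "RedirectConfig": {
                    "Protocol": "HTTPS",
                    "Port": "443",
                    "StatusCode": "HTTP_301",
                },
            }
        ],
    )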
Authenticate network communications: Using network protocols that support authentication allows trust to be established between the parties. This adds to the encryption used in the protocol to reduce the risk of communications being altered or intercepted. Common protocols that implement authentication include Transport Layer Security (TLS), which is used in many AWS services, and IPsec, which is used in AWS Virtual Private Network (AWS VPN).

Automate detection of unintended data access: Use tools such as Amazon GuardDuty to automatically detect attempts to move data outside of defined boundaries based on data classification level, for example, to detect a trojan that is copying data to an unknown or untrusted network using the DNS protocol. In addition to Amazon GuardDuty, Amazon VPC Flow Logs, which capture network traffic information, can be used with Amazon EventBridge to trigger detection of abnormal connections, both successful and denied. Access Analyzer for S3 can help assess what data is accessible to whom in your S3 buckets.

Resources

Refer to the following resources to learn more about AWS best practices for protecting data in transit.

Videos
• How can I add certificates for websites to the ELB using AWS Certificate Manager
• Deep Dive on AWS Certificate Manager Private CA

Documentation
• AWS Certificate Manager
• HTTPS Listeners for Your Application Load Balancer
• AWS VPN
• API Gateway Edge-Optimized

Incident Response

Even with extremely mature preventive and detective controls, your organization should still implement mechanisms to respond to and mitigate the potential impact of security incidents. Your preparation strongly affects the ability of your teams to operate effectively during an incident, to isolate and contain issues, and to restore operations to a known good state. Putting in place the tools and access ahead of a security incident, then routinely practicing incident response through game days, helps ensure that you can recover while minimizing business disruption.

Design Goals of Cloud Response

Although the general processes and mechanisms of incident response, such as those defined in the NIST SP 800-61 Computer Security Incident Handling Guide, remain true, we encourage you to evaluate these specific design goals that are relevant to responding to security incidents in a cloud environment:

• Establish response objectives: Work with your stakeholders, legal counsel, and organizational leadership to determine the goal of responding to an incident. Some common goals include containing and mitigating the issue, recovering the affected resources, preserving data for forensics, and attribution.
• Document plans: Create plans to help you respond to, communicate during, and recover from an incident.
• Respond using the cloud: Implement your response patterns where the event and data occur.
• Know what you have and what you need: Preserve logs, snapshots, and other evidence by copying them to a centralized security cloud account. Use tags, metadata, and mechanisms that enforce retention policies. For example, you might choose to use the Linux dd command or a Windows equivalent to make a complete copy of the data for investigative purposes.
• Use redeployment mechanisms: If a security anomaly can be attributed to a misconfiguration, the remediation might be as simple as removing the variance by redeploying the resources with the proper configuration. When possible, make your response mechanisms safe to execute more than once and in environments in an unknown state.
• Automate where possible: As you see issues or incidents repeat, build mechanisms that programmatically triage and respond to common situations. Use human responses for unique, new, and sensitive incidents.
• Choose scalable solutions: Strive to match the scalability of your organization's approach to cloud computing, and reduce the time between detection and response.
• Learn and improve your process: When you identify gaps in your process, tools, or people, implement plans to fix them. Simulations are safe methods to find gaps and improve processes.

In AWS, there are a number of different approaches you can use when addressing incident response. The following section describes how to use these approaches:

• Educate your security operations and incident response staff about cloud technologies and how your organization intends to use them.
• Prepare your incident response team to detect and respond to incidents in the cloud, enable detective capabilities, and ensure appropriate access to the necessary tools and cloud services. Additionally, prepare the necessary runbooks, both manual and automated, to ensure reliable and consistent responses. Work with other teams to establish expected baseline operations, and use that knowledge to identify deviations from those normal operations.
• Simulate both expected and unexpected security events within your cloud environment to understand the effectiveness of your preparation.
• Iterate on the outcome of your simulation to improve the scale of your response posture, reduce time to value, and further reduce risk.

Educate

Automated processes enable organizations to spend more time focusing on measures to increase the security of their workloads. Automated incident response also makes humans available to correlate events, practice simulations, devise new response procedures, perform research, develop new skills, and test or build new tools. Despite increased automation, your team, specialists, and responders within a security organization still require continuous education. Beyond general cloud experience, you need to significantly invest in your people to be successful. Your organization can benefit by providing additional training to your staff to learn programming skills, development processes (including version control systems and deployment practices), and infrastructure automation. The best way to learn is hands-on, through running incident response game days. This allows the experts on your team to hone their tools and techniques while teaching others.

Prepare

During an incident, your incident response teams must have access to various tools and the workload resources involved in the incident. Make sure that your teams have appropriate pre-provisioned access to perform their duties before an event occurs. All tools, access, and plans should be documented and tested before an event occurs to make sure that they can provide a timely response.

Identify key personnel and external resources: When you define your approach to incident response in the cloud, in unison with other teams (such as your legal counsel, leadership, business stakeholders, AWS Support Services, and others), you must identify key personnel, stakeholders, and relevant contacts. To reduce dependency and decrease response time, make sure that your team, specialist security teams, and responders are educated about the services that you use and have opportunities to practice hands-on. We encourage you to identify external AWS security partners that can provide you with outside expertise and a different perspective to augment your response capabilities. Your trusted security partners can help you identify potential risks or threats that you might not be familiar with.

Develop incident management plans: Create plans to help you respond to, communicate during, and recover from an incident. For example, you can start an incident response plan with the most likely scenarios for your workload and organization. Include how you would communicate and escalate both internally and externally. Create incident response plans in the form of playbooks, starting with the most likely scenarios for your workload and organization. These might be events that are currently generated. If you need a starting place, you should look at AWS Trusted Advisor and Amazon GuardDuty findings. Use a simple format such as markdown so it's easily maintained, but ensure that important commands or code snippets are included so they can be executed without having to look up other documentation. Start simple and iterate. Work closely with your security experts and partners to identify the tasks required to ensure that the processes are possible. Define the manual descriptions of the processes you perform. After this, test the processes and iterate on the runbook pattern to improve the core logic of your response. Determine what the exceptions are and what the alternative resolutions are for those scenarios. For example, in a development environment you might want to terminate a misconfigured Amazon EC2 instance. But if the same event occurred in a production environment, instead of terminating the instance you might stop the instance and verify with stakeholders that critical data will not be lost and that termination is acceptable. Include how you would communicate and escalate both internally and externally. When you are comfortable with the manual response to the process, automate it to reduce the time to resolution.
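To illustrate what one automated playbook step might look like once you are comfortable with the manual process, the following sketch stops (rather than terminates) a suspect EC2 instance and tags it for follow-up, mirroring the production example above. The instance ID shown in the usage comment is hypothetical.

    import boto3

    ec2 = boto3.client("ec2")

    def contain_instance(instance_id: str) -> None:
        """Stop a suspect instance and tag it so responders can find it later."""
        # Stopping preserves the EBS volumes for later forensic analysis.
        ec2.stop_instances(InstanceIds=[instance_id])
        ec2.create_tags(
            Resources=[instance_id],
            Tags=[
                {"Key": "incident-response", "Value": "contained"},
                {"Key": "stakeholder-review", "Value": "pending"},
            ],
        )

    # Example invocation with a placeholder instance ID:
    # contain_instance("i-0123456789abcdef0")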
Pre-provision access: Ensure that incident responders have the correct access pre-provisioned in AWS and other relevant systems to reduce the time from investigation through to recovery. Determining how to get access for the right people during an incident delays the response and can introduce other security weaknesses if access is shared or not properly provisioned while under pressure. You must know what level of access your team members require (for example, what kinds of actions they are likely to take), and you must provision access in advance. Access in the form of roles or users created specifically to respond to a security incident is often privileged in order to provide sufficient access. Therefore, use of these user accounts should be restricted, they should not be used for daily activities, and their usage should be alerted on.

Pre-deploy tools: Ensure that security personnel have the right tools pre-deployed into AWS to reduce the time from investigation through to recovery. To automate security engineering and operations functions, you can use a comprehensive set of APIs and tools from AWS. You can fully automate identity management, network security, data protection, and monitoring capabilities and deliver them using popular software development methods that you already have in place. When you build security automation, your system can monitor, review, and initiate a response, rather than having people monitor your security position and manually react to events. If your incident response teams continue to respond to alerts in the same way, they risk alert fatigue. Over time, the team can become desensitized to alerts and can either make mistakes handling ordinary situations or miss unusual alerts. Automation helps avoid alert fatigue by using functions that process the repetitive and ordinary alerts, leaving humans to handle the sensitive and unique incidents. You can improve manual processes by programmatically automating steps in the process. After you define the remediation pattern for an event, you can decompose that pattern into actionable logic and write the code to perform that logic. Responders can then execute that code to remediate the issue. Over time, you can automate more and more steps, and ultimately automatically handle whole classes of common incidents. For tools that execute within the operating system of your EC2 instance, you should evaluate using AWS Systems Manager Run Command, which enables you to remotely and securely administer instances using an agent that you install on your Amazon EC2 instance operating system. It requires the AWS Systems Manager Agent (SSM Agent), which is installed by default on many Amazon Machine Images (AMIs). Be aware, though, that once an instance has been compromised, no responses from tools or agents running on it should be considered trustworthy.
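As a sketch of the Run Command approach described above, the following boto3 snippet gathers volatile data (logged-in users and open network connections) from a Linux instance through the SSM agent instead of an interactive shell. The instance ID is a placeholder, and the commands are only examples of what a collection runbook might run.

    import boto3

    ssm = boto3.client("ssm")

    response = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],     # placeholder instance ID
        DocumentName="AWS-RunShellScript",
        Comment="Initial evidence collection",
        Parameters={"commands": ["who -a", "ss -tunap", "last -n 20"]},
    )
    command_id = response["Command"]["CommandId"]

    # Poll until the invocation completes; output can also be sent to S3 or
    # CloudWatch Logs via the output options of send_command.
    result = ssm.get_command_invocation(
        CommandId=command_id,
        InstanceId="i-0123456789abcdef0",
    )
    print(result.get("Status"))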
Prepare forensic capabilities: Identify and prepare forensic investigation capabilities that are suitable, including external specialists, tools, and automation. Some of your incident response activities might include analyzing disk images, file systems, RAM dumps, or other artifacts that are involved in an incident. Build a customized forensic workstation that responders can use to mount copies of any affected data volumes. As forensic investigation techniques require specialist training, you might need to engage external specialists.

Simulate

Run game days: Game days, also known as simulations or exercises, are internal events that provide a structured opportunity to practice your incident management plans and procedures during a realistic scenario. Game days are fundamentally about being prepared and iteratively improving your response capabilities. Some of the reasons you might find value in performing game day activities include:

• Validating readiness
• Developing confidence through learning from simulations and training staff
• Following compliance or contractual obligations
• Generating artifacts for accreditation
• Being agile through incremental improvement
• Becoming faster and improving tools
• Refining communication and escalation
• Developing comfort with the rare and the unexpected

For these reasons, the value derived from participating in a security incident response simulation (SIRS) activity increases an organization's effectiveness during stressful events. Developing a SIRS activity that is both realistic and beneficial can be a difficult exercise. Although testing your procedures or automation that handles well-understood events has certain advantages, it is just as valuable to participate in creative SIRS activities to test yourself against the unexpected and continuously improve.

Iterate

Automate containment and recovery capability: Automate containment and recovery of an incident to reduce response times and organizational impact. Once you create and practice the processes and tools from your playbooks, you can deconstruct the logic into a code-based solution, which can be used as a tool by many responders to automate the response and remove variance or guesswork. This can speed up the lifecycle of a response. The next goal is to enable this code to be fully automated by being invoked by the alerts or events themselves, rather than by a human responder, to create an event-driven response. With an event-driven response system, a detective mechanism triggers a responsive mechanism to automatically remediate the event. You can use event-driven response capabilities to reduce the time-to-value between detective mechanisms and responsive mechanisms. To create this event-driven architecture, you can use AWS Lambda, which is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. For example, assume that you have an AWS account with the AWS CloudTrail service enabled. If AWS CloudTrail is ever disabled (through the cloudtrail:StopLogging API call), you can use Amazon EventBridge to monitor for the specific cloudtrail:StopLogging event and invoke an AWS Lambda function to call cloudtrail:StartLogging to restart logging.
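A minimal version of that Lambda function might look like the following, assuming the EventBridge rule matches the CloudTrail StopLogging management event and passes the event detail through unchanged. The function simply re-enables logging on whichever trail was stopped.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    def handler(event, context):
        # EventBridge places the CloudTrail management event under "detail";
        # the trail name or ARN is in the StopLogging request parameters.
        trail = event.get("detail", {}).get("requestParameters", {}).get("name")
        if not trail:
            return {"status": "ignored", "reason": "no trail name in event"}

        cloudtrail.start_logging(Name=trail)
        return {"status": "logging restarted", "trail": trail}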
Resources

Refer to the following resources to learn more about current AWS best practices for incident response.

Videos
• Prepare for & respond to security incidents in your AWS environment
• Automating Incident Response and Forensics
• DIY guide to runbooks, incident reports, and incident response

Documentation
• AWS Incident Response Guide
• AWS Step Functions
• Amazon EventBridge
• CloudEndure Disaster Recovery

Hands-on
• Lab: Incident Response with AWS Console and CLI
• Lab: Incident Response Playbook with Jupyter - AWS IAM
• Blog: Orchestrating a security incident response with AWS Step Functions

Conclusion

Security is an ongoing effort. When incidents occur, they should be treated as opportunities to improve the security of the architecture. Having strong identity controls, automating responses to security events, protecting infrastructure at multiple levels, and managing well-classified data with encryption provides defense in depth that every organization should implement. This effort is easier thanks to the programmatic functions and AWS features and services discussed in this paper. AWS strives to help you build and operate architectures that protect information systems and assets while delivering business value.

Contributors

The following individuals and organizations contributed to this document:
• Ben Potter, Principal Security Lead, Well-Architected, Amazon Web Services
• Bill Shinn, Senior Principal, Office of the CISO, Amazon Web Services
• Brigid Johnson, Senior Software Development Manager, AWS Identity, Amazon Web Services
• Byron Pogson, Senior Solution Architect, Amazon Web Services
• Darran Boyd, Principal Security Solutions Architect, Financial Services, Amazon Web Services
• Dave Walker, Principal Specialist Solutions Architect, Security and Compliance, Amazon Web Services
• Paul Hawkins, Senior Security Strategist, Amazon Web Services
• Sam Elmalak, Senior Technology Leader, Amazon Web Services

Further Reading

For additional help, please consult the following sources:
• AWS Well-Architected Framework Whitepaper

Document Revisions
• July 2020: Updated guidance on account, identity, and permissions management.
• April 2020: Updated to expand advice in every area; new best practices, services, and features.
• July 2018: Updates to reflect new AWS services and features, and updated references.
• May 2017: Updated System Security Configuration and Maintenance section to reflect new AWS services and features.
• November 2016: First publication.
General
Establishing_Enterprise_Architecture_on_AWS
Establishing Enterprise Architecture on AWS
March 2018

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Abstract
Introduction
Enterprise Architecture Tenets
Enterprise Architecture Domains
AWS Services that Support Enterprise Architecture Activities
Roles and Actors
Application Portfolio
Governance and Auditability
Change Management
Enterprise Architecture Repository
Conclusion
Contributors
Document Revisions

Abstract

This whitepaper outlines AWS practices and services that support enterprise architecture (EA) activities. It is written for IT leaders and enterprise architects in large organizations. Enterprise architecture guides organizations in the delivery of the target production landscape to realize their business vision in the cloud. There are many established enterprise architecture frameworks and methodologies. In this whitepaper, we focus on the AWS services and practices that you can use to deliver common enterprise architecture artifacts and tools and provide business benefit to your organization. This whitepaper uses terms and definitions that are familiar to The Open Group Architecture Framework (TOGAF) practitioners, but it is not restricted to TOGAF or any other EA framework.1

Introduction

A key challenge facing many organizations is demonstrating the business value of their IT assets. Enterprise architecture aims to define the target IT landscape that realizes the business vision and drives value. The key goals of enterprise architecture are to:

• Analyze and evolve the organization's business vision and strategy
• Describe the business vision and strategy in a common manner (for example, business capabilities, functions, and processes)
• Provide tools, frameworks, and specifications to support governance in all the architectural practices
• Enable traceability across the IT landscape
• Define the programs and architectures needed to realize the target IT state

A key value proposition of a mature enterprise architecture practice is being able to do better "What if?" or impact analysis. Being able to identify which applications realize which business capabilities lets you make informed decisions about delivering your organization's business vision. For example:

• "What is the impact on our IT landscape if we decide to outsource a certain business service?"
• "What business capabilities and processes are impacted if we retire a certain IT system?"
• "What is the cost of realizing this aspect of our business vision?"

This whitepaper will help you create end-to-end traceability
of IT assets, which is one of the main goals of enterprise architecture teams. Traceability, audit, and capture of "current state" are a perpetual challenge in a world of vendor-specific hardware and legacy systems. Often it is simply not possible for enterprises to catalog all of their assets. In this scenario, they cannot determine the business value of their IT landscape. Moving to the cloud gives enterprises an opportunity to achieve traceability of their assets in the cloud.

Enterprise Architecture Tenets

Enterprise architecture tenets are general rules and guidelines that inform and support the way in which an organization sets about fulfilling its mission. They are intended to be enduring and seldom amended. You should use tenets to guide your architecture design and cloud adoption. Tenets can be used through the entire lifecycle of an application in your IT landscape, from conception to delivery, and to support ongoing maintenance and continuous releases. Tenets are used in application design and should guide application governance and architectural reviews. We highly recommend creating cloud-based tenets to guide you in creating applications and workloads that will help you realize and govern your enterprise's target landscape and business vision. Examples of tenets might be:

"Maximize Cost Benefit for the Enterprise"

A cost-centric tenet encourages architects, application teams, IT stakeholders, and business owners to always consider the cost effectiveness of their workloads. It encourages your enterprise to focus on projects that differentiate the business (value), not the infrastructure. Your enterprise should examine capital expenditure and operational expenditure for each workload. This results in customer-centric solutions that are most cost effective. These savings benefit both your organization and your customers.

"Business Continuity"

A business continuity tenet informs and drives the non-functional requirements for all current and future workloads in your enterprise. The geographic footprint and wide range of AWS services support the realization of this tenet. The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. Each AWS Region is a separate geographic area. Each Region has multiple physically separated and isolated locations known as Availability Zones. Availability Zones are connected with low-latency, high-throughput, and highly redundant networking. This tenet guides the architecture and application teams to leverage the reliability and availability of the AWS Cloud.

"Agility and Flexibility"

This tenet enforces the need for all applications to be "future proof." In a cloud computing environment, new IT resources are only ever a click away, which means you reduce the time it takes to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for your organization, since the cost and time it takes to experiment and develop is significantly lower. Being flexible and agile also means that your enterprise responds rapidly to business requirements as customer behaviors evolve. The AWS Cloud enables teams to implement continuous integration and delivery practices across all development stages. DevOps, DevSecOps, and methodologies such as Scrum become easier to set up. Teams can quickly compare and evaluate architectures and practices (for example,
microservices and serverless) to determine what solution best fits enterprise needs.

"Cloud First Strategy"

Such a tenet is key to an organization that wishes to migrate to the cloud. It prescribes that new applications should be in the cloud. This governance prohibits the deployment of new applications on non-approved infrastructure. Architectural and review boards can closely examine why a workload should be granted an exception and not deployed in the cloud.

"All Users, Services, and Applications Belong in an Organizational Unit"

An enterprise may use this tenet to ensure that its target landscape reflects the enterprise's organizational structure. It mandates that all cloud activities belong in an AWS organizational unit, which lets your enterprise govern the business
• Data architecture domain – describes the structure of an organization's logical and physical data assets and data management resources Knowledge about your customers from data analytics lets you improve and continuously evolve business processes • Technology architecture domain – describes the software and hardware needed to implement the business data and application services Each of these domains have well known arti facts diagrams and practices Enterprise architects focus on each domain and how they relate to one another to deliver an organization's strategy In addition enterprise architecture tries to answer WHERE and WHY as well: • WHERE are assets located? • WHY is something being changed? Figure 1 shows how these domains fit together: ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 6 Figure 1: The four domains of an enterprise architecture AWS Services that Support Enterprise Architecture Activities Several AWS services can support your enterprise architecture activities : • AWS Organizations • AWS Identity & Access Management (IAM) • AWS Service Catalog • AWS CloudTrail • Amazon CloudWatch • AWS Config • AWS Tagging and Resource Grouping Figure 2 shows how these services support your enterprise architecture: ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 7 Figure 2: AWS services that support an enterprise architecture The following sections discuss many of the enterprise architecture activities and AWS services shown in Figure 2 Roles and Actors In the b usiness architecture domain there are actors and roles An actor can be a person organization or system that has a role that initiates or interacts w ith activities Actors belong to an enterprise and in combination with the role perform the business function Understanding the actors in your organization enables you to create a definitive listing of all participants that interact with IT including users and owners of IT systems Understanding actortorole relationships is necessary to enable organizational change management and organiz ational transformation The actors and roles of your enterprise can be modelled on two levels Typically an organization ha s a corporate directory (eg Active Directory) that reflects its actors and roles On a different level you can enforce these comp onents with AWS Identity and Access Management (IAM) 4 IAM achieves the actorrole relationship while complementing AWS Organizations In IAM an actor is known as a user An AWS account within an ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 8 OU defines the users for that account and the corresponding roles that user s can adopt With IAM you can securely control access to AWS services and resources for your users You can also create and manage AWS users and groups and use permissions to allow and deny their access to AWS resources SCPs put bounds around the permissions that IAM policies can grant to entities in an account such as IAM users and roles The AWS account inherits the SCPs defined in or inherited by the OU Then within the AWS account you can write even more granular policies to define how and what the user or role can access You can apply t hese policies at the user or group level In this manner you can create very granular permissions for the actors and roles of your organization Key business relationships between OUs actors (user s) and roles can be reflected in IAM Application Portfolio Application portfolio management is an important part of the 
application architecture domain in an e nterprise architecture It covers managing an organization’s collection of software applications and software based services that are used to attain its business goals or objectives An agreed application portfolio allows a standard set of applications to be used in an organization You can use AWS Service Catalog to manage your enterprise’s application portfolio in the cloud 5 and centrally manage commonly deployed appli cations It helps you achieve consistent governance and meet your compliance requirements AWS Service Catalog ensures compliance with corporate standards by providing a single location where organizations can centrally manage catalogs of their application s With AWS Service Catalog you can control which applications and versions are available the configuration of the available services and permission access by an individual group department or cost center AWS Service Catalog lets you : • Define your ow n application catalog End users of your organization can quickly discover and deploy applications using a self service portal ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 9 • Centrally manage lifecycle of applications You can add new application versions as necessary as well as control the use of applications by specifying constraints such as the AWS Region in which a product can be launched • Grant a ccess at a granular level – You can g rant a user access to a portfolio to let that user browse and launch the products • Constrain how your AWS resources are deployed You can restrict the ways that specific AWS resources can be deployed for a product You can use constraints to apply limits to products for governance or cost control For example you can let your marketing users create c ampaign websites but restrict their access to provision the underlying databases Governance and Auditability AWS CloudTrail is a service that enables governance compliance operational auditing and risk auditing of your AWS account 6 With CloudTrail yo u can log every API call made This enables compliance with governance bodies internal and external to your organization CloudTrail gives your organization transparency across its entire AWS landscape CloudTrail provides event history of your AWS account activity including actions taken through the AWS Management Console AWS SDKs command line tools and other AWS services This event history simplifies security analysis resource change tracking and troubleshooting Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications you run on AWS 7 You can use CloudWatch to collect and track metrics collect and monitor log file s set alarms and automatically react to changes in your AWS resources CloudWatch monitors and logs the behavior of your application landscape CloudWatch can also trigger events based on th e behavior of your application While CloudTrail tracks usage of AWS CloudWatch monitors your application landscape I n combination these two services help with architecture g overnance and audit functio ns ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 10 Change Management Enterprise architect s manage transition architectures Transition architectures are the increm ental releases in production that bring the current state to the target state architecture The goal of transition architecture s is to ensure that the evolving architecture continue s to deliver the target business strategy Therefore you need to manage changes to 
the architecture in a cohesive way AWS Config is a service that lets you assess audit and evaluate the configurations of your AWS resour ces 8 AWS Config continuously monitors and records your AWS resource configurations and lets you automate the evaluation of recorded configurations against desired configurations With AWS Config you can review changes in configurations and determine you r overall compliance against the configurations specified in your internal guidelines This enables you to simplify compliance auditing security analysis change management and operational troubleshooting Enterprise Architecture Repository An enterprise architecture repository is a collection of artifacts that describes an organization’s current and target IT landscape The goal of the enterprise architecture repository is to reflect the organi zation ’s inventory of technology data application s and bus iness artifacts and to show the relationships between these components Traditionally in a non cloud environment organi zations were restricted to choose expensive offtheshelf products to meet their enterprise architecture repository needs You can avoid these expenses with AWS services AWS Tagging and Resource Groups let you organize your AWS landscape by applying tags at different lev els of granularity 9 Tags allow you to label collect and organize resources and components within services The Tag Editor lets you manage tags across services and AWS Regions 10 Using this approach you can globally manage all the application business data and technology components of you r target landscape A Resource Group is a collection of resources that share one or more tags 11 It can be used to create an enterprise architecture “view” of your IT landscape ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 11 consolidating AWS resources into a per project ( that is the on going programs that realize your targe t landscape) per entity ( that is capabilities roles processes) and perdomain ( that is Business Application Data Technology) view You can use AWS Config Tagging and Resource Groups to see exactly what cloud assets your company is using at any moment These services make i t easier to detect when a rogue server or shadow application appear in your target production landscape You may wish to continue using a tradit ional repository tool perhaps due to existing licensing commitments or legacy processes In this scenario the enterprise repository can run natively on a n EC2 instance and be maintained as before 12 Conclusion The role of an enterprise architect is to enable the organization to be innovative and respond rapidly to changing customer behavior The enterprise architect holds the long term business vision of the organization and is responsible for the journey it has to take to reach this target landscape They support an organization to achieve their objectives by successfully evolving across all domains; Business Application Technology and Data This is no different when moving t o the cloud The Enterprise A rchitect role is key in successful cloud adoption Enterprise architects can use AWS services as architectural building blocks to guide the technology decisions of the organization to realize the enterprise’s business vision It has been challenging for enterprise architects to measure their goals and demonstrate their value with on premises architectures With AWS Cloud adoption enterprise architects can use AWS services to create traceability and relationships across the enterprise 
architecture domains allowing the architect to correctly track how their organization is changing and improving AWS lets the enterprise architect address end toend traceability operational modeling and governance It is easier to gather data o n transition architectures in the cloud as the organization moves to its target state ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 12 The wide breadth of AWS services and agility means i t is also easier for architects and application teams to respond rapidly when architectural deviations are identified and changes need to take place Using AWS services you can more easily execute and realize the value of enterprise architecture practices Contributors The following individuals and organizations contributed t o this document: • Margo Cronin Solutions Architect AWS • Nemanja Kostic Solutions Architect AWS Document Revisions Date Description April 2020 Removed AWS Organizations section March 2018 First publication 1 http://wwwopengrouporg/subjectareas/enterprise/togaf 2 https://awsamazoncom/kms/ 3 https://awsamazoncom/cloudhsm/ 4 https://awsamazoncom/iam/ 5 http://docsawsamazoncom/servicecatalog/latest/adminguide/introduction html 6 https://awsamazoncom/cloudtrail/ 7 https://awsamazoncom/cloudwatch/ Notes ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 13 8 http://docsawsamazoncom/config/latest/developerguide/WhatIsConfight ml 9 http://docsawsamazoncom/awsconsolehelpdocs/latest/gsg/what are resource groupshtml 10 http://docsawsamazoncom/awsconsolehelpdocs/latest/gsg/tag editorhtml 11 http://docsawsamazoncom/awsconsolehelpdocs/latest/gsg/what are resource groupshtml 12 https:// awsamazoncom/ec2/
General
Migrating_Microsoft_Azure_SQL_Databases_to_Amazon_Aurora
Migrating Microsoft Azure SQL Databases to Amazon Aurora Using SQL Server Integration Services and Amazon S3
August 2017

This paper has been archived. For the latest technical content, see: Migrate Microsoft Azure SQL Database to Amazon Aurora.

Notices

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved. This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Abstract
Introduction
Why Migrate to Amazon Aurora?
Architecture Overview
Migration Costs
Preparing for Migration to Amazon Aurora
Create a VPC
Create a Security Group and IAM Role
Create an Amazon S3 Bucket
Launch an Amazon RDS for SQL Server DB Instance
Launch an Amazon Aurora DB Cluster
Launch an EC2 Migration Server
Schema Conversion
AWS Schema Conversion Tool Wizard
Mapping Rules
Data Migration
Set Up the Repository Database
Build an SSIS Migration Package
After the Migration
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

As companies migrate their workloads to the cloud, there are many opportunities to increase database performance, reduce licensing costs, and decrease administrative overhead. Minimizing downtime is a common challenge during database migrations, especially for multi-tenant databases with multiple schemas. In this whitepaper, we describe how to migrate multi-tenant Microsoft Azure SQL databases to Amazon Aurora using a combination of Microsoft SQL Server Integration Services (SSIS) and Amazon Simple Storage Service (Amazon S3), which can scale to thousands of databases simultaneously while keeping downtime to a minimum when switching to the new databases. The target audience for this paper includes:

• Database and system administrators performing migrations from Azure SQL Databases into Amazon Aurora where AWS managed migration tools can't currently be used
• Database developers and administrators with SSIS experience
• IT managers who want to learn about migrating databases and applications to AWS

Introduction

Migrations of multi-tenant databases are among the most complex and time-consuming tasks handled by database administrators (DBAs). Although managed migration services such as AWS Database Migration Service (AWS DMS)1 make this task easier, some multi-tenant database migrations require a custom approach. For example, a custom solution might be required in cases where the source database is hosted by a third-party provider who limits certain functionality of the database migration engine used by AWS DMS. This whitepaper focuses on the mass migration of a multi-tenant Microsoft Azure SQL Database to Amazon Aurora. Amazon Aurora is a fully managed, MySQL-compatible relational
database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.2 In the scenario covered in this whitepaper, multi-tenancy is defined as the deployment of numerous databases that have the same schema.3 An example of multi-tenancy would be a software-as-a-service (SaaS) provider who deploys a database for each customer. We discuss how to use the AWS Schema Conversion Tool (AWS SCT)4 to convert your existing SQL Server schema to Amazon Aurora. We also show you how to build a SQL Server Integration Services (SSIS) package that you can use to automate the simultaneous migration of multiple databases.5 The method described in this whitepaper can also be used to migrate to other types of databases on Amazon Web Services (AWS), including Amazon Redshift, a fully managed data warehouse.6

Why Migrate to Amazon Aurora?

Amazon Aurora is built for mission-critical workloads and is highly available by default. An Aurora database cluster spans multiple Availability Zones in an AWS Region, providing out-of-the-box durability and fault tolerance for your data across physical data centers. An Availability Zone is composed of one or more highly available data centers operated by Amazon.7 Availability Zones are isolated from each other and are connected through low-latency links. Each segment of your database volume is replicated six times across these Availability Zones.

Aurora cluster volumes automatically grow as the amount of data in your database increases, with no performance or availability impact, so there is no need to estimate and provision a large amount of database storage ahead of time. An Aurora cluster volume can grow to a maximum size of 64 terabytes (TB). You are only charged for the space that you use in an Aurora cluster volume.

Aurora's automated backup capability supports point-in-time recovery of your data. This enables you to restore your database to any second during your retention period, up to the last five minutes. Automated backups are stored in Amazon Simple Storage Service (Amazon S3), which is designed for 99.999999999% durability. Amazon Aurora backups are automatic, incremental, and continuous, and have no impact on database performance. For a complete list of Aurora features, see the Amazon Aurora product page. Given the rich feature set and cost effectiveness of Amazon Aurora, it is increasingly viewed as the go-to database for mission-critical applications.

Architecture Overview

A diagram of the architecture you can use for migrating a Microsoft Azure SQL database to Amazon Aurora is shown in Figure 1.

Figure 1: Diagram of resources used in a migration solution

The architecture components are explained in more detail as follows.

Amazon EC2 Migration Server: The migration server is an Amazon Elastic Compute Cloud (EC2) instance that runs all database migration tasks, including:

• Installing necessary applications
• Downloading and restoring the source database for schema conversion purposes
• Converting the schema between source and destination databases using AWS SCT
• Developing and testing the SSIS data migration package

With a large EC2 instance type, your migration server can run thousands of migration tasks simultaneously. If your databases are read and write, you can choose between two migration approaches:
1. You can disconnect all clients and put your databases into single-connection mode. In this scenario, the databases won't be accessible until the migration is finished. Database downtime is measured in migration time: the quicker you migrate your databases, the shorter the downtime.
2. You can keep your database open for write connections. In this scenario, you will have to reconcile records updated after the migration.

If your databases are read-only, you can keep the connection to them during the migration process without any impact on the migration itself.

Amazon RDS for SQL Server DB Instance: Connection strings to the Azure SQL database and the Amazon Aurora database need to be stored in a small repository database. For this purpose, you'll use an Amazon RDS for SQL Server database (DB) instance. Amazon Relational Database Service (Amazon RDS) is a cloud service that makes it easier to set up, operate, and scale a relational database in the cloud.8 It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. Note that the repository database is a temporary resource needed only during the migration. It can be terminated after the migration.

Amazon Aurora DB Cluster: An Amazon Aurora DB cluster is made up of instances that are compatible with MySQL and a cluster volume that represents data copied across three Availability Zones as a single virtual volume. There are two types of instances in a DB cluster: a primary instance (that is, your destination database) and Aurora Replicas. The primary instance performs all of the data modifications to the DB cluster and also supports read workloads. Each DB cluster has one primary instance. An Aurora Replica supports only read workloads. Each DB instance can have up to 15 Aurora Replicas. You can connect to any instance in the DB cluster using an endpoint address.

Amazon S3 Bucket: Multiple batches of your data are loaded in parallel, instead of record by record, into temporary storage in an S3 bucket, which improves the performance of the migration.9 After saving your data to an S3 bucket, in the last step of building an SSIS package (see the Migrate Multiple Azure SQL Databases section) you'll execute an Amazon Aurora SQL command to import the data from the S3 bucket into the database.

Note: You will need to create an Amazon S3 bucket in the same AWS Region where you launched the Amazon Aurora DB cluster.
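The Aurora import command referred to above is the LOAD DATA FROM S3 statement. As a sketch of what that final step might look like outside of SSIS, the following Python snippet (using the third-party PyMySQL driver) runs the statement against the Aurora cluster endpoint. The endpoint, credentials, bucket, file, and table names are all placeholders, and the cluster must already have an IAM role that allows it to read from the bucket.

    import pymysql

    conn = pymysql.connect(
        host="my-aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
        user="admin",
        password="********",
        database="tenant_db_001",
    )

    load_sql = """
        LOAD DATA FROM S3 's3://my-migration-bucket/tenant_db_001/customers.csv'
        INTO TABLE customers
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\\n'
        IGNORE 1 LINES;
    """

    with conn.cursor() as cur:
        cur.execute(load_sql)
    conn.commit()
    conn.close()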
Amazon VPC: All migration resources are created inside a virtual private cloud (VPC). Amazon VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.10 You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. The topology of the VPC is as follows:

• Two private subnets to launch the Amazon RDS DB instance. Each subnet must reside entirely within one Availability Zone and cannot span zones.11
• At least two public subnets to launch your migration server and Amazon Aurora DB cluster. Each subnet must be in a different Availability Zone.

Migration Costs

These factors have an impact on the migration cost:

• Size of the migrated database (S3 storage)
• Size of the Amazon RDS instance
• Size of the Amazon Aurora cluster
• Size of the migration server

Here are a few suggestions to reduce the migration cost:

• Use Amazon S3 Reduced Redundancy Storage (RRS)
• For the repository database, use an Amazon RDS SQL Server Express Edition db.t2.micro instance
• For the migration server, start with the t2.medium instance type and scale up if necessary

Preparing for Migration to Amazon Aurora

This section describes how to set up and configure your AWS environment to prepare for migrating your Azure SQL database to Amazon Aurora. AWS CloudFormation scripts are also provided to help you automate deployment of your AWS resources.12

Note: You must complete these steps before moving on to the schema conversion and migration tasks.

Create a VPC

This section describes two ways you can create a VPC: manually or from a CloudFormation template.

Create a VPC (Manual)

For step-by-step guidance on creating a VPC using the Amazon VPC wizard in the Amazon VPC console, see the Amazon VPC Getting Started Guide.13 For step-by-step guidance on creating a VPC for use with Amazon Aurora, see the Amazon RDS User Guide.14

Create a VPC (CloudFormation Template)

Alternatively, you can use this CloudFormation template to quickly set up a VPC with two public and two private subnets, including a network address translation (NAT) gateway. To create a VPC using the CloudFormation template, follow these steps:

1. In the AWS Management Console, choose CloudFormation, and then choose Create New Stack.
2. Select Specify an Amazon S3 template URL, and then paste the CloudFormation template URL: http://rh-migration-blog.s3.amazonaws.com/CF-VPC.json
3. Choose Next.
4. Enter the Stack name, e.g., VPC. (Note the stack name, as you will use it later.)
5. Modify the subnet CIDR blocks, or leave the default subnets.
6. Choose Next.
7. Under Options, leave all the default values, and then choose Next.
8. Under Review, choose Create.
9. Wait for the status to change to CREATE_COMPLETE.

Optional: To improve the performance of uploading data files to the S3 bucket from within AWS, create an S3 endpoint in your VPC. For more information, visit: https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
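If you prefer to script the stack creation instead of using the console steps above, the same template can be launched with boto3. The sketch below assumes the template URL from step 2 and the stack name used in this paper.

    import boto3

    cfn = boto3.client("cloudformation")

    cfn.create_stack(
        StackName="VPC",
        TemplateURL="http://rh-migration-blog.s3.amazonaws.com/CF-VPC.json",
        # Parameters can be added here to override the default subnet CIDR blocks.
    )

    # Wait until the stack reaches CREATE_COMPLETE before moving on.
    cfn.get_waiter("stack_create_complete").wait(StackName="VPC")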
to allow migration server access to the S3 bucket This role has to be associate d with the EC2 migration instance during the launch 16 • Create an IAM role and associate it with an Amazon Aurora DB cluster to allow the DB c luster access to the S3 bucket17 Create a Security Group and IAM Role (CloudFormation Template ) Alternatively you can create both roles and the security group w ith all required inbound rules using a CloudFormation template 1 In the AWS Management Console choose CloudFormation and then choose Create New Stack 2 Select Specify an Amazon S3 template URL and then paste the CloudFormation template URL: http://rh migration blogs3amazonawscom/CF SGjson 3 Choose Next 4 Enter the Stack name eg SG (Note the stack name as you will use it later ) 5 Enter the Network Stack Name which is the name of the CloudFormation stack you provided earlier in this whitepaper in step 4 under Creat e a VPC (eg VPC) 6 Choose Next 7 Under Options leave all the default values and then choose Next ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 7 8 Under Review check the box : 9 Choose Create Create an Amazon S3 Bucket You can either use an existing S3 bucket or create a new one by follow ing the steps provided in Create a Bucket18in the Amazon S3 documentation Launch an Amazon RDS for SQL Server DB Instance This section explains how to launch an Amazon RDS for SQL Server DB instance Note that the Amazon RDS DB instance is a temporary resource that’s only needed during the migration It should be terminated after the migration to reduce the AWS cost Launch an Amazon RDS for SQL Server DB Instance ( Manual ) To launch a new Amazon RDS for SQL Server DB instance for your repository database follow these steps 1 In the AWS Management Console choose RDS 2 In the navigation pane choose Instances 3 Choose Launch DB Instance 4 Select Microsoft SQL Server and then select SQL Server Express 5 Set DB Instance Class to dbt2micro 6 Set Time Zone to your local time zone 7 Set DB Instance Identifier to repo 8 Set Master Username and Master Password 9 Leave all the other option s as their default values and choose Next Step 10 Select the VPC create d in the previous step If you create d a VPC using the CloudFormation template then the name of the VPC should be “Migration VPC” 11 Select the correct VPC S ecurity Group If you created a security group from the CloudFormation template then the name should be “SGDBSecurityGroup XXXXXXX ” where XXXXXX is a string that includes random letters and numbers 12 Leave all the other options as their default values and choose Launch DB Instance ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 8 Launch an Amazon RDS for SQL Server DB Instance (CloudFormation Template ) As an alternative method to manually launching an Amazon RDS for SQL DB instance you can use this CloudFormation template 1 In the AWS Management Console choose CloudFormation and then choose Create New Stack 2 Select Specify an Amazon S3 template URL and then paste the CloudFormation template URL: http://rh migration blogs3amazonawscom/CF RDSSQLjson 3 Enter the Stack name eg SQL 4 Enter the following parameters: o DBPassword and DBUser o NetworkStack Name which is the name of the CloudFormation stack you provided in step 4 under Creating a VPC (eg VPC) o SecurityGroupStack Name which is the name of the CloudFormation stack you provided earlier in this whitepaper in step 4 under Create an Amazon EC2 Security Group (eg SG) 5 
Choose Next 6 Under Options leave all the default values and then choose Next 7 Choose Create 8 Wait for the status to change to CREATE_COMPLETE 9 Go to Output s and note the value of the SQLServerAddress key You will need it later Launch an Amazon Aurora DB Cluster This section descri bes two ways you can launch an Amazon Aurora DB cluster: manually or from a CloudFormation template Launch an Amazon Aurora DB Cluster ( Manual ) For step bystep guidance for launch ing and configuring an Amazon Aurora DB cluster for your destination database see the Amazon RDS User Guide 19 In our tests we migrated 10 databases simultaneously For this purpose we used the dbr32xla rge DB instance type Depend ing on how many databases you are planning ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 9 to migrate we suggest that you use the biggest DB instance type for the migration and then scale down to one that is more suitable for daily (production) workload s Read this blog to l earn more about how to scale Amazon RDS DB instance s: https://awsamazoncom/blogs/database/scaling your amazon rdsinstance vertically andhorizontally/ Read Managing an Amazon Aurora DB Cluster in the Amazon RDS User Guide to learn more about choosing the right DB instance type To reduce migration time we suggest that you launch your Amazon Aurora DB c luster in a single Availability Zone and then perform a Multi AZ deployment later if required for production workload s When Multi AZ is selec ted Amazon Aurora will create read replicas in different Availability Zones In this scenario when the primary Amazon Aurora DB instance becomes unavailable one of the existing replica s will be promote d to master status in a matter of seconds In a case where Multi AZ is disabled launch ing the new primary instance can take up to 5 minutes Finally load your data to the Aurora DB instance from the S3 bucket To allow Amazon Aurora access to the S3 bucket you need to grant the necessary permission You can do this by follow ing the steps described in the Allowing Amazon Aurora to Access Amazon S3 Resources article 20 Launch an Amazon Aurora DB Cluster ( CloudFormation Template ) As an alternative method to launching an Amazon Aurora DB cluster instead of launching manually you can use this Cloud Formation template 1 In the AWS Management Console choose CloudFormation and then choose Create New Stack 2 Select Specify an Amazon S3 template URL and then paste the CloudFormation template URL: http://rh migration blogs3amazonawscom/CF RDSAurorajson 3 Enter the Stack name eg Aurora 4 Enter the following parameters: o DBPassword and DBUser o NetworkStackName which is the name of the CloudFormation stack you provided in step 4 under Creating a VPC (eg VPC) o SecurityGroupStackName which is the name of the CloudFormation stack you provided earlier in this whitepaper in step 4 under Create an Amazon EC2 Security Group (eg SG) 5 Choose Next 6 Under Options leave all the default values and then choose Next 7 Choose Create ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 10 8 Wait for the status to change to CREATE_COMPLETE 9 Go to Output s and note the value of the AuroraClusterAddress key You will need it later 10 After you launch the cluster assign an IAM role to the cluster To do this follow steps 1 6 in this topic in the Amazon RDS documentation: Authorizing Amazon Aurora to Access Other AWS Services on Your Behalf 21 Note: The name of the role created by the 
CloudFormation template is RDSAccessS3 Launch an EC2 Migration Server This section describes two ways to launch an EC2 Migration Server: manually and using a CloudFormation template Launch a n EC2 Migration Server (Manual ) To launch the EC2 Migration instance please follow th e documentation 22 Choose these options when launch ing a new EC2 instance: • Amazon Machine Image (AMI) : Microsoft Windows Server 2012 R2 Base • Instance Type : t2large • VPC: select the one you create d in “Create a VPC” • IAM Role : select the EC2 role you created in “ Create a Security Group and IAM Role ” • Add Storage : add two Amazon Elastic Block Store ( EBS) volumes o The f irst volume should be large enough to store all data from the Azure SQL database o The s econd volume should be 10 GB in size Under the snapshot column depend ing on the Region where you are launching the Migration Server enter: Region Snapshot ID useast1 snap 0882e0679e0edbc9d useast2 snap 0f8e882e50e145512 uswest 1 snap 0be3d0aa0c7fd6058 uswest 2 snap 044e09795b0af042d cacentral 1 snap 034a9e106a335e83e euwest 1 snap 0c4f59af047f8c680 ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 11 Region Snapshot ID eucentral 1 snap 0b96dab9f8716b8a3 euwest 2 snap 0da47a13ca2333917 apsoutheast 1 snap 09e64c82ad0252691 apsoutheast 2 snap 0116831d4532fa8f0 apnortheast 1 snap 06efa146310714fda apnortheast 2 snap 0dc5415e1c5c58021 apsouth 1 snap 063223b238340215d saeast1 snap 002492e97e9a54b8b o The second volume will contain all the software necessary to accomplish the migration tasks • Security Group : select the security group you created in “ Create a Security Group and IAM Role ” Launch a n EC2 Migration Server (CloudFormation Template ) As an alternative method to launch ing an EC2 Migration Server instead of creating all resources manually you can use this CloudFormation template Server Configuration After launch ing the server either manually or from a CloudFormation template follow these steps 1 Retrieve your Windows Administrator user password The steps for doing this can be found in the article How do I retrieve my Windows administrator password after launching an instance?23 on the AWS Premium Support Center 2 Log in to the Migration Server using the RDP client If you used the CloudFormation template you can get the IP address of the Migration Server from the Output tab under IPAddress key 3 Afte r log ging in open File Explorer and check whether you see the DBTools volume If you see the DBTools volume go to step 5 ; otherwise follow step 4 4 If you do not see DBTools follow these steps: a Run the diskmgmtmsc command to open Disk Management b Under the Disk Management window scroll down until you find a disk that is offline c Right click on the disk and from the context menu se lect Online (as shown in the following screen shot ) ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 12 5 Open the command line and from the DBTools volume run Installbat This will install all the necessary applications All applications to be installed (including the link to download) are listed in Appl ication List as shown in the next screen shot Wait until all the applications are installed This might take up to 30 minutes 6 Open CreateRepositoryDBbat in Notepad and edit the following values: o serverName – This is the address of the SQL Server that you set under “Launch an Amazon RDS for SQL Server DB Instance” If you used a CloudFormation template to launch Amazon RDS you can 
find this value on the CloudFormation > Output tab under SQLServerAddress key o userName – This is the SQL username o userPass – This is the SQL user password 7 Save the file and execute it This script will create a repository database including the table and stored procedure on Amazon RDS for SQL Server DB instance that was created in the previous section Note: The external IP address associate d with Migration Server has to be added to Azure SQL database firewall Applications List Here is a list of the applications install ed on the Migration Server by the script described in Step 5 in the previous procedure : ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 13 • SQL Server – https://wwwmicrosoftcom/en sa/sql server/sql server downloads with minimum selected services • SQL Server Management Studio – https://docsmicrosoftcom/en us/sql/ssms/download sqlserver management studio ssms • SQL Server Data Tools – https://docsmicrosoftcom/en us/sql/ssdt/download sqlserver data tools ssdt • AWS CLI (64bit) – https://awsamazoncom/cli/ • MySQL ODB C Driver (32 bit) – https://devmysqlcom/downloads/connector/odbc/ • Azure PowerShell – https://azuremicrosoftcom/en us/downloads/ • AWS Schema Conversion Tool – http://docsawsamazoncom/SchemaConversionTool/latest/userguide/CHAP_ SchemaConversionToolInstallingh tml • Microsoft JDBC Driver 60 for SQL Server – https://wwwmicrosoftcom/en us/download/detailsaspx?displaylang=en&id=11774 • MySQL JDBC Driver – https://wwwmysqlcom/products/connector/ • Optional: MySQL Workbench – https://devmysqlcom/downloads/workbench/ ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 14 Schema Conversion Before running the AWS Schema Conversion Tool the Azure SQL database schema needs to be restored on the Migration Server This can be done either by recreating the database from a script/backup or by restoring it from a BACPAC file For information on how to export an Azure SQL database to a BACPAC file see this article on the Microsoft Azure website24 Alternatively you can execute a PowerShell script to export the Azu re SQL database to a BACPAC file as follows : 1 Use Remote Desktop Protocol ( RDP) to connect to the Migration Server 2 Locate the AzureExportps1 PowerShell script on the DBTools volume and open it in Notepad for editing 3 Modif y the values at the top of the sc ript When you are done save the changes you made 4 Open PowerShell and execute the script by entering e:\ AzureExportps1 5 When the script has executed you should see the xxxxbacpacfile in your local folder 6 To restore the database from bacpac file open the SQL Server Management Studio connect to the Migration Server (wh ich is the local server) right click on the database name and from the menu select Import Data tier Application Then follow the wizard For more information on how to import a PACPAC file to create a new user database see: https://docsmicroso ftcom/en us/sql/relational databases/data tier applications/import abacpac filetocreate anew user database AWS Schema Conversion Tool Wizard Before migrating the SQL Server database to Amazon Aurora you have to convert the existing SQL schema to the new format supported by Amazon Aurora The AWS Schema Conversion Tool helps convert the source database schema and a majority of the custom code to a format that is compatible with the target database This is a desktop application that we installed on the desktop of the Migration Server The custom code includes views stored 
procedures and functions Any code that the tool cannot automatically convert is clearly marked so that you can convert it yourself To start with AWS SCT follow these steps: 1 After restoring the database open the AWS Schema Conversion Tool 2 Close the AWS SCT Wizard if it opens automatically 3 From Settings select Global Settings ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 15 4 Under Drivers select the path s to the Microsoft Sql Server and MySql drivers You can find both drivers on the DBTools volume in following locations: SQL Server : E:\Drivers \Microsoft JDBC Driver 60 for SQL Server \sqljdbc_60 \enu\jre7\sqljdbc41jar MySQL : E:\Drivers \mysql connector java5141 \ mysql connector java5141 bin 5 Choose OK 6 From File select New Project Wizard 7 In Step 1: Select Source for Source Database Engine select Microsoft SQL Server 8 Set the following c onnection parameters to the EC2 Migration SQL Server (local server): o Server name : the name of the EC2 Migration Server If you didn’t chang e it it will be something like : WIN ITKVVM7QQ08 o Server port : 1433 o User name : sa o Password : sa password – if you inst alled everything from the Installbat script the password will be Password1 9 Choose Test Connection 10 If the connection is successful choose Next Otherwise verify the connection parameters 11 In Step 2: Select Schema select the database that was restored from the bacpac file and choose Next 12 In Step 3: Run Database Migration Assessment choose Next 13 In Step 4: Select Target set the following parameters : o Target Database Engine : Amazon Aurora (MySQL compatible) o Server name : The Amazon Aurora Cluster Endpoint If you launched the Amazon Aurora DB cluster from the CloudFormation template you can find the cluster endpoint on the CloudFormation output tab under AuroraConnection va lue o Server port : 3306 o User name : The Aurora master user name o Password : The Aurora master password 14 Choose Test Connection ArchivedAmazon Web Services – Migrating Microsoft Azure SQL Databases to Amazon Aurora Page 16 15 If the connection test is successful choose Finish Otherwise check the connection parameters Mapping Rules In some cases you might need to set up rules that change the data type of the columns move objects from one schema to another and change the names of objects For example if you have a set of tables in your source schema named test_TABLE_NAME you can set up a rule that changes the prefix test_ to the prefix demo_ in the target schema To add mapping rules perform the following steps : 1 From Actions menu of AWS SCT choose Convert Schema 2 The converted schema appears in the right hand side of AWS SCT The schema name will be in the following format: {SQL Server database name}_{database schema} For example tc_dbo 3 To rename the output schema from Settings choose Mapping Rules 4 Choose Add new rule to create a rule for renaming the database 5 Choose Edit rule 6 From the For list select database For Actions select rename and then type a new database name 7 Choose Add new rule to create a rule for renaming the database schema 8 From the For list select schema For Actions select rename and then type a new schema name 9 Choose Save All and close the window 10 Run Convert Schema The schema should now be updated with the new settings In this example the new schema name is TimeCard_Customer1 By right clicking on the new schema name you can eithe r save t he schema as an SQL script by selecting Save as SQL or apply it directly to the 
Amazon Aurora database by selecting Apply to database. Depending on the complexity of the SQL Server schema, the new schema might not be optimal or might not correctly convert all objects.
Note: As a rule of thumb, you should always review the new schema and make any necessary adjustments and optimizations.
If you have a small number of databases on Azure SQL (~10 or fewer), you can apply the schema for each database by modifying the rule for the schema name, running Convert Schema, and then applying it to the destination database. If you are hosting hundreds or thousands of databases, a more efficient way to apply the new schema is to save it as an SQL script, and then create a script using Bash (Linux) or PowerShell (Windows) that reads an exported schema file, modifies the schema name, and saves it as a new file; then use a tool such as MySQL Workbench25 or a command-line tool such as mysql to apply the script to the Amazon Aurora database. You can find mysql here: C:\Program Files\MySQL\MySQL Workbench 6.3 CE

Data Migration
You are now ready to migrate the data. First you need to set up the repository database, and then you need to build an SSIS migration package.

Set Up the Repository Database
From the Migration Server, connect to the Amazon RDS repository (MigrationCfg) database using SQL Server Management Studio. Populate the ConnectionsCfg table with the following values (a scripted way to populate this table is sketched after the project-creation steps below):
• MSSQLConnectionStr: The Azure SQL connection string, which has the following format:
Data Source=your_azure_server.database.windows.net;User ID=user_name;Password=db_password;Initial Catalog=TimeCard1;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False;
• MySQLConnectionStr: The Amazon Aurora connection string, which has the following format:
DRIVER={MySQL ODBC 5.3 ANSI Driver};SERVER=your_aurora_cluster_endpoint;DATABASE=TimeCard_Customer1;UID=user_name;Pwd=db_password;
• StartExecution: Indicates whether the migration for the given database has already started. This value should initially be set to 0.
• Status: Upon completion of the database migration, the status will be either Success or Failed, depending on the migration outcome.
• StartTime and EndTime: These are the statistics columns that show the database migration start and end times.
• DBName: Can be any string that is unique across all records. This string is used as the prefix in the name of the file containing the exported data.

Build an SSIS Migration Package
To build an SSIS migration package, perform the following steps.

Create a New Project
1. On the D:\ drive, create a new folder called Output.
2. Open the SQL Server Data Tools 2015 application.
3. Select File, then New, and then Project.
4. From Templates, select Integration Services, and then select Integration Services Project.
5. Name your project.
6. Choose OK.
7. Under Solution Explorer, right-click the project name and select Convert to Package Deployment Model.
8. Rename your package from Package.dtsx to something more meaningful, e.g., SQLMigration.dtsx.
9. In Properties, under Security, change ProtectionLevel to EncryptSensitiveWithPassword.
10. Choose PackagePassword and set the password.
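If you are registering many tenant databases, the ConnectionsCfg rows described above can be populated with a short script rather than by hand. The following is a minimal sketch, not part of the whitepaper's tooling: the pyodbc library, the "ODBC Driver 17 for SQL Server" driver name, the endpoints, credentials, and tenant names are placeholders or assumptions, and the column list is inferred from the descriptions above (CfgID is assumed to be an identity column).

# Register tenant databases in the ConnectionsCfg repository table.
# Sketch only: endpoints, credentials, and names below are placeholders.
import pyodbc

REPO_CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=repo.xxxxxxxx.us-east-1.rds.amazonaws.com;"   # RDS SQL Server endpoint
    "DATABASE=MigrationCfg;UID=repo_user;PWD=repo_password"
)

# One entry per tenant: (Azure SQL database, converted Aurora schema).
TENANTS = [
    ("TimeCard1", "TimeCard_Customer1"),
    ("TimeCard2", "TimeCard_Customer2"),
]

AZURE_TEMPLATE = (
    "Data Source=your_azure_server.database.windows.net;User ID=user_name;"
    "Password=db_password;Initial Catalog={db};Provider=SQLNCLI11.1;"
    "Persist Security Info=True;Auto Translate=False;"
)
AURORA_TEMPLATE = (
    "DRIVER={{MySQL ODBC 5.3 ANSI Driver}};SERVER=your_aurora_cluster_endpoint;"
    "DATABASE={schema};UID=user_name;Pwd=db_password;"
)

def register_tenants():
    with pyodbc.connect(REPO_CONN_STR) as conn:
        cur = conn.cursor()
        for azure_db, aurora_schema in TENANTS:
            # StartExecution = 0 marks the row as not yet picked up.
            cur.execute(
                "INSERT INTO ConnectionsCfg "
                "(MSSQLConnectionStr, MySQLConnectionStr, StartExecution, DBName) "
                "VALUES (?, ?, 0, ?)",
                AZURE_TEMPLATE.format(db=azure_db),
                AURORA_TEMPLATE.format(schema=aurora_schema),
                azure_db,
            )
        conn.commit()

if __name__ == "__main__":
    register_tenants()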
Set the SSIS Variables
1. From the SSIS menu, select Variables.
2. Add the following variables:
• ConfigID (Int32)
• DBName (String)
• MSConnectionString (String)
• MyConnectionString (String)
• S3Input_LT1 (String)
3. For S3Input_LT1, add the following expression:
"LOAD DATA FROM S3 's3-us-east-1://your-s3-bucket/" + @[User::DBName] + "_TL1.txt' INTO TABLE [Your_First_Table_Name] FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' (Col1, Col2, Col3, Col4);"
4. Adjust the table name and column names to reflect your database schema.
5. Repeat the last step to create multiple S3Input_LTx variables, one for each table. For example, if you have 10 tables, then you should have S3Input_LT1 through S3Input_LT10.
6. Modify the expression for each variable accordingly. For example, the last variable will have this expression:
"LOAD DATA FROM S3 's3-us-east-1://your-s3-bucket/" + @[User::DBName] + "_TL10.txt' INTO TABLE [Your_Last_Table_Name] FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' (Col1, Col2, Col3, Col4);"
Notice that in each variable expression, both the table name and the file name must be different. When you are done, you should have the full set of S3Input_LT1 through S3Input_LT10 variables.

Retrieve Configurations from Repository Database
1. From the SSIS Toolbox, drag and drop Execute SQL Task onto Control Flow.
2. Double-click Execute SQL Task.
3. Under General, change ResultSet to Single row.
4. Under SQL Statement, expand the list and select New connection. Set up a new connection to your Amazon RDS SQL Server repository database.
5. Set SQLStatement to EXEC [sp_GetConnectionStr].
6. Under Result Set, add four rows that map the returned values to the ConfigID, DBName, MSConnectionString, and MyConnectionString variables.

Create Data Migration Flow
Follow the steps below to create a data flow from Azure SQL Server to Amazon Aurora. To migrate multiple database tables simultaneously, put all data flows inside a Sequence Container by following these steps:
1. From the SSIS Toolbox, drag and drop Sequence Container onto the Control Flow panel.
2. Select Get Connection Strings and connect the green arrow to Sequence Container.

Output Data to Temporary File
1. From the SSIS Toolbox, drag and drop Data Flow Task into Sequence Container.
2. Double-click Data Flow Task.
3. From the SSIS Toolbox, drag and drop Source Assistant onto the new Data Flow Task panel.
4. Under Source Type, select SQL Server. Under Connection Managers, select New.
5. Choose OK.
6. Set up a connection to one of your Azure SQL databases.
7. When done, you should see OLE DB Source on the Data Flow Task panel. Double-click it.
8. From the Name of the table or the view menu, select the first table that you want to migrate and choose OK.
9. From the SSIS Toolbox, expand Other Destinations and drag and drop Flat File Destination onto the Data Flow panel.
10. Select OLE DB Source and connect the green arrow to Flat File Destination.
11. Double-click Flat File Destination. Under Flat File connection manager, choose New.
12. Select Delimited and choose OK.
13. Under File name, enter D:\Output\temp.txt and choose OK.
14. Choose Mappings and verify that the source columns map to the destination columns.
15. Choose OK.
16. Under Connection Managers, select the newly created connection to the Azure SQL database.
17. Under Properties:
a. Change DelayValidation to False. Choose OK.
b. Choose Expressions. Under Property, select ConnectionString. Under Expression, enter: @[User::MSConnectionString]
18. Repeat steps 16-17 for the Flat File connection, but set the ConnectionString expression to: "D:\\Output\\" + @[User::DBName] + "_TL1.txt"
19. Change DelayValidation to False.
20. Under Control Flow, select Data Flow Task. Under Properties, change DelayValidation to True.
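Before wiring up all the per-table data flows, you may want to confirm outside of SSIS that the flat-file format matches what the S3Input_LTx LOAD DATA FROM S3 expressions expect (delimiter, line terminator, and the {DBName}_TLx.txt naming pattern). The following optional sketch exports one table the same way; pyodbc, the ODBC driver name, and the server, database, table, and column names are placeholders, and a comma delimiter is assumed.

# Export one Azure SQL table to a delimited file named {DBName}_TL1.txt.
# Optional validation sketch only; all names below are placeholders.
import csv
import pyodbc

AZURE_CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your_azure_server.database.windows.net;"
    "DATABASE=TimeCard1;UID=user_name;PWD=db_password"
)
DB_NAME = "TimeCard1"            # matches the DBName value in ConnectionsCfg
TABLE = "Your_First_Table_Name"

def export_table():
    out_path = f"D:/Output/{DB_NAME}_TL1.txt"
    with pyodbc.connect(AZURE_CONN_STR) as conn, \
         open(out_path, "w", newline="", encoding="utf-8") as f:
        # No header row: LOAD DATA FROM S3 maps the columns positionally.
        writer = csv.writer(f, delimiter=",", lineterminator="\n")
        cur = conn.cursor()
        cur.execute(f"SELECT Col1, Col2, Col3, Col4 FROM {TABLE}")
        for row in cur:
            writer.writerow(row)
    print(f"Wrote {out_path}")

if __name__ == "__main__":
    export_table()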
Copy Temporary Data File to Amazon S3 Bucket
1. From the SSIS Toolbox, drag and drop Execute Process Task into Sequence Container.
2. Select Data Flow Task and connect the green arrow to Execute Process Task.
3. Double-click Execute Process Task and make the following changes:
• Under Process:
o Executable: C:\Program Files\Amazon\AWSCLI\aws.exe
o Working Directory: C:\Program Files\Amazon\AWSCLI
• Under Expressions:
o Property: Arguments
o Expression: "s3 cp D:\\Output\\" + @[User::DBName] + "_TL1.txt s3://your-s3-bucket"
4. Choose OK.
5. Select Execute Process Task. Under Properties, change DelayValidation to False.

Import Data from Temporary File to Amazon Aurora
1. From the SSIS Toolbox, drag and drop Execute SQL Task into Sequence Container.
2. Select Execute Process Task and connect the green arrow to Execute SQL Task.
3. Double-click Execute SQL Task.
4. Change ConnectionType to ADO.NET.
5. Under Connection, select New connection. Choose New.
6. Under Provider, select .Net Providers\Odbc Data Provider.
7. Check Use connection string and enter the following connection string:
Driver={MySQL ODBC 5.3 ANSI Driver};server=aurora_endpoint;database=TimeCard_Customer1;UID=aurora_user;Pwd=aurora_password;
8. Under General, set SQLSourceType to Variable and set SourceVariable to User::S3Input_LT1. Choose OK.
9. Under Connection Managers, select your Aurora connection.
10. Under Properties, change DelayValidation to True.
11. Choose Expressions. Under Property, select ConnectionString. Under Expression, enter: @[User::MyConnectionString]
(A scripted end-to-end test of the copy and import steps is sketched after the per-table notes below.)
For each table that you want to migrate, repeat all the steps defined in the following sections: Output Data to Temporary File, Copy Temporary Data File to Amazon S3 Bucket, and Import Data from Temporary File to Amazon Aurora. Reuse the connection managers for Azure SQL and the Amazon Aurora cluster; the Flat File connection needs to be set up for each table separately. In addition, for each table:
• Change the ConnectionString expression as follows:
o For the second table: "D:\\Output\\" + @[User::DBName] + "_TL2.txt"
o For the third table: "D:\\Output\\" + @[User::DBName] + "_TL3.txt"
o and so on
• Under Expressions, change the file name in the Arguments expression as follows:
o "s3 cp D:\\Output\\" + @[User::DBName] + "_TL2.txt s3://your-s3-bucket"
o "s3 cp D:\\Output\\" + @[User::DBName] + "_TL3.txt s3://your-s3-bucket"
o and so on
• Change SourceVariable as follows:
o For the second table: S3Input_LT2
o For the third table: S3Input_LT3
o and so on
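For testing a single table before wiring up the full package, the copy and import steps above can also be exercised end to end from a script. The sketch below mirrors the "aws s3 cp" Execute Process Task and the S3Input_LT1 statement; it assumes the Aurora DB cluster already has the S3 access role (RDSAccessS3) associated as described earlier, a comma-delimited file, and it uses boto3 and pymysql, which are not part of the Migration Server's installed tool set. The bucket, endpoint, credentials, table, and column names are placeholders.

# Upload one exported file to S3 and load it into Aurora with LOAD DATA FROM S3.
# Test sketch only; names below are placeholders and the cluster must already
# be authorized to access the S3 bucket.
import boto3
import pymysql

BUCKET = "your-s3-bucket"
DB_NAME = "TimeCard1"                       # DBName value from ConnectionsCfg
LOCAL_FILE = f"D:/Output/{DB_NAME}_TL1.txt"
S3_KEY = f"{DB_NAME}_TL1.txt"

def upload_and_load():
    # Equivalent of the "aws s3 cp" Execute Process Task.
    boto3.client("s3").upload_file(LOCAL_FILE, BUCKET, S3_KEY)

    # Equivalent of the Execute SQL Task that runs the S3Input_LT1 statement.
    conn = pymysql.connect(
        host="your_aurora_cluster_endpoint",
        user="aurora_user",
        password="aurora_password",
        database="TimeCard_Customer1",
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                f"LOAD DATA FROM S3 's3-us-east-1://{BUCKET}/{S3_KEY}' "
                "INTO TABLE Your_First_Table_Name "
                "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' "
                "(Col1, Col2, Col3, Col4)"
            )
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    upload_and_load()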
Tracking Migration Status
The database migration completion status, either Success or Failed, is stored in the repository database. To track the status, follow these steps:
1. Drag and drop Execute SQL Task below Sequence Container.
2. Select Sequence Container and connect the green arrow to Execute SQL Task.
3. Double-click Execute SQL Task.
4. Under Connection, select the connection to your Amazon RDS SQL Server Express repository database.
5. Under SQLStatement, enter:
UPDATE [ConnectionsCfg] SET [Status] = 'Success', EndTime = GETDATE() WHERE [CfgID] = ?
6. Under Parameter Mapping, add a new record that maps the User::ConfigID variable to the statement parameter.
7. Choose OK.
8. Repeat steps 1-6, modifying the SQL statement as follows:
UPDATE [ConnectionsCfg] SET [Status] = 'Failed', EndTime = GETDATE() WHERE [CfgID] = ?
9. Select the green arrow connecting Sequence Container with the second Execute SQL Task.
10. Under Properties, change Value to Failure.
11. Save and build the package. You can test the package by executing it directly from Visual Studio.

Migrate Multiple Azure SQL Databases
A package migrates a single database. To migrate multiple databases simultaneously, create a Windows batch file that calls the SSIS package. You can use the following commands to call the SSIS package:
cd C:\Program Files\Microsoft SQL Server\130\DTS\Binn
dtexec /F "C:\SSIS\SQLMigration.dtsx" /De your_package_password
Now you can execute the batch file simultaneously as many times, and for as many databases, as you set up in the repository database (a scripted alternative to the batch file is sketched at the end of this section). In the case of hundreds or thousands of databases, the migration process should be split across multiple EC2 instances. Here is one approach for setting up multiple instances:
1. Determine the optimal number of databases that can be migrated by a single EC2 instance (Migration Server). For instance, you can start by test-migrating 20 databases using a single instance. By monitoring the CPU and memory usage of the Migration Server, you can either increase or decrease the number of databases. You could also change to a larger EC2 instance type.
2. In Windows startup, set up execution of multiple migration scripts, up to the maximum determined in the previous step.
3. Create an AMI of the instance.26
4. Create an Auto Scaling group based on the AMI with the total number of EC2 instances required to migrate all databases.27
Note: You can find an example of an SSIS package on the Migration Server on the DBTools volume in /Apps/SQLMigration-S3.dtsx, or you can download it from http://rh-migration-blog.s3.amazonaws.com/SQL-Migration-S3.dtsx
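The whitepaper drives parallelism with a Windows batch file; the same fan-out can also be scripted so that a single process on the Migration Server starts a fixed number of package executions at once. This is only a sketch, under the assumption that each dtexec run claims the next pending ConnectionsCfg row via sp_GetConnectionStr as configured above; the dtexec path, package path, password, and concurrency values are placeholders you should adjust to your environment.

# Run the SSIS migration package N times in parallel, one run per tenant database.
# Sketch only: assumes each execution picks up the next pending ConnectionsCfg row.
import subprocess
from concurrent.futures import ThreadPoolExecutor

DTEXEC = r"C:\Program Files\Microsoft SQL Server\130\DTS\Binn\DTExec.exe"
PACKAGE = r"C:\SSIS\SQLMigration.dtsx"
PACKAGE_PASSWORD = "your_package_password"
DATABASE_COUNT = 10        # rows registered in ConnectionsCfg
MAX_PARALLEL = 5           # tune against Migration Server CPU and memory usage

def run_package(run_id: int) -> int:
    result = subprocess.run(
        [DTEXEC, "/F", PACKAGE, "/De", PACKAGE_PASSWORD],
        capture_output=True, text=True,
    )
    print(f"run {run_id}: exit code {result.returncode}")
    return result.returncode

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
        codes = list(pool.map(run_package, range(DATABASE_COUNT)))
    failed = [i for i, code in enumerate(codes) if code != 0]
    print(f"{len(codes) - len(failed)} succeeded, {len(failed)} failed")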
After the Migration
When your databases are running on Amazon Aurora, here are a few suggestions for next steps:
• Review the best practices for Amazon Aurora
• Review and optimize indexes and queries
• Monitor your Amazon Aurora DB cluster
• Consider Amazon Aurora with PostgreSQL compatibility as an alternative option to Amazon Aurora with MySQL compatibility

Conclusion
This whitepaper described one method for migrating multi-tenant Microsoft Azure SQL databases to Amazon Aurora. Other methods exist. We tested our solution several times using the following configuration:
• Source databases: 10 databases, each with 10 tables; each table had 500K records; the size of a single database was ~450 MB
• Destination database: a single Amazon Aurora cluster running on a db.r3.8xlarge instance class; 10 packages were executed simultaneously on an EC2 m4.4xlarge instance
• Total migration time for all 10 databases: ~3 minutes
The results were consistent across all of the tests we ran.

Contributors
The following individuals and organizations contributed to this document:
• Remek Hetman, Senior Cloud Infrastructure Architect, Amazon Web Services
• Yoav Eilat, Senior Product Marketing Manager, Amazon Web Services

Further Reading
For additional information, see the following:
• https://aws.amazon.com/rds/aurora/
• https://aws.amazon.com/documentation/SchemaConversionTool/
• https://aws.amazon.com/cloudformation/
• https://aws.amazon.com/vpc/

Document Revisions
August 2017 – First publication

Notes
1. https://aws.amazon.com/dms/
2. https://aws.amazon.com/rds/aurora/
3. https://msdn.microsoft.com/en-us/library/aa479086.aspx
4. https://aws.amazon.com/documentation/SchemaConversionTool/
5. https://docs.microsoft.com/en-us/sql/integration-services/ssis-how-to-create-an-etl-package
6. https://aws.amazon.com/redshift/
7. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
8. https://aws.amazon.com/rds/
9. https://aws.amazon.com/s3
10. https://aws.amazon.com/vpc/
11. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
12. https://aws.amazon.com/cloudformation/
13. http://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/getting-started-ipv4.html
14. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.CreateVPC.html
15. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html#CreatingSecurityGroups
16. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
17. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Authorizing.AWSServices.html
18. http://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html
19. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.CreateInstance.html
20. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Authorizing.AWSServices.html
21. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Authorizing.AWSServices.html#Aurora.Authorizing.AWSServices.AddRoleToDBCluster
22. http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2_GetStarted.html
23. https://aws.amazon.com/premiumsupport/knowledge-center/retrieve-windows-admin-password/
24. https://docs.microsoft.com/en-us/azure/sql-database/sql-database-export
25. https://dev.mysql.com/downloads/workbench/
26. http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/Creating_EBSbacked_WinAMI.html
27. http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/Creating_EBSbacked_WinAMI.html
Using AWS in the Context of NCSC UK's Cloud Security Principles
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Using AWS in the c ontex t of NCSC UK’s Cloud Security Principles October 2016 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 2 of 47 Table of Contents Abstract 3 Scope3 Considerations for public sector organisations 3 Shared Responsibility Environment 4 Implementing Cloud Security Principles in AWS 6 Principle 1: Data in transit protection 6 Principle 2: Asset protection and resilience 8 Principle 3: Separation between consumers 19 Principle 4: Governance framework 21 Principle 5: Operational securi ty 23 Principle 6: Personnel security 29 Principle 7: Secure development 30 Principle 8: Supply chain security 31 Principle 9: Secure consumer management 32 Principle 10: Identity and authentication 36 Principle 11: External interface protection 38 Principle 12: Secure service administration 40 Principle 13: Audit information provision to consumers 42 P rinciple 14: Secure use of the service by the consumer 43 Conclusion 45 Additional Resources 45 Document Revisions 46 Appendix – AWS Platform Benefits 47 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 3 of 47 Abstract This whitepaper is intended to assist organisations using Amazon Web Services (AWS) for United Kingdom (UK) OFFICI AL classified workloads in alignment with National Cyber Security Centre ’s (NCSC) Cloud Security Principles published under the Cloud Security Guidance This document aims to help the reader understand: • How AWS implements security processes and provides assurance over those processes for each of the Cloud Security Principles • The role that the customer and AWS play in managing and securing content stored on AWS • The way AWS services operate including how customers can address security and risk management using AWS cloud services Scope This whitepaper is based around typical questions asked by AWS customers when considering the implications of handling OFFICIAL information in relation to NCSC Cloud Security Principles Our intention is to provide you with guidance that you can use to make an informed decision when performing risk assessments to help address common security requirements This whitepaper is not legal advice for your specific use of AWS; we strongly encourage you to obtain appropriate compl iance advice about your specific data privacy and security requirements as well as applicable laws relevant to your projects and datasets Considerations for public sector organisations NCSC published the Cloud Security Guidance documents for public sector organisations that are considering the use of cloud services for handling OFFICIAL information on 23 April 2014 Under this guidance HM Government information assets are currently classified into three categories: OFFICIAL SECRET and TOP SECRET Each information asset classification attracts a baseline set of security controls providing appropriate protection against typical threats NCSC C loud Security Guidance includes a risk management approach to using cloud services a summary of the Cloud Securit y Principles and guidance on 
implementation of the Cloud Security Principles Additionally supporting guidance documents are included on recognised standards and definitions separation requirements for cloud services and specific guidance on the measures that customers of Infrastructure as a Service (IaaS) offerings should consider taking This whitepaper provides guidance on how AWS aligns with Cloud Security Principles and the objectives of these principles as part of NCSC ’s Cloud Security Guidance The legacy Impact Level accreditation scheme has been phased out and is no longer the mechanism used to describe the security properties of a system including cloud services Public sector This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 4 of 47 organisations are ultimately responsible for risk management dec isions relating to the use of cloud services GovUK Digital Marketplace Amazon Web Services currently provide the services listed on our UK G Cloud page on the UK Government Digital Marketplac e When using AWS services customers maintain complete control over their content and are responsible for managing critical content security requirements including: • What content they choose to store on AWS • Which AWS services are used with the content • In what country that content is stored • The format and structure of that content and whether it is masked anonymised or encrypted • Who has access to that content and how those access rights are granted managed and revoked Because AWS customers retain control over their data they also retain responsibilities relating to that content as part of the AWS “shared responsibility ” model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of the Cloud Security Principles Shared Responsibility Environment Using AWS creates a shared responsibility model between customers and AWS AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate In turn customers assume responsibility for and management of the guest operating system This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 5 of 47 (including updates and security patches) other associated application software as well as the configuration of the AWS provided security group firewall Customers should carefully consider the services they choose as their responsibilities vary depending on the services they use the integration of those services into their IT environments and applicable laws and regulations It is possible to enhance security and/or meet more stringent compliance requirements by leveraging technology such as hostbased firewalls hostbased intrusion detection/ prevention and encryption AWS provides tools and information to assist customers in their efforts to account for and to validate that controls are operating effectively in their extended IT environment More information can be found on the AWS Compliance center at http://awsamazoncom/compliance This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & 
Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 6 of 47 Data in Transit Protection Consumer data transiting networks should be adequately protected again st tampering (integrity) and eavesdropping (confidentiality) This should be achieved via a combination of: •Network protection (denying your attacker access to intercept data) •Encryption (denying your attacker the ability to read data) Implementation objectives Consumers should be sufficiently confident that: •Data in transit is protected between the consumer’s end user device and the service •Data in transit is protected internally within the service •Data in transit is protected between the service and other services (eg where Application Programming Interfaces (APIs) are exposed) https://wwwgovuk/government/publications/i mpleme nting thecloud security principles/implementing thecloud secu rity principles#principle1 datain transit protection Implementing Cloud Security Principles in AWS The Cloud Security Guidance published by NCSC lists 14 essential principles to consider when evaluating cloud services and why these may be important to the public sector organisation Cloud service users should decide which of the principles are important and how much (if any) assurance the users require in the implementation of these principles The 14 Cloud Security Principles their objectives and how AWS services can be used to implement these objectives are described with the related assurance approach Principle 1: Data in transit protection Implementation approach AWS uses various technologies to enable data in transit protection between the consumer and a service within each service and between the services Cloud infrastructure and applications often communicate over public links such as the Internet so it is impo rtant to protect data in transit when you run applications in the cloud This involves protecting network traffic between clients and servers and network traffic between servers Further information on enabling network security for data protection is provided in the next section AWS Network Protection The AWS network provides protection against network attacks Procedures and mechanisms are in place to appropriately restrict unauthorized internal and external access to data and access to customer data is appropriately segregated from other customers Examples in clude: Distributed Denial of Service (DDoS) Attacks: AWS API endpoints are hosted on large Internet scale infrastructure and use proprietary This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 7 of 47 DDoS mitigation techniques Additionally AWS networks are multi homed across a number of providers to achieve Internet access diversity Man in the Middle (MITM) Attacks: All of the AWS APIs are available via Secure Sockets Layer (SSL) protected endpoints which provide server authentication Amazon EC2 Amazon Machine Images (AMIs) automatically generate new Secure Shell (SSH) host keys on first boot and log them to the instance’s console Customers can then use the secure APIs to call the console and access the host keys before logging into the instance for the first time Customers can use SSL for all of their interactions with AWS Internet Protocol (IP) Spoofing: The AWS controlled hostbased 
firewall infrastructure will not permit an instance to send traffic with a source IP or Media Access Control ( MAC) address other than its own Port Scanning: Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy Violations of the AWS Acceptable Use Policy are taken seriously and every reported violation is investigated Customers can report suspected abuse via the contacts available on our website at: http://aws portalamazoncom/gp/aws/html forms controller/contactus/AWSAbuse When unauthorized port scanning is detected by AWS it is stopped and block ed Port scans of Amazon EC2 instances are generally ineffective because by default all inbound ports on Amazon EC2 instances are closed and are only opened by the customer Customers’ strict management of security groups can further mitigate the threat of port scans Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer’s instances and do not violate the AWS Acceptable Use Policy Advance approval for these types of scans can be initiated by submitting a request via the AWS Vulnerability / Penetration Testing Request Form Customer Network Protection Virtual Private Cloud (VPC) : A VPC is an isolated portion of the AWS cloud within which customers can deploy Amazon EC2 instances into subnets that segment the VPC’s IP address range (as designated by the customer) and isolate Amazon EC2 instances in one subnet from another Amazon EC2 instances within a VPC are only accessible by a customer via an IPsec Virtual Private Network (VPN) connection that is established to the VPC IPsec VPN: an IPsec VPN connection connects a customer’s VPC to another network designated by the customer IPsec is a protocol suite for securing IP communications by authenticating and encrypting each IP packet of a data stream Amazon VPC customers can create an IPsec VPN connection to their VPC by first establishing an Internet Key Exchange (IKE) security association between their Amazon VPC VPN gateway and another network gateway using a pre shared key as the authenticator Upon establishment IKE negotiates an ephemeral key to secure future IKE messages An IKE security association cannot be established unless there is complete agreement among the parameters including SHA1 authentication and AES 128bit encryption Next using the IKE ephemeral key keys are established between the VPN gateway and customer gateway to form an IPsec security This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 8 of 47 Asset protection and resilience Consumer data and the assets storing or processing it should be protected against physical tampering loss damage or seizure https://wwwgovuk/government/publications/implem enting thecloud security principles/implementing thecloud security principles#principle 2asset protection and resilience association Traffic between gateways is encrypted and decrypted using this security association IKE automatically rotates the ephemeral keys used to encrypt traffic within the IPsec security association on a regular basis to ensure confidentiality of communications API: Amazon VPC API calls are part of the Amazon EC2 WSDL All API calls to create and delete VPCs subnets VPN gateways and IPsec VPN connections are all signed using an X509 certificate and an associated 
private key or the customer’s AWS Secret Access Key Without access to the customer’s Secret Access Key or X509 certificate Amazon EC2 API calls cannot be successfully made with that customer’s key pair In addition API calls can be encrypted with SSL to maintain confidentiality AWS Encryption (Data in transit) AWS supports both IPsec and SSL/TLS for protection of data in transit IPsec is a protocol that extends the IP protocol stack often in network infrastructure and allows applications on upper layers to communicate securely without modification SSL/TLS on the other hand operates at the session layer and while there are thirdparty SSL/TLS wrappers it often requires support at the application layer as well For further details on AWS service specific data in transit security please refer to the AWS Security Best Practices whitepaper Assurance approach The data in transit protection principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications among others are recognised by the European Union Agency for Network and Information Security (ENISA) under the Cloud Certification Schemes The controls in relation to data in transit protection are validated independently at least annually under the certification programs Based on the alternatives provided for selection within Cloud Security Principles guidance AWS uses Service Provider Assertion in respect of region specific requirements Principle 2: Asset protection and resilience Implementation approach The AWS cloud is a globally available p latform in which you can choose the geographic region in which your data is located AWS data centers are built in clusters in various global regions AWS calls This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 9 of 47 21 Physical location and legal jurisdiction The locations at which consumer data is stored processed and managed from must be identified so that organisations can understand the legal circumstances in which their data could be accessed without their consent Public sector organisations will also need to understand how data handling controls within the service are enforced relative to UK legislation Inappropriate protection of consumer data could result in legal and regulatory sanction or reputational damage Implementation objectives Consumers should understand: •What countries their data will be stored processed and managed from and how this affects their compliance with relevant legislation •Whether the legal jurisdiction(s) that the service provider operates within are acceptable to them https://wwwgovuk/government/publications/implem enting thecloud security principles/implementing thecloud security principles#principle 2asset protection and resilience these data center clusters Availability zones (AZs) As of October 2016 AWS maintains 38 AZs organized into 14 regions globally As an AWS customer you are responsible for carefully selecting the Availability Zones where your systems will reside You can choose to use one region all regions or any combination of regions using builtin features available within the AWS Management Console AWS regions and Availability Zones ensure that if you have location specific requirements or regional data privacy policies you can estab lish and 
maintain your private AWS environment in the appropriate location You can choose to replicate and back up content in more than one region; AWS does not move customer data outside the region(s) you configure Availability Zones are designed for fault isolation They are connected to multiple Internet Service Providers (ISPs) and different power grids They are interconnected using high speed links so applications can rely on Local Area Network (LAN) connectivity for communication between Availability Zones within the same region This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 10 of 47 On March 6 2015 the AWS data processing addendum including the Model Clauses was approved by the group of EU data protection authorities known as the Article 29 Working Party This approval means that any AWS customer who requires the Model Clauses can now rely on the AWS data processing addendum as providing sufficient contractual commitments to enable international data flows in accordance with the Directive For more detail on the approval from the Article 29 Working Party please visit the Luxembourg Data Protection Authority webpage here: http://wwwcnpdpubliclu/en/actualites/international/2015/03/AWS/indexhtml AWS complies with Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data Most countries have data access laws which purport to have extraterritorial application An example of a US law with extra territorial reach that is often mentioned in the context of cloud services is the US Patriot Act The Patriot Act is not dissimilar to laws in many other developed nations that enable governments to obtain information with respect to investigations relating to international terrorism and other foreign intelligence issues Any request for documents under the Patriot Act requires a court order demonstrating that the request complies with the law including for example that the request is related to legitimate investigations Assurance approach The legal jurisdiction subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 and AICPA SOC 1 SOC 2 SOC 3 certification programs These certifications are recognised by the European Union Agency for Network and Information Security (ENISA) under the Cloud Certification Schemes The controls in relation to legal jurisdic tion are validated independently at least annually under the certification programs Based on the alternatives provided for selection within Cloud Security Principles guidance AWS uses Service Provider Assertion in respect of region specific requirements The p hysical location subprinciple and related processes are not validated independently within AWS compliance programs Based on the alternatives provided for selection within Cloud Security Principles guidance the controls in relation to physical loca tion do not exist within the existing certification programs for them to be validated independently Our ISO 27001:2013 and ISO 9001:2008 certifications list all the locations in scope of the independent annual audits AWS uses Service Provider Asse rtion in respect of region specific requirements This paper has been archived For the latest technical content refer t o the 
AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 11 of 47 22 Data centre security The locations used to provide cloud services need physical protection against unauthorised access tampering theft or reconfiguration of systems Inadequate protections may result in the disclosure alteration or loss of data Implementation objectives Consumers should be confident that the physical security measures employed by the provider are sufficient for their intended use of the service https://wwwgovuk/government/publications/i mplementing thecloud security principles/implementing thecloud security principles#principle2 asset protection and resilience 22 Data centre security Implementation approach Amazon has significant experience in securing designing constructing and operating large scale data centers This experience has been applied to the AWS platform and infrastructure AWS provides data center physical access to approved employees and contractors who have a legitimate business need for such privileges All individuals are required to present identification and are signed in Visitors are escorted by authorised staff When an employee or contractor no longer requires these privileges his or her access is promptly revoked even if he or she continues to be an employee of Amazon or AWS In addition access is automatically revoked when an employee’s record is terminated in Amazon’s HR system Cardholder access to data centers is reviewed quarterly Cardholders marked for removal have their access revoked as part of the quarterly review Physical access is controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance intrusion detection systems and other electronic means Authorized staff utilises multi factor authentication mechanisms to access data center floors Assurance approach The data centre security subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications are recognised by ENISA under the Cloud Certification Schemes The controls in relation to data centre security are validated independently at least annually under the certification programs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 12 of 47 23 Data at rest protection Consumer data should be protected when stored on any type of media or storage within a service to ensure that it is not accessible by local unauthorised parties Without appropriate measures in place data may be inadvertently disclosed on discarded lost or stolen media Implementation objectives Consumers should have sufficient confidence that storage media containing their data is protected from unauthorised access https://wwwgovuk/government/publications/imple menting thecloud security principles/im plementing thecloud security principles#principle 2asset protection and resilience 23 Data at rest protection Implementation approach As AWS customers you have access to various security and data protection features that allows sufficient confidence that data at rest is protected from unauthorised access One of the widely used methods to protect data at rest in storage media is 
encryption Within AWS there are several options for encrypting data ranging from completely automated AWS encryption solutions (server side) to manual client side options Your decision to use a particular encryption model may be based on a variety of factors including the AWS service(s) being used your institutional policies regulatory and business complian ce requirements your technical capability specific requirements of the data use certificate and other factors There are three different models for how you and/or AWS provide the encryption method and work with the key management infrastructure (KMI) as illustrated in the diagram below Customer Managed AWS Managed Model A Customer manages the encryption method and entire KMI Model B Customer manages the encryption method; AWS provides storage component of KMI while custo mer provides management layer of KMI Model C AWS manages the encryption method and the entire KMI Encryption Method Encryption Method Encryption Method Key Storage Key Management Key Storage Key Management Key Storage Key Management This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 13 of 47 In addition to the client side and server side encryption features built into many AWS services another common way to protect keys in a KMI is to use dedicated storage and data processing devices that perform cryptographic operations using keys on the devices These devices called hardware security modules (HSMs) typically provide tamper evidence or resistance to protect keys from unauthorized use For customers who choose to use AWS encryption capabilities for controlled datasets the AWS CloudHSM service is another encryptio n option within your AWS environment giving you use of HSM s that are designed and validated to US government standards (NIST FIPS 1402) for secure key management If you want to manage the keys that control encryption of data in AWS services but don’t want to manage the required KMI resources either within or external to AWS you can leverage the AWS Key Management Service (KMS) AWS Key Management Service is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data and it uses HSMs to protect the security of your keys AWS Key Management Service is integrated with other AWS services to help meet your regulatory and compliance needs AWS KMS and other AWS services not listed on Digital Marketplace are available through our partner network AWS KMS also allows you to implement key creation rotation and usage policies AWS KMS is designed so that access to your master keys is restricted The service is built on systems that are designed to protect your master keys with extensive hardening techniques such as never storing plaintext master keys on disk not persisting them in memory and limiting which systems can connect to the device All access to update software on the service is controlled by a multi level approval process that is audited and reviewed by an independent group within Amazon For more information about encryption options within the AWS environment see Secu ring Data at Rest with Encryption as well as the AWS CloudHSM product details page To learn more about how AWS KMS works you can read the AWS Key Management Service Whitepaper To learn more about specific data at rest protection features in Amazon S3 Amazon EBS 
For more information about encryption options within the AWS environment, see Securing Data at Rest with Encryption as well as the AWS CloudHSM product details page. To learn more about how AWS KMS works, you can read the AWS Key Management Service whitepaper. To learn more about specific data at rest protection features in Amazon S3, Amazon EBS, Amazon RDS, and Amazon Glacier, please refer to the AWS Security Best Practices whitepaper. For the implementation approach towards physical security controls to secure data at rest, please refer to the details described in Data Centre Security (Section 2.2) of this document.

Assurance approach

The data at rest protection subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to data at rest protection are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

2.4 Data sanitisation

The process of provisioning, migrating, and deprovisioning resources should not result in unauthorised access to consumer data. Inadequate sanitisation of data could result in:

• Consumer data being retained by the service provider indefinitely
• Consumer data being accessible to other consumers of the service as resources are reused
• Consumer data being lost or disclosed on discarded, lost, or stolen media

Implementation objectives

Consumers should be sufficiently confident that:

• Their data is erased when resources are moved or re-provisioned, when they leave the service, or when they request it to be erased
• Storage media which has held consumer data is sanitised or securely destroyed at the end of its life

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-2-asset-protection-and-resilience

Implementation approach

Helping to protect the confidentiality, integrity, and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence. AWS uses techniques described in industry-accepted standards to ensure that data is erased when resources are moved or re-provisioned, when they leave the service, or when you request it to be erased.

AWS Data Erasure

Amazon EBS volumes are presented to you as raw, unformatted block devices that have been wiped prior to being made available for use. Wiping occurs immediately before reuse as a mandatory process before re-provisioning. If you have procedures requiring that all data be wiped via a specific method, such as those detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization"), you have the ability to do so on Amazon EBS. You should conduct a specialized wipe procedure prior to deleting the volume for compliance with your established requirements.
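As a minimal sketch of such a wipe procedure, the following Python fragment overwrites an attached, unmounted EBS block device with a single pass of zeros and then detaches and deletes the volume with boto3. The device path and volume ID are hypothetical, the script must run as root on the instance to which the volume is attached, and a single zero-fill pass is shown only for illustration; your compliance requirements (for example, multi-pass DoD 5220.22-M patterns) may demand a different method.

import errno
import boto3

DEVICE = '/dev/xvdf'                     # hypothetical attachment point
VOLUME_ID = 'vol-0123456789abcdef0'      # hypothetical volume ID

# Single-pass zero overwrite of the raw block device (requires root).
zeros = b'\x00' * (1024 * 1024)
with open(DEVICE, 'wb', buffering=0) as dev:
    try:
        while True:
            dev.write(zeros)
    except OSError as exc:
        if exc.errno != errno.ENOSPC:    # stop cleanly at end of device
            raise

# Detach the wiped volume, wait until it is available, then delete it.
ec2 = boto3.client('ec2')
ec2.detach_volume(VolumeId=VOLUME_ID)
ec2.get_waiter('volume_available').wait(VolumeIds=[VOLUME_ID])
ec2.delete_volume(VolumeId=VOLUME_ID)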
Similarly, when deletion is requested for an Amazon RDS database instance, the database instance is marked for deletion. An Amazon RDS automation sweeper deletes the instance from the Amazon RDS storage system. At this point the instance is no longer accessible to the customer or AWS and, unless the customer requested a 'delete with final snapshot copy', the instance cannot be restored and will not be listed by any of the tools or APIs.
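Both deletion paths can be expressed through the RDS API; the hedged boto3 sketch below shows a deletion that first captures a final snapshot and, for contrast, one that does not. The instance and snapshot identifiers are hypothetical placeholders.

import boto3

rds = boto3.client('rds')

# Delete with a final snapshot copy, from which the instance can be restored.
rds.delete_db_instance(
    DBInstanceIdentifier='example-db',             # hypothetical instance name
    SkipFinalSnapshot=False,
    FinalDBSnapshotIdentifier='example-db-final'   # hypothetical snapshot name
)

# Alternatively, delete without a final snapshot; the instance cannot be restored:
# rds.delete_db_instance(DBInstanceIdentifier='example-db', SkipFinalSnapshot=True)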
AWS Secure Destruction

When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization") to destroy data as part of the decommissioning process. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

Assurance approach

The data sanitisation subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to data sanitisation are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

2.5 Equipment disposal

Once equipment used to deliver a service reaches the end of its useful life, it should be disposed of in a way that does not compromise the security of the service or consumer data stored in the service.

Implementation objectives

Consumers should be sufficiently confident that:

• All equipment potentially containing consumer data, credentials, or configuration information for the service is identified at the end of its life (or prior to being recycled)
• Any components containing sensitive data are sanitised, removed, or destroyed as appropriate
• Accounts or credentials specific to redundant equipment are revoked to reduce their value to an attacker

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-2-asset-protection-and-resilience

Implementation approach

Helping to protect the confidentiality, integrity, and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence. AWS uses techniques described in industry-accepted standards to ensure that data is erased when resources are moved or re-provisioned, when they leave the service, or when you request it to be erased. When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization") to destroy data as part of the decommissioning process. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

Assurance approach

The equipment protection subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to equipment protection are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

2.6 Physical resilience and availability

Services have varying levels of resilience, which will affect their ability to operate normally in the event of failures, incidents, or attacks. A service without guarantees of availability may become unavailable, potentially for prolonged periods, with attendant business impacts.

Implementation objectives

Consumers should be sufficiently confident that the availability commitment of the service, including their ability to recover from outages, meets their business needs.

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-2-asset-protection-and-resilience

Implementation approach

The AWS Resiliency program encompasses the processes and procedures by which AWS identifies, responds to, and recovers from a major event or incident within our environment. This program aims to provide you sufficient confidence that your business needs for the availability commitment of the service, including the ability to recover from outages, are met. This program builds upon the traditional approach of addressing contingency management, which incorporates elements of business continuity and disaster recovery plans, and expands this to consider critical elements of proactive risk mitigation strategies, such as engineering physically separate Availability Zones (AZs) and continuous infrastructure capacity planning. AWS contingency plans and incident response playbooks are maintained and updated to reflect emerging continuity risks and lessons learned from past incidents. Plans are tested and updated through the due course of business (at least monthly), and the AWS Resiliency plan is reviewed and approved by senior leadership annually.

AWS has identified critical system components required to maintain the availability of the system and recover service in the event of outage. Critical system components (for example, code bases) are backed up across multiple, isolated locations known as Availability Zones. Each Availability Zone runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. Common points of failure, like generators and cooling equipment, are not shared across Availability Zones. Additionally, Availability Zones are physically separate and designed such that even extremely uncommon disasters, such as fires, tornados, or flooding, should only affect a single Availability Zone. AWS replicates critical system components across multiple Availability Zones, and authoritative backups are maintained and monitored to ensure successful replication.
AWS continuously monitors service usage to project infrastructure needs to support availability commitments and requirements. AWS maintains a capacity planning model to assess infrastructure usage and demands at least monthly, and usually more frequently (e.g., weekly). In addition, the AWS capacity planning model supports the planning of future demands to acquire and implement additional resources based upon current resources and forecasted requirements.

Combined usage of Availability Zones and geographically distributed Regions, together with numerous AWS service features, provides customers with capabilities to design and architect resilient applications and platforms. AWS customers benefit from the aforementioned resiliency features when their architectures are designed towards multiple failure scenarios.

Assurance approach

The physical resilience and availability subprinciple and related processes are not validated independently within AWS compliance programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, the controls in relation to physical resilience and availability do not exist within the existing certification programs for them to be validated independently. AWS publishes the most up-to-the-minute information on service availability at status.aws.amazon.com. AWS uses Service Provider Assertion in respect of region-specific requirements.

Principle 3: Separation between consumers

Separation between different consumers of the service prevents one malicious or compromised consumer from affecting the service or data of another. Some of the important characteristics which affect the strength and implementation of the separation controls are:

• The service model (e.g., IaaS, PaaS, SaaS) of the cloud service
• The deployment model (e.g., public, private, or community cloud) of the cloud service
• The level of assurance available in the implementation of separation controls

SaaS and PaaS services built upon IaaS offerings may inherit some of the separation properties of the underlying IaaS infrastructure.

Implementation objectives

Consumers should:

• Understand the types of consumers with which they share the service or platform
• Have confidence that the service provides sufficient separation of their data and service from other consumers of the service
• Have confidence that their management of the service is kept separate from other consumers (covered separately as part of Principle 9)

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-3-separation-between-consumers

Implementation approach

Helping to protect the confidentiality, integrity, and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence. Using multiple levels of security, AWS aims to provide you confidence that sufficient separation of data and management of the service exists from other consumers of the service.

Multiple Levels of Security

Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host platform, the virtual instance OS or guest OS, firewalls, and signed API calls. Each of these items builds on the capabilities of the others. This helps prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users, and provides Amazon EC2 instances that are as secure as possible without sacrificing flexibility of configuration.
Packet sniffing by other tenants: Virtual instances are designed to prevent other instances running in promiscuous mode from receiving or "sniffing" traffic that is intended for a different virtual instance. While customers can place interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer and located on the same physical host cannot listen to each other's traffic. While Amazon EC2 does provide protection against one customer inadvertently or maliciously attempting to view another's data, as a standard practice customers can encrypt sensitive traffic.

Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically erases every block of storage before making it available for use, which protects one customer's data from being unintentionally exposed to another. Customers can further protect their data using traditional filesystem encryption mechanisms or, in the case of Elastic Block Store (EBS) volumes, enable AWS-managed disk encryption.

Firewall: Amazon EC2 provides a complete firewall solution, referred to as a security group; this mandatory inbound firewall is configured in a default deny-all mode, and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by any combination of protocol, port, and source (an individual IP, a Classless Inter-Domain Routing (CIDR) subnet, or another customer-defined security group).

Customers launching instances in a Virtual Private Cloud (VPC) also have access to additional features, such as restricting outbound traffic from an instance. A VPC is an isolated portion of the AWS cloud within which customers can deploy Amazon EC2 instances into subnets that segment the VPC's IP address range (as designated by the customer) and isolate Amazon EC2 instances in one subnet from another. Amazon EC2 instances within a VPC are only accessible by a customer via an IPsec Virtual Private Network (VPN) connection that is established to the VPC.
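To make the default-deny behaviour concrete, the following hedged boto3 sketch creates a security group in a VPC and explicitly opens only HTTPS (TCP 443) from a single CIDR range; until the ingress rule is added, the group admits no inbound traffic at all. The VPC ID, group name, and CIDR range are hypothetical placeholders.

import boto3

ec2 = boto3.client('ec2')

# Create a security group; with no ingress rules it denies all inbound traffic.
group_id = ec2.create_security_group(
    GroupName='web-tier',                  # hypothetical name
    Description='HTTPS only from one range',
    VpcId='vpc-0123456789abcdef0'          # hypothetical VPC ID
)['GroupId']

# Explicitly open TCP 443 from one CIDR range; everything else stays blocked.
ec2.authorize_security_group_ingress(
    GroupId=group_id,
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 443,
        'ToPort': 443,
        'IpRanges': [{'CidrIp': '203.0.113.0/24'}]   # documentation range
    }]
)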
Assurance approach

The separation between consumers principle and related processes are not validated independently within AWS compliance programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, the controls in relation to separation between consumers do not exist within the existing certification programs for them to be validated independently. AWS uses Service Provider Assertion in respect of region-specific requirements.

Principle 4: Governance framework

The service provider should have a security governance framework that coordinates and directs their overall approach to the management of the service and information within it.

Implementation objectives

The consumer has sufficient assurance that the governance framework and processes in place for the service are appropriate for their intended use of it.

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-4-governance-framework

Implementation approach

AWS's Compliance and Security teams have established an information security framework and policies based on the Control Objectives for Information and related Technology (COBIT) framework, and have effectively integrated the ISO 27001 certifiable framework based on ISO 27002 controls, the American Institute of Certified Public Accountants (AICPA) Trust Services Principles, the PCI DSS v3.0, and the National Institute of Standards and Technology (NIST) Publication 800-53 Rev 4 ("Recommended Security Controls for Federal Information Systems"). AWS maintains the security policy, provides security training to employees, and performs application security reviews. These reviews assess the confidentiality, integrity, and availability of data, as well as conformance to the information security policy.

As part of a globally accepted governance framework, AWS has achieved ISO 27001:2013 certification of our Information Security Management System (ISMS), covering AWS infrastructure, data centers, and many services. ISO 27001/27002 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer information, based on periodic risk assessments appropriate to ever-changing threat scenarios. In order to achieve the certification, a company must show it has a systematic and ongoing approach to managing information security risks that affect the confidentiality, integrity, and availability of company and customer information. This certification reinforces Amazon's commitment to providing significant information regarding our security controls and practices. AWS's ISO 27001:2013 certification includes all AWS data centers in all Regions worldwide, and AWS has established a formal program to maintain the certification.

AWS has an established information security organization managed by the AWS Security team and led by the AWS Chief Information Security Officer (CISO). AWS Security establishes and maintains formal policies and procedures to delineate the minimum standards for logical access on the AWS platform and infrastructure hosts. The policies also identify functional responsibilities for the administration of logical access and security. Where applicable, AWS Security leverages the information system framework and policies established and maintained by Amazon Corporate Information Security. The aforementioned processes aim to provide you sufficient confidence that the governance framework and processes in place for the AWS services are appropriate for your intended use of them.

Assurance approach

The governance framework principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to governance framework are validated independently at least annually under the certification programs.
Principle 5: Operational security

The service provider should have processes and procedures in place to ensure the operational security of the service. The service will need to be operated and managed securely in order to impede, detect, or prevent attacks against it.

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-5-operational-security

5.1 Configuration and change management

Implementation approach

Software: AWS applies a systematic approach to managing change so that changes to customer-impacting services are reviewed, tested, approved, and well communicated. Change management (CM) processes are based on Amazon change management guidelines and tailored to the specifics of each AWS service. These processes are documented and communicated to the necessary personnel by service team management. The goal of AWS's change management process is to prevent unintended service disruptions and maintain the integrity of service to the customer. Change details are documented in Amazon's CM workflow tool or another change management or deployment tool. Changes deployed into production environments are:

• Reviewed: peer reviews of the technical aspects of a change
• Tested: when applied, the change will behave as expected and not adversely impact performance
• Approved: to provide appropriate oversight and understanding of business impact from service owners (management)

Changes are typically pushed into production in a phased deployment, starting with the lowest-impact sites. Deployments are closely monitored so impact can be evaluated. Service owners have a number of configurable metrics that measure the health of the service's upstream dependencies. These metrics are closely monitored with thresholds and alarming in place (e.g., latency, availability, fatal errors, CPU utilization, etc.). Rollback procedures are documented in the Change Management (CM) ticket or other change management tool. When possible, changes are scheduled during regular change windows. Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident, and are logged and approved as appropriate.

Infrastructure: AWS internally developed configuration management software is installed when new hardware is provisioned. These tools are run on all hosts to validate that they are configured, and that software is installed, in a standard manner based on host classes, and are updated regularly. Only approved systems engineers and additional parties authorized through a permissions service may log in to the central configuration management servers. Emergency, non-routine, and other configuration changes to existing AWS infrastructure are authorized, logged, tested, approved, and documented in accordance with industry norms for similar systems.
Updates to AWS infrastructure are done in such a manner that, in the vast majority of cases, they will not impact the customer and their service use. AWS communicates with customers, either via email or through the AWS Service Health Dashboard (http://status.aws.amazon.com), when service use may be adversely affected.

Assurance approach

The configuration and change management subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to configuration and change management are validated independently at least annually under the certification programs.

5.2 Vulnerability management

Occasionally, vulnerabilities will be discovered which, if left unmitigated, will pose an unacceptable risk to the service. Robust vulnerability management processes are required to identify, triage, and mitigate vulnerabilities. Services which do not have effective vulnerability management processes will quickly become vulnerable to attack, leaving them at risk of exploitation using publicly known methods and tools.

Implementation objectives

Consumers should have confidence that:

• Potential new threats, vulnerabilities, or exploitation techniques which could affect the service are assessed and corrective action is taken
• Relevant sources of information relating to threat, vulnerability, and exploitation technique information are monitored by the service provider
• The severity of threats and vulnerabilities is considered within the context of the service, and this information is used to prioritise implementation of mitigations
• Known vulnerabilities within the service are tracked until suitable mitigations have been deployed through a suitable change management process
• Service provider timescales for implementing mitigations to vulnerabilities found within the service are made available to them

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-5-operational-security

Implementation approach

Amazon Web Services is responsible for protecting the global infrastructure that runs all of the services offered in the AWS cloud. Protecting this infrastructure is AWS's number one priority. AWS Security regularly scans all Internet-facing service endpoint IP addresses for vulnerabilities (these scans do not include customer instances). AWS Security notifies the appropriate parties to remediate any identified vulnerabilities. In addition, external vulnerability threat assessments are performed regularly by independent security firms. Findings and recommendations resulting from these assessments are categorized and delivered to AWS leadership. These scans are done for the health and viability of the underlying AWS infrastructure, and are not meant to replace the customer's own vulnerability scans required to meet their specific compliance requirements. Customers can request permission to conduct scans of their cloud infrastructure, as long as they are limited to the customer's instances and do not violate the AWS Acceptable Use Policy. Advance approval for these types of scans can be initiated by submitting a request via the AWS Vulnerability / Penetration Testing Request Form. In addition, the AWS control environment is subject to regular internal and external risk assessments. AWS engages with external certifying bodies and independent auditors to review and test the AWS overall control environment.

Assurance approach

The vulnerability management subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to vulnerability management are validated independently at least annually under the certification programs.

5.3 Protective monitoring

Effective protective monitoring allows a service provider to detect and respond to attempted and successful attacks, misuse, and malfunction. A service which does not effectively monitor for attacks and misuse will be unlikely to detect attacks (both successful and unsuccessful), and will be unable to quickly respond to potential compromises of consumer environments and data.

Implementation objectives

Consumers should have confidence that:

• Events generated in service components required to support effective identification of suspicious activity are collected and fed into an analysis system
• Effective analysis systems are in place to identify and prioritise indications of potential malicious activity

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-5-operational-security

Implementation approach

Systems within AWS are extensively instrumented to monitor key operational and security metrics. Alarms are configured to automatically notify operations and management personnel when early-warning thresholds are crossed on key metrics. When a threshold is crossed, the AWS incident response process is initiated. The Amazon Incident Response team employs industry-standard diagnostic procedures to drive resolution during business-impacting events. Staff operate 24x7x365 coverage to detect incidents and manage the impact to resolution. AWS security monitoring tools help identify several types of denial-of-service (DoS) attacks, including distributed, flooding, and software/logic attacks. When DoS attacks are identified, the AWS incident response process is initiated. In addition to the DoS prevention tools, redundant telecommunication providers at each Region, as well as additional capacity, protect against the possibility of DoS attacks.

Assurance approach

The protective monitoring subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to protective monitoring are validated independently at least annually under the certification programs.

5.4 Incident management

An incident management process allows a service provider to respond to a wide range of unexpected events that affect the delivery of the service to consumers. Unless carefully pre-planned incident management processes are in place, poor decisions are likely to be made when incidents do occur.

Implementation objectives

Consumers should have confidence that:

• Incident management processes are in place for the service and are enacted in response to security incidents
• Pre-defined processes are in place for responding to common types of incident and attack
• A defined process and contact route exists for reporting of security incidents by consumers and external entities
• Security incidents of relevance to them will be reported to them in acceptable timescales and format

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-5-operational-security

Implementation approach

AWS has implemented a formal, documented incident response policy and program. The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS utilizes a three-phased approach to manage incidents:

1. Activation and Notification Phase: Incidents for AWS begin with the detection of an event. This can come from several sources, including:
a. Metrics and alarms. AWS maintains an exceptional situational awareness capability; most issues are rapidly detected from 24x7x365 monitoring and alarming of real-time metrics and service dashboards. The majority of incidents are detected in this manner. AWS utilizes early-indicator alarms to proactively identify issues that may ultimately impact customers.
b. A trouble ticket entered by an AWS employee.
c. Calls to the 24x7x365 technical support hotline.
If the event meets incident criteria, then the relevant on-call support engineer will start an engagement utilizing the AWS Event Management Tool system and page relevant program resolvers (e.g., the Security team). The resolvers will perform an analysis of the incident to determine if additional resolvers should be engaged, and to determine the approximate root cause.
2. Recovery Phase: The relevant resolvers will perform break fix to address the incident. Once troubleshooting, break fix, and affected components are addressed, the call leader will assign next steps in terms of follow-up documentation and follow-up actions, and end the call engagement.
3. Reconstitution Phase: Once the relevant fix activities are complete, the call leader will declare that the recovery phase is complete. Post mortem and deep root cause analysis of the incident will be assigned to the relevant team. The results of the post mortem will be reviewed by relevant senior management, and relevant actions, such as design changes, will be captured in a Correction of Errors (COE) document and tracked to completion.

In addition to the internal communication mechanisms detailed above, AWS has also implemented various methods of external communication to support its customer base and community. Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the customer experience. A Service Health Dashboard is available and maintained by the customer support team to alert customers to any issues that may be of broad impact.
Assurance approach

The incident management subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to incident management are validated independently at least annually under the certification programs.

Principle 6: Personnel security

Consumers should be content with the level of security screening conducted on service provider staff with access to their information, or with the ability to affect their service.

Implementation objectives

Service provider staff should be subject to personnel security screening and security education for their role. Personnel within a cloud service provider with access to consumer data and systems need to be trustworthy. Service providers need to make clear how they screen and manage personnel within any privileged roles. Personnel in those roles should understand their responsibilities and receive regular security training. More thorough screening, supported by adequate training, reduces the likelihood of accidental or malicious compromise of consumer data by service provider personnel.

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-6-personnel-security

Implementation approach

To ensure you are confident with the level of personnel checks, AWS conducts criminal background checks, as permitted by applicable law, as part of pre-employment screening practices for employees, commensurate with the employee's position and level of access to AWS facilities. As part of the onboarding process, all personnel supporting AWS systems and devices sign a non-disclosure agreement prior to being granted access. Additionally, as part of orientation, personnel are required to read and accept the Acceptable Use Policy and the Amazon Code of Business Conduct and Ethics (Code of Conduct) Policy. AWS maintains employee training programs to promote awareness of AWS information security requirements. Every employee is provided with the Company's Code of Business Conduct and Ethics and completes periodic information security training, which requires an acknowledgement to complete. Compliance audits are periodically performed to validate that employees understand and follow the established policies.

Assurance approach

The personnel security principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to personnel security are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

Principle 7: Secure development

Services should be designed and developed to identify and mitigate threats to their security. Services which are not designed securely may be vulnerable to security issues which could compromise consumer data, cause loss of service, or enable other malicious activity.

Implementation objectives

Consumers should be content with the level of security screening conducted on service provider staff with access to their information, or with the ability to affect their service.

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-7-secure-development

Implementation approach

AWS's development process follows secure software development best practices, which include formal design reviews by the AWS Security team, threat modeling, and completion of a risk assessment. Static code analysis tools are run as a part of the standard build process, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Our security risk assessment reviews begin during the design phase, and the engagement lasts through launch to ongoing operations. In addition, refer to the ISO 27001:2013 standard, Annex A, domain 12.5 for further details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.

Assurance approach

The secure development principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to secure development are validated independently at least annually under the certification programs.

Principle 8: Supply chain security

The service provider should ensure that its supply chain satisfactorily supports all of the security principles that the service claims to implement. Cloud services often rely upon third-party products and services. Those third parties can have an impact on the overall security of the services. If this principle is not implemented, then it is possible that supply chain compromise can undermine the security of the service and affect the implementation of other security principles.

Implementation objectives

The consumer understands and accepts:

• How their information is shared with, or accessible by, third-party suppliers and their supply chains
• How the service provider's procurement processes place security requirements on third-party suppliers and delivery partners
• How the service provider manages security risks from third-party suppliers and delivery partners
• How the service provider manages the conformance of their suppliers with security requirements
• How the service provider verifies that hardware and software used in the service are genuine and have not been tampered with

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-8-supply-chain-security

Implementation approach

In alignment with ISO 27001 standards, AWS hardware assets are assigned an owner, and are tracked and monitored by AWS personnel with AWS proprietary inventory management tools. AWS procurement and supply chain teams maintain relationships with all AWS suppliers. Personnel security requirements for third-party providers supporting AWS systems and devices are established in a Mutual Non-Disclosure Agreement between AWS's parent organization, Amazon.com, and the respective third-party provider. The Amazon Legal Counsel and the AWS Procurement team define AWS third-party provider personnel security requirements in contract agreements with the third-party provider. All persons working with AWS information must, at a minimum, meet the screening process for pre-employment background checks and sign a Non-Disclosure Agreement (NDA) prior to being granted access to AWS information. Refer to ISO 27001 standards, Annex A, domain 7.1 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.

Assurance approach

The supply chain security principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to supply chain security are validated independently at least annually under the certification programs.

Principle 9: Secure consumer management

Consumers should be provided with the tools required to help them securely manage their services. Management interfaces and procedures are a vital security barrier in preventing unauthorised people accessing and altering consumers' resources, applications, and data.

9.1 Authentication of consumers to management interfaces and within support channels

In order to maintain a secure service, consumers need to be securely authenticated before being allowed to perform management activities, report faults, or request changes to the service. These activities may be conducted through a service management web portal or through other support channels (such as telephone or email), and are likely to facilitate functions such as provisioning new service elements, managing user accounts, and managing consumer data. It is important that service providers ensure any management requests which could have a security impact are performed over secure and authenticated channels. If consumers are not strongly authenticated, then an attacker posing as them could perform privileged actions, undermining the security of their service or data.

Implementation objectives

The consumer:

• Has sufficient confidence that only authorised individuals from the consumer organisation are able to authenticate to and access management interfaces for the service (Principle 10 should be used to assess the risks of different approaches to meet this objective)
• Has sufficient confidence that only authorised individuals from the consumer organisation are able to perform actions affecting the consumer's service through support channels

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-9-secure-consumer-management

Implementation approach

AWS Identity and Access Management (IAM) provides you with controls and features to provide confidence that authenticated and authorised users have access to specified services and interfaces. AWS IAM allows you to create multiple users and manage the permissions for each of these users within your AWS account. A user is an identity (within an AWS account) with unique security credentials that can be used to access AWS services. AWS IAM eliminates the need to share passwords or keys, and makes it easy to enable or disable a user's access as appropriate. AWS IAM enables you to implement security best practices, such as least privilege, by granting unique credentials to every user within your AWS account and only granting permission to access the AWS services and resources required for the users to perform their jobs. AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted.

AWS IAM is also integrated with the AWS Marketplace, so that you can control who in your organization can subscribe to the software and services offered in the Marketplace. Since subscribing to certain software in the Marketplace launches an EC2 instance to run the software, this is an important access control feature. Using AWS IAM to control access to the AWS Marketplace also enables AWS account owners to have fine-grained control over usage and software costs.

AWS IAM enables you to minimize the use of your AWS account credentials. Once you create AWS IAM user accounts, all interactions with AWS services and resources should occur with AWS IAM user security credentials. More information about AWS IAM is available on the AWS website: http://aws.amazon.com/iam/

Delegate API Access to AWS Services Using IAM Roles

AWS supports a very important and powerful use case with AWS Identity and Access Management (IAM) roles, in combination with IAM users, to enable cross-account API access or delegate API access within an account. This functionality gives better control and simplifies access management when managing services and resources across multiple AWS accounts. You can enable cross-account API access, or delegate API access within an account or across multiple accounts, without having to share long-term security credentials. When you assume an IAM role, you get a set of temporary security credentials that have the permissions associated with the role. You use these temporary security credentials instead of your long-term security credentials in calls to AWS services. Users interact with the service with the permissions granted to the IAM role assumed. This reduces the potential attack surface area, because fewer user credentials have to be created and managed, and users don't have to remember multiple passwords.
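As a hedged sketch of this pattern, the following Python fragment uses the AWS Security Token Service (STS) to assume a role and then calls an AWS service with the temporary credentials that come back. The role ARN, session name, and account number are hypothetical placeholders.

import boto3

sts = boto3.client('sts')

# Assume the role; the response contains short-lived credentials, not long-term keys.
resp = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/ExampleAuditor',   # hypothetical role
    RoleSessionName='audit-session',
    DurationSeconds=3600
)
creds = resp['Credentials']

# Call a service using only the temporary credentials granted to the role.
ec2 = boto3.client(
    'ec2',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)
print(ec2.describe_instances()['Reservations'])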
Assurance approach

The secure consumer management subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to secure consumer management are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

9.2 Separation and access control within management interfaces

Many cloud services are managed via web applications or APIs. These interfaces are a key part of the service's security. If consumers are not adequately separated within management interfaces, then one consumer may be able to affect the service or modify data belonging to another.

Implementation objectives

The consumer:

• Has sufficient confidence that other consumers cannot access, modify, or otherwise affect their service management
• Can manage the risks of their own privileged access, e.g., through the 'principle of least privilege', providing the ability to constrain permissions given to consumer administrators
• Understands how management interfaces are protected (see Principle 11) and what functionality is available via those interfaces

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-9-secure-consumer-management

Implementation approach

API calls to launch and terminate instances, change firewall parameters, and perform other functions are all signed by your Amazon Secret Access Key, which could be either the AWS account's Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to your Secret Access Key, Amazon EC2 API calls cannot be made on your behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. Amazon recommends always using SSL-protected API endpoints. AWS IAM also enables you to further control what APIs a user has permissions to call to manage a specific resource.
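To illustrate constraining a consumer administrator to least privilege, the hedged boto3 sketch below attaches an inline policy that lets a user call only read-only RDS describe APIs; the user name, policy name, and action list are hypothetical placeholders chosen for the example.

import json
import boto3

iam = boto3.client('iam')

# Inline policy allowing only read-only RDS describe calls; all other APIs
# remain implicitly denied for this user.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["rds:Describe*"],
        "Resource": "*"
    }]
}

iam.put_user_policy(
    UserName='example-operator',    # hypothetical user
    PolicyName='rds-read-only',     # hypothetical policy name
    PolicyDocument=json.dumps(policy)
)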
create a typical user you give that user permissions to access all of the resources needed to do the job even the most sensitive and rarely accessed resources Ideally a user shouldn’t have any access to the sensitive and critical resources until actually needed to keep to the security principle of “least access” The ability to delegate permissions to a role and allow a user to switch to the role solves this dilemma Grant the user only those permissions that allow access to the normal day today managed resources and not to the sensitive resour ces Instead gr ant to a role the permissions to access sensitive resources The user can switch to the role when needing to use those resources and then switch right back to their user account This feature helps reduce the attack surface area Assurance approach The separation and access control within management interfaces subprinciple and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications are recognised by ENISA under the Cloud Certification Schemes The controls in relation to separation and access control within management interfaces are validated independently at least annually under the certification programs Based on the alternatives provided for selection within Cloud Security Principles guidance AWS uses Service Provider Assertion in respect of region specific requirements This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 36 of 47 Identity and authenticatio n Consumer and service provider access to all service interfaces should be constrained to authenticated and authorised individuals All cloud services will have some requirement to identify and authenticate users wishing to access service interfaces Weak authentication or access control may allow unauthorised changes to a consumer’s service theft or modification of data or denial of service Implementation objectives Consumers should have sufficient confidence that identity and authentication controls ensure users are authorised to access specific interfaces https://wwwgovuk/government/publications/i mplementing thecloud secur ity principles/implementing thecloud security principles#principle 10identity and authentication Principle 10: Identity and authentication Implementation approach AWS provides a number of ways for you to identify users and securely access your AWS Account A complete list of credentials supported by AWS can be found on the Security Credentials page under ‘Your Account’ AWS also provides additional security options that enable you to further protect your AWS Account and control access: AWS Identity and Access Management (AWS IAM) key management and rotation temporary security credentials and multi factor authentication (MFA) AWS IAM enables you to minimize the use of your AWS Account credentials Once you create AWS IAM user accounts all interactions with AWS Services and resources should occur with AWS IAM user security credentials More information about AWS IAM is available on the AWS website: http://awsamazoncom/iam/ Host Operating System: Administrators with a business need to access the management plane are required to use multi factor authentication to gain access to purpose built administration hosts These administrative hosts are systems that are 
specifically designed built configured and hardened to protect the management plane of the cloud All such access is logged and audited When an employee no longer has a business need to access the management plane the privileges and access to these hosts and relevant systems can be revoked Guest Operating S ystem: Virtual instances are completely controlled by you the customer You have full root access or administrative control over accounts services and applications AWS does not h ave any access rights to your instances or the guest OS AWS recommends a base set of security best practices to include disabling password only access to your guests and utilizing some form of multi factor authentication to gain access to your instances (or at a minimum certificate based SSH Version 2 access) Additionally you should employ a privilege escalation mechanism with logging on a per user basis For example if the guest OS is Linux This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC UK’s Cloud Security Principles October 2016 Page 37 of 47 after hardening your instance you should utilize certifi cate based SSHv2 to access the virtual instance disable remote root login use command line logging and use ‘sudo ’ for privilege escalation You should generate your own key pairs in order to guarantee that they are unique and not shared with other customers or with AWS AWS also supports the use of the Secure Shell (SSH) network protocol to enable you to log in securely to the EC2 instances Authentication for SSH used with AWS is via a public/private key pair to reduce the risk of unauthorized access to your instance You can also connect remotely to your Windows instances using Remote Desktop Protocol (RDP) by utilizing an RDP certificate generated for your instance AWS IAM enables you to implement security best practices such as least privilege by granting unique credentials to every user within your AWS Account and only granting permission to access the AWS services and resources required for the users to perform their jobs AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted AWS IAM is also integrated with the AWS Marketplace so that you can control who in your organization can subscribe to the software and services offered in the Marketplace Since subscribing to certain software in the Marketplace launches an EC2 instance to run the software this is an important access control feature Using AWS IAM to control access to the AWS Marketplace also enables AWS Account owners to have finegrained control over usage and software costs Assurance approach The identity and authentication principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013 AICPA SOC 1 SOC 2 SOC 3 and PCIDSS certification programs These certifications are recognised by ENISA under the Cloud Certification Schemes The controls in relation to identity and authentication are validated independently at least annually under the certification programs Based on the alternatives provided for selection within Cloud Security Principles guidance AWS uses Service Provider Assertion in respect of region specific requirements This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Using AWS in the context of NCSC 
Principle 11: External interface protection

All external or less trusted interfaces of the service should be identified and have appropriate protections to defend against attacks through them. If an interface is exposed to consumers or outsiders and it is not sufficiently robust, then it could be subverted by attackers in order to gain access to the service or data within it. If the interfaces exposed include private interfaces (such as management interfaces), then the impact may be more significant. Consumers can use different models to connect to cloud services, which expose their enterprise systems to varying levels of risk.

Implementation objectives
• The consumer understands how to safely connect to the service whilst minimising risk to the consumer's systems.
• The consumer understands what physical and logical interfaces their information is available from.
• The consumer has sufficient confidence that protections are in place to control access to their data.
• The consumer has sufficient confidence that the service can determine the identity of connecting users and services to an appropriate level for the data or function being accessed.

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-11-external-interface-protection

Implementation approach

Helping to protect the confidentiality, integrity, and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence. The AWS network has been architected to permit you to select the level of security and resiliency appropriate for your workload. To enable you to build geographically dispersed, fault-tolerant web architectures with cloud resources, AWS has implemented a world-class network infrastructure that is carefully monitored and managed.

Secure Network Architecture. Network devices, including firewall and other boundary devices, are in place to monitor and control communications at the external boundary of the network and at key internal boundaries within the network. These boundary devices employ rule sets, access control lists (ACLs), and configurations to enforce the flow of information to specific information system services. ACLs, or traffic flow policies, are established on each managed interface, which manage and enforce the flow of traffic. ACL policies are approved by Amazon Information Security. These policies are automatically pushed using AWS's ACL-Manage tool to help ensure these managed interfaces enforce the most up-to-date ACLs.

Secure Access Points. AWS has strategically placed a limited number of access points to the cloud to allow for more comprehensive monitoring of inbound and outbound communications and network traffic. These customer access points are called API endpoints, and they allow secure HTTP access (HTTPS), which allows you to establish a secure communication session with your storage or compute instances within AWS. In addition, AWS has implemented network devices that are dedicated to managing interfacing communications with Internet service providers (ISPs). AWS employs a redundant connection to more than one communication service at each Internet-facing edge of the AWS network.
These connections each have dedicated network devices.

Transmission Protection. You can connect to an AWS access point via HTTP or HTTPS using Secure Sockets Layer (SSL), a cryptographic protocol that is designed to protect against eavesdropping, tampering, and message forgery. For customers who require additional layers of network security, AWS offers the Amazon Virtual Private Cloud (VPC), which provides a private subnet within the AWS cloud and the ability to use an IPsec Virtual Private Network (VPN) device to provide an encrypted tunnel between the Amazon VPC and your data center.

Network Monitoring and Protection. AWS utilizes a wide variety of automated monitoring systems to provide a high level of service performance and availability. AWS monitoring tools are designed to detect unusual or unauthorized activities and conditions at ingress and egress communication points. These tools monitor server and network usage, port scanning activities, application usage, and unauthorized intrusion attempts. The tools have the ability to set custom performance metrics thresholds for unusual activity.

Systems within AWS are extensively instrumented to monitor key operational metrics. Alarms are configured to automatically notify operations and management personnel when early warning thresholds are crossed on key operational metrics. An on-call schedule is used so personnel are always available to respond to operational issues. This includes a pager system so alarms are quickly and reliably communicated to operations personnel. Documentation is maintained to aid and inform operations personnel in handling incidents or issues. If the resolution of an issue requires collaboration, a conferencing system is used which supports communication and logging capabilities. Trained call leaders facilitate communication and progress during the handling of operational issues that require collaboration. Post-mortems are convened after any significant operational issue, regardless of external impact, and Cause of Error (COE) documents are drafted so the root cause is captured and preventative actions are taken in the future. Implementation of the preventative measures is tracked during weekly operations meetings.

AWS security monitoring tools help identify several types of denial of service (DoS) attacks, including distributed, flooding, and software/logic attacks. When DoS attacks are identified, the AWS incident response process is initiated. In addition to the DoS prevention tools, redundant telecommunication providers at each region, as well as additional capacity, protect against the possibility of DoS attacks.

Assurance approach

The external interface protection principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to external interface protection are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.
Principle 12: Secure service administration

The methods used by the service provider's administrators to manage the operational service should be designed to mitigate any risk of exploitation that could undermine the security of the service. The security of a cloud service is closely tied to the security of the service provider's administration systems. Access to service administration systems gives an attacker high levels of privilege and the ability to affect the security of the service. Therefore, the design, implementation, and management of administration systems should reflect their higher value to an attacker.

Implementation objectives
Consumers have sufficient confidence that the technical approach the service provider uses to manage the service does not put their data or service at risk.

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-12-secure-service-administration

Implementation approach

User Access. Procedures exist so that Amazon employee and contractor user accounts are added, modified, or disabled in a timely manner and are reviewed on a periodic basis. In addition, password complexity settings for user authentication to AWS systems are managed in compliance with Amazon's Corporate Password Policy.

Account Provisioning. The responsibility for provisioning employee and contractor access is shared across Human Resources (HR), Corporate Operations, and Service Owners. A standard employee or contractor account with minimum privileges is provisioned in a disabled state when a hiring manager submits his or her new employee or contractor onboarding request in Amazon's HR system. The account is automatically enabled when the employee's record is activated in Amazon's HR system. First-time passwords are set to a unique value and are required to be changed on first use. Access to other resources, including Services, Hosts, Network devices, and Windows and UNIX groups, is explicitly approved in Amazon's proprietary permission management system by the appropriate owner or manager. Requests for changes in access are captured in the Amazon permissions management tool audit log. When changes in an employee's job function occur, continued access must be explicitly approved to the resource or it will be automatically revoked.

Periodic Account Review. Accounts are reviewed every 90 days; explicit re-approval is required or access to the resource is automatically revoked.

Access Removal. Access is automatically revoked when an employee's record is terminated in Amazon's HR system. Windows and UNIX accounts are disabled, and Amazon's permission management system removes the user from all systems.

Password Policy. Access and administration of logical security for Amazon relies on user IDs, passwords, and Kerberos to authenticate users to services, resources, and devices, as well as to authorize the appropriate level of access for the user. AWS Security has established a password policy with required configurations and expiration intervals. Administrators with a business need to access the management plane are required to use multi-factor authentication to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management
plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked.

Assurance approach

The secure service administration principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to secure service administration are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.

Principle 13: Audit information provision to consumers

Consumers should be provided with the audit records they need to monitor access to their service and the data held within it. The type of audit information available to consumers will have a direct impact on their ability to detect and respond to inappropriate or malicious usage of their service or data within reasonable timescales.

Implementation objectives
Consumers are:
• Aware of the audit information that will be provided to them, how and when it will be made available to them, the format of the data, and the retention period associated with it.
• Confident that the audit information available will allow them to meet their needs for investigating misuse or incidents.

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-13-audit-information-provision-to-consumers

Implementation approach

AWS CloudTrail is a service that provides audit records for AWS customers and delivers audit information in the form of log files to a specified storage bucket. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail provides a history of AWS API calls for customer accounts, including API calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. The log file objects written to S3 are granted full control to the bucket owner; the bucket owner thus has full control over whether to share the logs with anyone else. This feature enables AWS customers and provides confidence to meet their needs for investigating service misuse or incidents. More details on AWS CloudTrail and further information on audit records can be requested at http://aws.amazon.com/cloudtrail. The latest version of the CloudTrail User Guide is available at http://awsdocs.s3.amazonaws.com/awscloudtrail/latest/awscloudtrail-ug.pdf.

Assurance approach

The audit information provision to consumers principle and related processes within AWS services are subject to audit at least annually under ISO 27001:2013, AICPA SOC 1, SOC 2, SOC 3, and PCI DSS certification programs. These certifications are recognised by ENISA under the Cloud Certification Schemes. The controls in relation to audit information provision to consumers are validated independently at least annually under the certification programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, AWS uses Service Provider Assertion in respect of region-specific requirements.
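As a concrete illustration of the audit capability described under this principle, the following sketch (AWS SDK for Python, boto3) creates a trail that delivers CloudTrail log files to an existing S3 bucket and then starts logging. The bucket and trail names are illustrative assumptions; the bucket must already carry a bucket policy that permits CloudTrail delivery.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Create a trail that records API activity in all Regions and
    # delivers the log files to a bucket owned by the consumer.
    cloudtrail.create_trail(
        Name="org-audit-trail",                # hypothetical trail name
        S3BucketName="my-cloudtrail-logs",     # hypothetical, pre-existing bucket
        IsMultiRegionTrail=True,
        IncludeGlobalServiceEvents=True,
    )

    # Recording does not begin until the trail is started.
    cloudtrail.start_logging(Name="org-audit-trail")

Because the log objects are written to a bucket the consumer owns, the consumer controls retention and any onward sharing of the audit data, as described above.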
Principle 14: Secure use of the service by the consumer

Consumers have certain responsibilities when using a cloud service in order for their use of it to remain secure and for their data to be adequately protected. The security of cloud services, and the data held within them, can be undermined by poor use of the service by consumers. The extent of the responsibility on the consumer for secure use of the service will vary depending on the deployment models of the cloud service, specific features of an individual service, and the scenario in which the consumers intend to use the service. IaaS and PaaS offerings are likely to require the consumer to be responsible for significant aspects of the security of their service.

Implementation objectives
• The consumer understands any service configuration options available to them and the security implications of choices they make.
• The consumer understands the security requirements on their processes, uses, and infrastructure related to the use of the service.
• The consumer can educate those administrating and using the service in how to use it safely and securely.

https://www.gov.uk/government/publications/implementing-the-cloud-security-principles/implementing-the-cloud-security-principles#principle-14-secure-use-of-the-service-by-the-consumer

Implementation approach

AWS has implemented various methods of external communication to support you, the wider customer base, and the community. AWS has published a public Acceptable Use Policy that provides guidance and informs consumers on acceptable use of AWS services. This policy includes guidance on illegal, harmful, or offensive content, security violations, network abuse, and email or message abuse, with information on monitoring and enforcement of the policy. Additionally, guidance is provided on reporting violations of the Acceptable Use Policy.

Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the customer experience. A Service Health Dashboard is available and maintained by the customer support team to alert customers to any issues that may be of broad impact. The AWS Security Center is available to provide you with security and compliance details about AWS. Customers can also subscribe to AWS Support offerings that include direct communication with the customer support team and proactive alerts to any customer-impacting issues.

Using the Trusted Advisor Tool. Some AWS Support plans include access to the Trusted Advisor tool, which offers a one-view snapshot of your service and helps identify common security misconfigurations, suggestions for improving system performance, and underutilized resources. Trusted Advisor checks for compliance with the following security recommendations:

• Limited access to common administrative ports to only a small subset of addresses. This includes ports 22 (SSH), 23 (Telnet), 3389 (RDP), and 5500 (VNC).
• Limited access to common database ports. This includes ports 1433 (MS SQL Server), 1434 (MS SQL Monitor), 3306 (MySQL), 1521 (Oracle), and 5432 (PostgreSQL).
• IAM is configured to help ensure secure access control of AWS resources.
• Multi-factor authentication (MFA) token is enabled to provide two-factor authentication for the root AWS account.
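The Trusted Advisor API is generally available only on the higher AWS Support plans, but a consumer can perform a similar port audit directly against the EC2 APIs. The following sketch (AWS SDK for Python, boto3) flags security group rules that expose the administrative or database ports listed above to the whole internet; it checks only the first page of results and is an assumption-laden illustration rather than a replacement for Trusted Advisor.

    import boto3

    ADMIN_PORTS = {22, 23, 3389, 5500}           # SSH, Telnet, RDP, VNC
    DB_PORTS = {1433, 1434, 3306, 1521, 5432}    # common database ports

    ec2 = boto3.client("ec2")

    # Flag security group rules that open admin or database ports to 0.0.0.0/0.
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            from_port = rule.get("FromPort")
            if from_port not in ADMIN_PORTS | DB_PORTS:
                continue
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if open_to_world:
                print(f"{group['GroupId']}: port {from_port} is open to 0.0.0.0/0")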
Assurance approach

The secure use of the service by the consumer principle and related processes are not validated independently within AWS compliance programs. Based on the alternatives provided for selection within Cloud Security Principles guidance, the controls in relation to secure use of the service by the consumer do not exist within the existing certification programs for them to be validated independently. AWS publishes guidance on configuration options and the relative impacts on security regularly through various communication channels, such as local summit sessions, webinars, blogs, and training and guidance documents. AWS uses Service Provider Assertion in respect of region-specific requirements.

Conclusion

The AWS cloud platform provides a number of important benefits to UK public sector organisations and enables you to meet the objectives of the fourteen Cloud Security Principles. While AWS delivers these benefits and advantages through our services and features, the individual public sector organisations are ultimately responsible for risk management decisions relating to the use of secure cloud services for OFFICIAL information. Using the information presented in this whitepaper, we encourage you to use AWS services for your organisations to manage security and the related risks appropriately.

For AWS, security is always our top priority. We deliver services to hundreds of thousands of businesses, including enterprises, educational institutions, and government agencies in over 190 countries. Our customers include government agencies, financial services, and healthcare providers who leverage the benefits of AWS while retaining control and responsibility for their data, including some of their most sensitive information. AWS services are designed to give customers flexibility over how they configure and deploy their solutions, as well as control over their content, including where it is stored, how it is stored, who has access to it, and the security configuration environment. AWS customers can build their own secure applications and store content securely on AWS.

Additional Resources

To help customers further understand how they can address their privacy and data protection requirements, customers are encouraged to read the risk, compliance, and security whitepapers, best practices, checklists, and guidance published on the AWS website. This material can be found at:
• AWS Compliance: http://aws.amazon.com/compliance
• AWS Security Center: http://aws.amazon.com/security

AWS also offers training to help customers learn how to design, develop, and operate available, efficient, and secure applications on the AWS cloud and gain proficiency with AWS services and solutions.
We offer free instructional videos, self-paced labs, and instructor-led classes. Further information on AWS training is available at http://aws.amazon.com/training/.

AWS certifications certify the technical skills and knowledge associated with best practices for building secure and reliable cloud-based applications using AWS technology. Further information on AWS certifications is available at http://aws.amazon.com/certification/.

If further information is required, please contact AWS at https://aws.amazon.com/contact-us/ or contact the local AWS account representative.

Appendix – AWS Platform Benefits

When designing and implementing large cloud-based applications, it's important to consider how infrastructure will be managed to ensure the cost and complexity of running such systems is minimized. When organisations first begin using the AWS platform, it is easy to manage EC2 instances just like regular virtualised servers running in a data center. However, as the architecture evolves and changes are made over time, the instances will inevitably begin to diverge from their original specification, which can lead to inconsistencies with other instances in the same environment. This divergence from a known baseline can become a huge challenge when managing large fleets of instances across multiple environments. Ultimately, it will lead to service issues because these environments will become less predictable and more difficult to maintain.

The AWS platform provides a rich and diverse set of tools to address this challenge with a different approach. By using the AWS platform and features, public sector organisations can specify and manage the desired end state of the infrastructure independently of the instances and other running components. When technology teams start to think of infrastructure as being defined independently of the running instances and other components in the environments, they can take greater advantage of the benefits of dynamic cloud environments:

Software-defined infrastructure – By defining infrastructure using a set of software artifacts, many of the tools and techniques that are used when developing software components can be leveraged. This includes managing the evolution of infrastructure in a version control system, as well as using continuous integration (CI) processes to continually test and validate infrastructure changes before deploying them to production.

Auto Scaling and self-healing – If new instances are provisioned automatically from a consistent specification, Auto Scaling groups can be used to manage the number of instances in an EC2 fleet. For example, a condition to add new EC2 instances in increments can be set on the Auto Scaling group when the average utilization of the EC2 fleet is high (see the sketch at the end of this appendix). Auto Scaling can also be used to detect impaired EC2 instances and unhealthy applications and replace the instances without intervention.

Fast environment provisioning – Consistent environments can be provisioned quickly and easily, which opens up new ways of working within teams. For example, a new environment can be provisioned to allow testers to validate a new version of an application in their own personal test environments that are isolated from other changes.

Reduce costs – Now that environments can be provisioned quickly, the
option is always there to remove them when they are no longer needed. This reduces costs because customers are charged only for the resources that are used.

Blue-green deployments – Application teams can deploy new versions of an application by provisioning new instances (containing a new version of the code) beside the existing infrastructure. Traffic can be switched between environments in an approach known as blue-green deployments. This has many benefits over traditional deployment strategies, including the ability to quickly and easily roll back a deployment in the event of an issue.

In addition to the implementation and assurance approaches detailed in this whitepaper for each Cloud Security Principle, public sector organisations adopting cloud technologies should take into consideration the additional benefits of the AWS platform within their risk assessment and management frameworks. Whilst a secure and compliant public cloud environment is necessary for handling government OFFICIAL information, the AWS platform and security features that scale and enable resilience to change are equally important to consider.
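The Auto Scaling behaviour described in this appendix can be expressed in a few API calls. The following sketch (AWS SDK for Python, boto3) attaches a target-tracking scaling policy to an existing Auto Scaling group so that instances are added when the average CPU utilization of the fleet runs high; the group name and target value are illustrative assumptions only.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep average CPU utilization of the fleet near 60%; Auto Scaling adds
    # or removes instances (within the group's min/max) to track this target.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-fleet",            # hypothetical, existing group
        PolicyName="track-average-cpu",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )

Combined with health checks on the group, the same mechanism replaces impaired instances without operator intervention.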
Migrating Oracle Database Workloads to Oracle Linux on AWS
Migrating Oracle Database Workloads to Oracle Linux on AWS

Guide

January 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Overview
Amazon RDS
Oracle Linux AMI on AWS
Support and Updates
Lift and Shift to AWS
Migration Path Matrix
Migration Paths
Red Hat Linux to Oracle Linux
SUSE Linux to Oracle Linux
Microsoft Windows to Oracle Linux
Migration Methods
Amazon EBS Snapshot
Oracle Data Guard
Oracle RMAN Transportable Database
Oracle RMAN Cross Platform Transportable Database
Oracle Data Pump Export/Import Utilities
AWS Database Migration Service
Other Database Migration Methods
Enterprise Application Considerations
SAP Applications
Oracle E-Business Suite
Oracle Fusion Middleware
Conclusion
Contributors
Document Revisions

About this Guide

Oracle databases can run on different operating systems (OS) in on-premises data centers, such as Solaris (SPARC), IBM AIX, and HP-UX. Amazon Web Services (AWS) supports Oracle Linux 6.4 and higher for Oracle databases. This guide highlights the migration paths available between different operating systems to Oracle Linux on AWS. These migration paths are applicable for migrations from any source: on-premises, AWS, or other public cloud environments.

Overview

Oracle workloads benefit tremendously from many features of the AWS Cloud, such as scriptable infrastructure, instant provisioning and de-provisioning, scalability, elasticity, usage-based billing, managed database services, and the ability to support a wide variety of operating systems (OSs). When migrating your workloads, choosing which operating system to run them on is a crucial decision. We highly recommend that you choose an Oracle-supported operating system to run Oracle software on AWS. You can use the following Oracle-supported operating systems on AWS:

• Oracle Linux
• Red Hat Enterprise Linux
• SUSE Linux Enterprise Server
• Microsoft Windows Server

Specific Oracle-supported operating systems can be used for specific database, middleware, and
application workloads. For example, SAP workloads on AWS require that Oracle Database be run on Oracle Linux 6.4 or higher.

You have many methods for migrating your Oracle databases to Oracle Linux on AWS. This guide documents the different migration paths available for the various source operating systems. It covers migrations from any source: on-premises, AWS, or other public cloud environments. Each migration path offers distinct advantages in terms of downtime and human effort. You can choose the best migration path for your business based on your specific needs.

Amazon RDS

For most workloads, a managed database service is the preferred method. Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security, and compatibility they need. Amazon RDS is available on several database instance types, optimized for memory, performance, or I/O. In addition, Amazon RDS provides you with six familiar database engines to choose from, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server. You can use the AWS Database Migration Service (AWS DMS) to easily migrate or replicate your existing databases to Amazon RDS.

Amazon RDS for Oracle supports Oracle Database Enterprise Edition, Standard Edition, Standard Edition 1, and Standard Edition 2. Amazon RDS for Oracle Standard Editions support both Bring Your Own License (BYOL) and License Included (LI). If you are exploring other database platforms, Amazon RDS offers you a choice of database engines, and tools such as AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) make the migration process easier.

Oracle Linux AMI on AWS

If you choose not to use a managed database and instead manage the Oracle database yourself, you can deploy it on Amazon Elastic Compute Cloud (Amazon EC2). Oracle Linux EC2 instances can be launched using an Amazon Machine Image (AMI) available in the AWS Marketplace or as a Community AMI. You can also bring your own Oracle Linux AMI or existing Oracle Linux license to AWS. In that case, your technology stack is similar to the one used by Amazon RDS for Oracle, which also runs on Linux-based operating systems. Use migration tools such as Oracle Data Pump Export/Import or AWS DMS; these tools take care of migration from different OS platforms to EC2 and/or RDS for Oracle.

The AWS Marketplace listing for Oracle Linux is through third-party vendors. You will find a list of Community AMIs and Public AMIs by searching for the term "OL6" or "OL7". Public AMI listings are available in the EC2 section of the AWS Management Console under Images, then AMIs. Two types of AMIs are available for the same release version:

• Hardware Virtual Machine (HVM)
• Paravirtual Machine (PVM)

HVM is an approach that uses virtualization features of the CPU chipset. If a virtual machine runs in HVM mode, the kernel of the OS may run unmodified. PVM does not use virtualization features of the CPU chipset; PVM uses a modified kernel to achieve virtualization.
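The console search for public Oracle Linux AMIs described above can also be scripted. The following sketch, written with the AWS SDK for Python (boto3), lists public images whose names begin with "OL7"; the name pattern and Region are illustrative assumptions, and any community AMI returned should be vetted, as discussed in the next section, before you launch from it.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # example Region

    # List publicly shared AMIs whose name starts with "OL7".
    response = ec2.describe_images(
        Filters=[
            {"Name": "name", "Values": ["OL7*"]},
            {"Name": "is-public", "Values": ["true"]},
        ]
    )

    for image in response["Images"]:
        print(image["ImageId"], image["Name"], image["OwnerId"])

The same call with a pattern of "OL6*" lists the Oracle Linux 6 images.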
AWS supports both HVM and PVM AMIs. The Unbreakable Enterprise Kernel for Oracle Linux natively includes PV drivers. SAP has specific recommendations of HVM-virtualized AMIs for SAP installations. The Oracle Linux AMIs published by Oracle are available in the list of Community AMIs in the AWS Marketplace. Community AMIs do not have any official support. Refer to the following table for some of the AMI listings:

Table 1: Community AMIs (Version: AMI)
• Oracle Linux 7.3 HVM: OL7.3 x86_64 HVM
• Oracle Linux 7.3 PVM: OL7.3 x86_64 PVM
• Oracle Linux 7.2 HVM: OL7.2 x86_64 HVM
• Oracle Linux 7.2 PVM: OL7.2 x86_64 PVM
• Oracle Linux 6.7 HVM: OL6.7 x86_64 HVM
• Oracle Linux 6.7 PVM: OL6.7 x86_64 PVM

Anyone can upload and share an AMI, so use caution when selecting an AMI. Reach out to AWS Business Support or your vendor support for assistance. In addition to using an existing AMI, you can import your own virtual machine images as AMIs in AWS. Refer to the VM Import/Export page for more details. This option is highly useful when you have heavily customized virtual machine images available in other cloud environments or your own data center.

Support and Updates

Oracle offers Basic, Basic Limited, Premier, and Premier Limited commercial support for Oracle Linux EC2 instances. Refer to Oracle's cloud license document for the instance requirements. The following table shows the level of support available for the various AMI options.

Table 2: Support levels (Option: Support level)
• AWS Marketplace: Basic Support and Basic Limited
• BYOL (Bring Your Own License): Basic, Basic Limited (up to 8 virtual cores), Premier, Premier Limited (up to 8 virtual cores)
• Community AMI: No commercial support

If you have an Oracle Linux support contract, you can register your EC2 instance using the uln_register command on your EC2 instance. This command requires you to have access to an Oracle Linux CSI number. Review the Oracle Linux Unbreakable Linux Network (ULN) user guide for the steps for ULN channel subscription and how to register your Oracle Linux instance.

Oracle Linux instances require internet access to the public yum repository or Oracle ULN in order to download packages. All Oracle Linux AMIs can access the public yum repository; only licensed Oracle Linux systems can access the Oracle ULN repository. If the EC2 instance is on a private subnet, use a proxy server or a local yum repository to download packages. Oracle Linux systems (OL6 or higher) work with the Spacewalk system for yum package management. A Spacewalk system can be in a public subnet while the Oracle Linux systems are in a private subnet.

The following sections detail the migration path methods available for Oracle databases. These migration methods are available for Oracle 10g, 11g, 12c, and 18c. For other Oracle products, see the respective product support notes in Oracle's My Oracle Support portal.

Lift and Shift to AWS

Existing Oracle workloads can be migrated from an existing on-premises or virtualized environment to Amazon EC2 with no changes required (lift and shift) using CloudEndure Migration. CloudEndure Migration executes a highly automated machine conversion and orchestration process, allowing even the most complex applications and databases to run natively in AWS without compatibility issues. CloudEndure Migration uses a continuous block-level replication process. Servers are replicated to a staging area temporarily until you are ready to cut over to your desired instance target.

CloudEndure Migration replicates your existing server infrastructure via its client software as a background process, without application disruption or performance impact. Once replication is complete, CloudEndure Migration allows you to cut over your servers to the instance family and type of your choice via customized blueprints. Using your blueprint, you can test your deployment before committing to an instance family and type. CloudEndure Migration supports Oracle Linux, Red Hat Linux, Windows Server, and SUSE Linux. For detailed version compatibility information, see Supported Operating Systems. CloudEndure Migration is provided at no cost for migrations into AWS.

Migration Path Matrix

A migration path matrix assumes that only the operating systems change and other software versions remain the same. We recommend that you change other components, such as the Oracle database version or Oracle database patching, separately to avoid complexity. The database version and any other application version in both source and target EC2 instances should remain the same to prevent deviations in the migration path. There are also vendor data replication and migration tools available that can support platform migration. See the Migration Methods section for the list of methods.

Table 3: Migration methods (Source database operating system: Migration methods)
• Red Hat Linux: Amazon EBS snapshot, Oracle Data Guard
• SUSE Linux: Amazon EBS snapshot, Oracle Data Guard
• Microsoft Windows: Oracle Data Guard 11g, RMAN Transportable Tablespace
• HP-UX, Solaris (SPARC): RMAN Cross-platform Transportable Tablespace

Migration Paths

This section presents three paths for migrating to Oracle Linux on AWS.

Red Hat Linux to Oracle Linux

Oracle Linux and Red Hat Linux are compatible operating systems. When migrating from Red Hat Linux to Oracle Linux, migrate to the same version level, for example, Red Hat Linux 6.4 to Oracle Linux 6.4 or Red Hat Linux 7.2 to Oracle Linux 7.2. Also ensure that both operating systems are patched to the same level. You can migrate Red Hat Linux to Oracle Linux using either of these methods:

• Amazon Elastic Block Store (Amazon EBS) snapshot
• Oracle Data Guard

An EBS snapshot is a faster migration method than Oracle Data Guard for non-Oracle Automatic Storage Management (ASM) databases. If your databases use Oracle ASM, then Oracle Data Guard is a better choice. Other standard methods, such as the Oracle Recovery Manager (RMAN) and the Oracle Export and Import utilities, can work across operating systems. However, these methods require a larger downtime and a greater amount of human effort. Choose the Export and Import utilities method if your specific use case requires it. See the Migration Methods section for details on each migration method.

SUSE Linux to Oracle Linux

SUSE Linux Enterprise Server
(SLES) is an enterprise-grade Linux offering from SUSE. Oracle Linux and SUSE Linux are binary compatible; that is, you can move an executable directly from SUSE Linux to Oracle Linux and it will work, provided it matches the same C compiler and bit architecture (32-bit or 64-bit). SLES follows a different versioning scheme than Oracle Linux, so there is no easy way to match similar operating system versions. Additionally, the Linux kernel version, gcc versions, and bit architecture must match. Contact SLES Technical Support to find which Oracle Linux version is compatible with the SLES operating system. SLES can also be migrated using EBS snapshots and Oracle Data Guard, just as you can do with Red Hat Linux. Again, these methods have less downtime and require less human effort than Oracle RMAN or Oracle Export/Import.

An EBS snapshot is a much quicker and simpler method than Oracle Data Guard. Whichever method you select, we recommend that you don't copy the binaries from SLES but rather perform a fresh Oracle home installation on your Oracle Linux EC2 instance. The reason for this recommendation is to properly generate the Oracle Inventory directory (oraInventory) in the new Oracle Linux EC2 instance and also have the files created by root.sh. Simply copying the Oracle home may not create oraInventory, and root.sh may not create the new files. Also ensure the patch level of the newly created database binary home is exactly the same as the one in the SLES instance. See the Migration Methods section for details on each migration method.

Microsoft Windows to Oracle Linux

Microsoft Windows is a completely different operating system than the various types of Linux operating systems. The following migration methods are available for Windows:

• Oracle Data Guard (heterogeneous mode)
• Oracle RMAN transportable tablespace (TTS) backup and restore

The Oracle Data Guard method requires much less downtime compared to the Oracle RMAN TTS method. The RMAN TTS method still requires copying the files from your on-premises data center or source database servers to AWS, and files of significant size will extend the migration time. There are several methods available, such as AWS Import/Export and AWS Snowball, which can handle the migration of large volumes of files. Transferring a large volume of files over the network takes time; AWS Import/Export and AWS Snowball can help by migrating the data offline using physical media devices. See the Migration Methods section for details on each migration method.

Migration Methods

Your choice of migration method depends on your specific use case and context. Repeated testing and validation is necessary before finalizing and performing the migration on the production workload.

Amazon EBS Snapshot

An EBS snapshot is a storage-level backup mechanism. It preserves the contents of the EBS volume as a point-in-time copy. If you are migrating databases from RHEL or SUSE to Oracle Linux, an EBS snapshot is one of the fastest migration methods. This method is applicable only if the source database is already on AWS and running on Amazon EBS storage. It is not applicable for on-premises databases
or non-AWS cloud services. The high-level migration steps are:

1. Create a new Amazon EC2 instance based on an Oracle Linux AMI.
2. Install an Oracle home on the new Oracle Linux EC2 instance.
3. Create the new database parameter files and TNS files.
4. Take an EBS snapshot of the volumes in the older EC2 instance (Red Hat Linux, SUSE Linux). If possible, we recommend that you take the EBS snapshot during downtime or off-peak hours.
5. Create a new volume based on the EBS snapshot and mount it on your Oracle Linux EC2 instance.
6. Perform the post-migration steps, such as verifying directory and file permissions.
7. Start the Oracle database on the Oracle Linux EC2 instance.

You can take a snapshot of the Oracle home as well as the database files. However, we recommend that you install the Oracle home binaries separately on the new Oracle Linux EC2 instance. The Oracle home installation creates a few files in the operating system root that may not be available if you create a snapshot and mount the binary home. The EBS snapshot can be taken while the database is running, but the snapshot will take longer to complete.

Conditions for Taking an Amazon EBS Snapshot

• When you create the new volume on the target Oracle Linux EC2 instance, ensure that the volume has the same path as on the source EC2 instance. If database files reside in the /oradata mount in the source EC2 instance, the newly created volume from the snapshot should be mounted as /oradata in the target Oracle Linux EC2 instance. It is also recommended, but not required, to keep the Oracle database binary home the same between source and target EC2 instances.
• The Unix ID number for the Oracle user and the dba and oinstall groups should be the same number as on the source operating system. For example, the Oracle Linux 11g/12c pre-install rpm creates an Oracle user with Unix ID number 54321, which may not be the same as the source operating system ID. If it is different, change the Unix ID number so that both source and target EC2 instances match.
• An EBS snapshot works well if all the database files are in a single EBS volume. The complexity of an EBS snapshot increases when you use multiple EBS volumes or you use Oracle ASM. Refer to Oracle MOS Note 604683.1 for recovering crash-consistent snapshots. Oracle 12c has additional features to recover from backups taken from crash-consistent snapshots.

For more details, see Amazon EBS Snapshots.
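Steps 4 and 5 of this procedure can be automated with the AWS APIs. The following sketch (AWS SDK for Python, boto3) takes a snapshot of the source database volume, creates a new volume from it in the target instance's Availability Zone, and attaches that volume to the Oracle Linux instance. The volume ID, instance ID, Availability Zone, and device name are illustrative assumptions; as noted above, taking the snapshot during downtime avoids a crash-consistent copy.

    import boto3

    ec2 = boto3.client("ec2")

    # Step 4: snapshot the data volume of the source (RHEL/SLES) instance.
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",          # hypothetical source /oradata volume
        Description="Pre-migration copy of /oradata",
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

    # Step 5: create a volume from the snapshot in the target AZ and attach it
    # to the new Oracle Linux instance; mount it at the same path (/oradata).
    volume = ec2.create_volume(
        SnapshotId=snapshot["SnapshotId"],
        AvailabilityZone="us-east-1a",              # AZ of the target instance (assumed)
        VolumeType="gp3",
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",           # hypothetical Oracle Linux instance
        Device="/dev/sdf",
    )

After attaching, the remaining steps (file system mount, permission checks, and starting the database) are performed inside the guest as described above.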
Oracle Data Guard

Oracle Data Guard technology replicates the entire database from one site to another. It can do physical replication as well as logical replication. Oracle Data Guard operates in homogeneous mode if the primary and standby database operating systems are the same. The normal Oracle Data Guard setup would work in this case. However, if you are migrating from 32-bit to 64-bit, or from AMD to Intel processors or vice versa, it is considered to be a heterogeneous migration even if the operating system is the same. Heterogeneous mode requires additional patches and steps while operating Oracle Data Guard.

Homogeneous Mode

In homogeneous mode, the source and destination operating systems are the same. Oracle Data Guard sends the changes from the primary (source) database to the standby database. If physical replication is set up, the changes of the entire database are captured in redo logs. These changes are sent from the redo logs to the standby database. The standby database can be configured to apply the changes immediately or at a delayed interval. If logical replication is set up, the changes are captured for a configured list of tables or schemas. Logical replication does not work for the use case of migrating the entire database, unless your situational constraints require it. See the Oracle Data Guard Concepts and Administration documentation for both physical and logical standby setups.

Heterogeneous Mode

In heterogeneous mode, Oracle Data Guard allows primary and standby databases on different operating systems and different binary levels (32-bit or 64-bit). Until Oracle 11g, Oracle Data Guard required that both primary and standby databases have the same operating system level. From 11g onward, Oracle Data Guard has a heterogeneous mode. This allows Oracle Data Guard to support mixed-mode configurations: the source primary database can have a different operating system or binary level. A heterogeneous setup of Oracle Data Guard is recommended for large and very large databases.

We present a few suggestions below which can further optimize your migration. It is essential that the Oracle database home on Windows and Linux has the latest supported version of the database (11.2.0.4 or 12.1.0.2) along with the latest quarterly patch updates; multiple migration issues were fixed in the latest patch updates. Due to the mixed operating systems in the migration path, we recommend that you use the Data Guard command line interface (DGMGRL) to set up Oracle Data Guard and perform the role transition. See Oracle MOS Note 413484.1 for more details on using Oracle Data Guard to transition from Microsoft Windows to Linux; this migration requires some additional patches, which are detailed in the Note. Also see MOS Note 414043.1 for the role transition when you migrate from Windows 32-bit to Oracle Linux 64-bit. Detailed steps for setting up Oracle Data Guard between Windows and Linux are available in Oracle MOS Note 881421.1.

To set up Oracle Data Guard between Windows and Linux, Oracle mentions the RMAN Active Duplicate method. However, this method impacts source database performance and creates heavy network traffic between source and target database servers. An alternative to Active Duplicate is to use the RMAN cross-platform backup method (Oracle MOS Note 1079563.1):

1. Take an EBS snapshot of the Oracle database on Windows. Mount it on another Windows server in the STARTUP MOUNT stage.
2. Create an RMAN cold backup of the newly mounted Oracle database on Windows. This step is to avoid the error mentioned in Oracle MOS Note 2003327.1.
3. Copy the RMAN backup files to Linux using SFTP or SCP.
4. On Oracle Linux, issue the duplicate database for standby command using the RMAN backup files. This step replaces the duplicate command in Step 3 of Oracle MOS Note 1079563.1.

DUPLICATE TARGET DATABASE FOR STANDBY BACKUP LOCATION='<full path of RMAN backup file location in Oracle Linux>' NOFILENAMECHECK;

You can use SQL commands or DGMGRL to start Oracle Data Guard synchronization between the primary database on Windows and the
standby database on Oracle Linux. Refer to the role transition notes mentioned previously to switch the primary database from Windows to Linux. If the source database contains Oracle OLAP, refer to Oracle MOS Note 352306.1; it is recommended to back up the user-created OLAP Analytic Workspace ahead of time using the Export utility.

Oracle RMAN Transportable Database

Oracle recommends the Oracle RMAN TTS method when migrating between completely different operating systems. If the underlying chipset is different, such as Sun SPARC and Intel, then Oracle recommends you use the cross-platform transportable tablespace (XTTS) method. Different chipsets have different endian formats. The endian format dictates the order in which the bytes are stored underneath: the Sun SPARC chipset stores bytes in big-endian format, while the Intel series stores them in little-endian format. TTS can be used when both Windows and Oracle Linux are running on the same chipset, for example, Intel 64-bit. Oracle has published a detailed blog post on migrating from the Windows (Intel) platform to the Linux (Intel) platform using RMAN TTS. This method migrates the entire database at once instead of just individual tablespaces. It involves making your source Windows database read-only and requires downtime. Hence, this method is advised for small and medium-sized databases (under 400 GB) and wherever downtime can be accommodated. For large databases, run Oracle Data Guard in heterogeneous mode.

Oracle RMAN Cross Platform Transportable Database

If you are migrating from different endian platforms, like Sun or HP, refer to Oracle MOS Note 371556.1 for detailed step-by-step instructions. This method uses the XTTS method in RMAN. It is possible to reduce downtime if you are migrating from Oracle Database 11g or later using cross-platform incremental backup; refer to Oracle MOS Note 1389592.1 for instructions. Review the Oracle whitepaper Platform Migration Using Transportable Tablespaces: Oracle Database 11g Release 1 for RMAN 11g XTTS best practices and recommendations.

Oracle Data Pump Export/Import Utilities

The Oracle Data Pump Export/Import utilities can migrate between different endian formats. This is a more time-consuming method than Oracle RMAN, but it is useful when you want to combine it with other variables, such as when you want to migrate certain schemas from Oracle 10g on an HP-UX on-premises server to Oracle 11g on Oracle Linux on AWS. To reduce the downtime, leverage the parallel methods in Oracle Data Pump Export/Import. See the Oracle whitepaper Parallel Capabilities of Oracle Data Pump for recommendations on how to leverage it.

AWS Database Migration Service

AWS Database Migration Service (AWS DMS) is a managed service that you can use to migrate data from on premises or from your Oracle DB instance to another EC2 or RDS instance. AWS DMS supports Oracle versions 10g, 11g, 12c, and 18c in both the source and the target instances. A key advantage of AWS DMS is that it requires minimal downtime. AWS SCT can be used together with AWS DMS: it analyzes the source database and generates a report on which automatic and manual migration steps will be required for the given source and target combination. This report helps in planning your migration activities. AWS DMS does not migrate PL/SQL objects, but AWS SCT helps you locate them and alerts you on the migration step needed. You can use Oracle Data Pump Export/Import filters to migrate the PL/SQL objects. AWS DMS supports Oracle ASM at the source. AWS DMS can also replicate data from the source database to the destination database on an ongoing basis; you can use it to replicate the data until cutover is complete. AWS DMS can use both Oracle LogMiner and Oracle Binary Reader for change data capture. See Using an Oracle Database as a Source for AWS DMS for available configuration options and known limitations for a source Oracle database.
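As an illustration of how an AWS DMS migration is driven through the API, the following sketch (AWS SDK for Python, boto3) creates a full-load-plus-CDC replication task from an Oracle source endpoint to a target endpoint. The endpoint and replication instance ARNs are illustrative assumptions; the endpoints and replication instance themselves must already exist, and the table mappings are shortened to a single include-all rule.

    import json
    import boto3

    dms = boto3.client("dms")

    # Migrate existing data and replicate ongoing changes until cutover.
    task = dms.create_replication_task(
        ReplicationTaskIdentifier="oracle-to-oracle-linux",
        SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",   # assumed
        TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",   # assumed
        ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:RI",    # assumed
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    )

    dms.start_replication_task(
        ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
        StartReplicationTaskType="start-replication",
    )

The task keeps the target synchronized with the source until you stop it at cutover, which is the minimal-downtime pattern described above.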
step needed You can use Oracle Data Pump Export/Import filters to migrate t he PL/SQL objects AWS DMS supports Oracle ASM at source AWS DMS can also replicate data from the source database to the destination database on an on going basis You can also use it to replicate the data until cutover is complete AWS DMS can use both Oracle LogMiner and Oracle Binary Reader for change data capture See Using an Oracle Database as a Source for AWS DMS for available configuration options and known limitations for source Oracle database This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 13 Other Database Migration Methods There are other methods that can help in database migration across operating system platforms Oracle MOS Note 7332051 provides a generic overview of some of the methods like RMAN Duplicate or Oracle GoldenGate Some enterprise applications have additional tools and migration paths that are specific to their own applications Finally t here are independent software vendors that offer database migration tools on the AWS Marketplace One of these tools may be the best fit for your scenario Enterprise Application Considerations SAP Applications If you’re running your SAP applications with Oracle database you have many methods for migrating from one operating system to another All of the following migration methods are supported by SAP Note: You must follow standard SAP system copy/migration guidelines to perform your migration SAP requires that a heterogeneous migration be performed by SAP certified technical consultants Check with SAP support for more details SAP Software Logistics Toolset Softwa re Provisioning Manager (SWPM) is a Software Logistics (SL) Toolset provided by SAP to install copy and transform SAP products based on SAP NetWeaver AS ABAP and AS Java You can use SWPM to perform both heterogeneous and homogen eous migrations If the e ndian type of your source operating system is the same as the target then your migration is considered a homogen eous system copy Otherwise it is considered a heterogeneous system copy or migration The SWPM tool uses R3load export/import methodology to copy or migrate your database If you need to minimize the migration downtime consider using the parallel export/import method provided by SWPM See the Software Logistics Toolset documentation page for more details Oracle Lifecycle Migration Service Oracle developed a migration service called Oracle ACS Lifecycle Management Service (formerly known as Oracle to Oracle Online [Triple O ] and Oracle to Oracle [O2O ]) to This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Oracle Database Workloads to Oracle Linux on AWS 14 help SAP customers migrate their exi sting Oracle database to another operating system With this service you can migrate your database while the SAP system is online which minimizes the downtime required for migration This service uses Oracle’s builtin functionality and Oracle Golden Gate This is a paid service and may require additional licensing to use Oracle Golden Gate See SAP OSS Note 1508271 for more details This service only helps with the database migratio n step —you still need to complete all the other SAP standard migration steps to complete the migration Oracle RMAN For SAP applications 
Other Database Migration Methods

There are other methods that can help with database migration across operating system platforms. Oracle MOS Note 733205.1 provides a generic overview of some of these methods, such as RMAN Duplicate and Oracle GoldenGate. Some enterprise applications have additional tools and migration paths that are specific to their own applications. Finally, there are independent software vendors that offer database migration tools on the AWS Marketplace; one of these tools may be the best fit for your scenario.

Enterprise Application Considerations

SAP Applications

If you're running your SAP applications with an Oracle database, you have many methods for migrating from one operating system to another. All of the following migration methods are supported by SAP.

Note: You must follow standard SAP system copy/migration guidelines to perform your migration. SAP requires that a heterogeneous migration be performed by SAP-certified technical consultants. Check with SAP support for more details.

SAP Software Logistics Toolset

Software Provisioning Manager (SWPM) is a Software Logistics (SL) Toolset provided by SAP to install, copy, and transform SAP products based on SAP NetWeaver AS ABAP and AS Java. You can use SWPM to perform both heterogeneous and homogeneous migrations. If the endian type of your source operating system is the same as that of the target, your migration is considered a homogeneous system copy; otherwise, it is considered a heterogeneous system copy, or migration. The SWPM tool uses the R3load export/import methodology to copy or migrate your database. If you need to minimize the migration downtime, consider using the parallel export/import method provided by SWPM. See the Software Logistics Toolset documentation page for more details.

Oracle Lifecycle Migration Service

Oracle developed a migration service called Oracle ACS Lifecycle Management Service (formerly known as Oracle to Oracle Online [Triple O] and Oracle to Oracle [O2O]) to help SAP customers migrate their existing Oracle database to another operating system. With this service, you can migrate your database while the SAP system is online, which minimizes the downtime required for migration. The service uses Oracle's built-in functionality and Oracle GoldenGate. It is a paid service and may require additional licensing to use Oracle GoldenGate. See SAP OSS Note 1508271 for more details. This service only helps with the database migration step; you still need to complete all the other SAP standard migration steps to complete the migration.

Oracle RMAN

For SAP applications, you can use native Oracle functionality to migrate your database to another platform. You can use the Oracle RMAN transportable database feature to migrate the database when the endian type of the source and target platforms is the same. Starting with Oracle 12c, the Oracle RMAN cross-platform transportable database and tablespace features can be used to migrate a database across platforms with different endian types. See SAP OSS Notes 105047 and 1367451 for more details. Oracle RMAN only helps with the database migration step; you still need to complete all the other SAP standard migration steps to complete the migration.

The following table summarizes the migration methods available to migrate your Oracle database to the Oracle Linux platform. We recommend that you evaluate all the available methods and choose the one that best suits your environment.

Table 4: Migration options for Oracle database to Oracle Linux (migration methods to Oracle Linux by source operating system)

Source Operating System | Oracle RMAN Transportable Database | Oracle RMAN Cross-Platform Transportable Database | Oracle Lifecycle Migration Service (O2O / Triple O) | SAP System Copy / Migration with SWPM (R3load Export/Import)
RHEL / SLES | Yes | Yes | Yes | Yes
Oracle Linux | Yes | Yes | Yes | Yes
Solaris (x86) | Yes | Yes | Yes | Yes
AIX / HP-UX / Solaris (SPARC) | No | Yes | Yes | Yes
Windows | No | Yes | Yes | Yes

Oracle E-Business Suite

For Oracle E-Business Suite (EBS) applications, you can follow the various migration paths previously described in this document. The following migration methods are available to migrate the database tier of Oracle E-Business Suite:

Table 5: Migration methods for Oracle E-Business Suite

Source Operating System | Amazon EBS Snapshot | Oracle Data Guard | RMAN Transportable Database
RHEL | Yes | Yes | Yes
SLES | Yes | Yes | Yes
Solaris x86 | No | Yes | Yes
IBM AIX / HP-UX / Solaris SPARC | No | No | No
Windows | No | Yes | Yes

If you are running on IBM AIX, HP-UX, or Solaris SPARC, consider other database migration methods, such as using the Export/Import utilities. Once you have migrated your database, complete the following post-migration steps (a verification sketch follows this list):

• Set the environment variables in the new Oracle home, including PERL5LIB, PATH, and LD_LIBRARY_PATH
• Ensure the NLS directory $ORACLE_HOME/nls/data/9idata is available in the new Oracle home
• Implement and run autoconfig on the new Oracle home; once database tier autoconfig is complete, you must run autoconfig on the application tier as well
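As referenced in the list above, the post-migration checks can be scripted. The following Python sketch, run on the new Oracle Linux database host, reports whether the expected environment variables are set and whether the 9idata NLS directory exists under the new Oracle home; the default Oracle home path shown is only an assumption to adjust for your environment.

```python
import os

# Assumed location of the new Oracle home; adjust for your environment.
oracle_home = os.environ.get("ORACLE_HOME", "/u01/app/oracle/product/19.0.0/dbhome_1")

# Environment variables the post-migration steps expect to be set.
required_vars = ["ORACLE_HOME", "PERL5LIB", "PATH", "LD_LIBRARY_PATH"]

for var in required_vars:
    value = os.environ.get(var)
    status = "OK" if value else "MISSING"
    print(f"{var:16s} {status:8s} {value or ''}")

# The EBS NLS data directory must exist under the new Oracle home.
nls_dir = os.path.join(oracle_home, "nls", "data", "9idata")
nls_status = "OK" if os.path.isdir(nls_dir) else "MISSING"
print(f"{'NLS 9idata dir':16s} {nls_status:8s} {nls_dir}")
```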
RMAN Transportable Database

The RMAN transportable database feature converts the source database and creates new data files compatible with the destination operating system. This step involves placing the source database into read-only mode, so the RMAN transportable database approach consumes more downtime. One option to minimize downtime is to use a physical standby of the source database for the RMAN transportable database conversion step. RMAN allows parallel conversion of the data files, thereby reducing the conversion time. See the Oracle whitepaper Platform Migration Using Transportable Database: Oracle Database 11g and 10g Release 2 for more details on platform migration using the RMAN transportable database feature.

Oracle maintains a master note (Oracle MOS Note 1377213.1) for platform migration:

• For Oracle EBS 11i, see Oracle MOS Note 729309.1
• For Oracle EBS R12.0 and R12.1, see Oracle MOS Note 734763.1
• For Oracle EBS R12.2, see Oracle MOS Note 2011169.1

Migrating From 32-Bit to 64-Bit

For Oracle EBS applications, we recommend that you keep the bit level of the operating systems the same, e.g., RHEL 32-bit to Oracle Linux 32-bit, in order to reduce variability in the migration process. If there is a driving need to change the bit level of the operating system during the migration, Oracle recommends that you follow a two-step approach in migrating the system to 64-bit. The two-step migration path consists of setting up the application tier and then migrating the database tier. See MOS Note 471566.1 for detailed steps and post-migration checks on converting Oracle E-Business Suite from 32-bit to 64-bit.

Linux Containers

You can move your Oracle E-Business Suite R12.2 application tier to containers running Oracle Linux. Linux containers provide the flexibility to scale on demand depending on the workloads. The application tier of Oracle E-Business Suite 12.2 is certified on Oracle Linux containers running the UEK3 R3 QU6 kernel. Oracle EBS application tier containers must be created with a privilege flag. See MOS Note 1330701.1 for further requirements and documentation.

Oracle Fusion Middleware

For Oracle application tier products such as Fusion Middleware, refer to the respective MOS Upgrade Support notes for the Oracle-recommended path to migrate the OS platform. For Fusion Middleware 11g, see MOS Support Note 1073206.1 for the platform migration path. For Oracle applications such as Oracle E-Business Suite, PeopleSoft, or similar products, check their respective Oracle MOS platform migration notes or seek direction from the Oracle Support team for the recommended migration path for the particular product and version.

Conclusion

Your choice of migration path depends on your application, your specific business needs, and your SLAs. If you are already using AWS, Amazon EBS snapshots are the best choice when the prerequisites are satisfied. Whichever method you choose for the migration path, repeated testing and validation is necessary for a successful and seamless migration.

Contributors

Contributors to this document include:

• Bala Mugunthan, Sr. Partner Solution Architect – Global ISV, AWS
• John Bentley, Technical Account Manager, AWS
• Jayaraman Vellore Sampathkumar, AWS Oracle Solutions Architect, AWS
• Yoav Eilat, Sr. Product Marketing Manager, AWS

Document Revisions

Date            Description
January 2020    Updated for latest technologies and services
Month 2018      First publication
General
Homelessness_and_Technology
Homelessness and Technology How Technology Can Help Communities Prevent and Combat Homelessness March 2019 This document has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers ArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents AWS’s current product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or lice nsors AWS’s products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied AWS’s responsibilities and liabilities to its customers are controlled by AWS agreements and this document i s not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 6 Best Practices for Combatting Homelessness 6 Connect Data Sources with Data Lakes 7 Data Lake Solution 9 AWS Lake Formation 9 Enable Data Analytics Using Big Data and Machine Learning Techniques 10 Data Processing and Storage 10 Make Predictions with Machine Learning and Analytics 11 Manage Identity and Vital Records 12 Leverage AWS for HIPAA Compliance 13 HMIS Data Privacy and HIPAA 13 Conclusion 14 Contributors 14 Further Reading 15 Document Revisions 15 ArchivedAbstract The disparate nature of current homeless information management systems limits a community’s ability to identify trends or emerging needs measure internal performance goals and make data driven decisions about the effective deployment of limited resources With the shift in recent years to whole person care there is increasing demand to connect these disparate systems to affect better outcomes In this document we have outlined four pillars of how AWS technology and services can act as a best practice to organizations looking to leverage the cloud for Homeless Management Information Systems (HMIS) These pillars are as follows: • Connect disparate data sources using a data lake design patte rn • Make predictions using data analytics workloads big data and machine learning • Manage identity and vital records for people experiencing or at risk for experiencing homelessness • Leverage the AWS Business Associates Addendum (BAA) and associated services for Health Insurance Portability and Accountability Act (HIPAA) Compliance and NIST based assurance frameworks ArchivedAmazon Web Services Homelessness and Technology Page 6 Introduction Preventing and combatting homelessness depends on a coordinated Continuum of Care (CoC) on the ground locally sharing information across disparate systems and collaborating with the public nonprofit philanthropic and private sector partners The systems that collect this information today (ie homelessness services electronic health records education and criminal justice information systems ) were designed independently to address particular applications and are managed by different entities with separate IT systems and governance The disparate nature of these systems limits a community’s ability to identify trends or emerging needs measure internal performance goals and make data driven decisions about the effective deployment of limited resources With the shift in recent years to whole person care there is increasing deman d to connect these disparate systems to 
affect better outcomes Redesigning these systems for interoperability is critical but it will take time In the meantime you can use the best practices in this document to connect disparate information today to d evelop a comprehensive view for each client to drive better outcomes and enable analytics that support data drive n decision making Best Practices for Combatting Homelessness The following best practices focus on addressing some of the challenges of comba tting homelessness but they are highly applicable to other socioeconomic and healthcare challenges that cross multiple systems • Connect disparate data sources using a data lake design pattern • Make predictions using d ata analytics workloads big data and machine learning • Manage identity and vital records for people experiencing or at risk or experiencing homelessness • Leverage the AWS Business Associates Addendum (BAA) and associated services for Health Insurance Portability and Accountability Act (HIPA A) Compliance and NIST based assurance frameworks ArchivedAmazon Web Services Homelessness and Technology Page 7 Connect Data Sources with Data Lakes Connecting disparate data sources to create a comprehensive view of the homeless population and their interactions across numerous service providers and government entities can come with many technical challenges Schema and structural differences in separate locations can be difficult to combine and query in a single place Also some data may be highly structured whereas other dataset s may be less structured and involv e a smaller signal to noise ratio For example data stored in a tabular CSV format from a traditional database combined with a nested JSON schema that may come from a fleet of devices (eg personal health records v ersus realtime medical equipment data) can be difficult to join and query together using a relational database alone A data lake is a centralized repository that allows you to store all of your structured and unstructured data at any scale You can store your data as is without having to firs t structure the data and run different types of analytics Dashboards visualizations big data processing real time analytics and machine learning can all help contribute to better decision making and improve client outcomes A data warehouse is a central repository of structured information that can be analyzed to make better informed decisions Data flows into a data warehouse from transactional systems relational databases and other sources typically on a regular cadence Business analysts da ta scientists and decision makers access the data through business intelligence (BI) tools SQL clients and other analytics applications Data warehouses and data lakes complement each other well by allowing separation of concerns and leveraging scalable storage and scalable analytic capability respectively ArchivedAmazon Web Services Homelessness and Technology Page 8 Figure 1: Connecting Disparate Data Sources A Homeless Management Information System (HMIS) is an information technology system used to collect client level data and data o n the provision of housing and services to homeless individuals and families and persons at risk of homelessness You can create data lakes to connect disparate HMIS data across CoC and regional boundaries With a consolidated dataset you gain a comprehen sive and unduplicated understanding of who is served with which programs and to what outcomes across a region or state This depth of understanding reveals patterns that can help care providers rapidly 
create and tune interventions to the unique needs of homeless groups (eg veterans youth elders chronically homeless and so on ) and provides the public elected officials and funders with transparency about investments versus outcomes By centralizing data and allowing Federated access to a searchabl e data catalog you can address pain points around connecting disparate data systems The data lake can accept data from many different sources These may include but are not limited to: • Existing relational database and data warehouse engines (either on premises or in the cloud) • Clickstream data from mobile or web applications • Internet of Things (IoT) device data • Flat file imports ArchivedAmazon Web Services Homelessness and Technology Page 9 • API data • Media sources such as v ideo and audio streams This data should be stored durably and encrypted with industry standard open source tools both at rest and in transit since the data may contain personally identifiable information (PII) and be subject to compliance controls Federated access through an Identity provider (eg Active Directory Google Facebook etc) should also be used as a means of authorization to enable different teams to access the correct level of data Metadata concerning the data should be held within a searchable data catalog to enable fast access to structural and data classification inform ation This should all be accomplished in a cost effective and scalable manner with the data held in its native format to facilitate export further transformation and analysis Data Lake Solution The Data Lake solution automatically crawls data sources identifies data formats and then suggests schemas and transformations so you don’t have to spend time hand coding data flows For example if you upload a series of JSON files to Amazon Simple Storage Service (Amazon S3) AWS Glue a fully managed extract transform and load (ETL) tool can scan these files and work out the schema and data types present within these files Thi s metadata is then stored in a catalog to be used in subsequent transforms and queries Additionally user defined tags are stored in Amazon DynamoDB a key value document database to add business relevant context to each dataset The solution enables you to create simple governance policies that require specific tags when datasets are registered with the data lake You can browse available datasets or search on dataset attributes and tags to quickly find an d access data relevant to your business needs AWS Lake Formation The AWS Lake Formation service builds on the existing data lake solution by allow ing you to set up a secure data lake within days Once you define where your lake is located Lake Formation collects and catalogs this data moves the data into Amazon S3 for secure access and finally cleans and classifies the data using machine learning algorithms You can then access a centralized data ca talog which describes available dataset s and their appropriate usage This approach has a number of benefits from ArchivedAmazon Web Services Homelessness and Technology Page 10 building out a data lake quickly to simplifying security management and allowing easy and secure self service access Enabl e Data Analytics Using Big Data and Machine Learning Techniques Communities want a better understanding of the circumstances that contribute to homelessness prevent homelessness and accelerate someone’s path out of homelessness These predictions are crit ical inputs for the development of interventions across a continuum of care and for 
disaster response planning With a data lake communities can build train and tune machine learning models to predict outcomes Data Processing and Storage In today's co nnected world a number of data sources are available to be consumed Some examples include public APIs sensor/device data website analytics imagery as well as traditional forms of data such as relational databases and data warehouses Amazon Relational Database Service ( Amazon RDS) allows developers to build and migrate existing databases into the cloud AWS supports a large range of commercial and open source database engines (eg MySQL PostGres Amazon Au rora Oracle SQL Server) allowing developers freedom to keep their current database or migrate to an open source platform for cost savings and new features Amazon RDS maintains highavailab ility through the use of Multi Availability Zone deployments to ensure that production databases stay operational in the event of a hardware failure For customers with data warehousing needs Amazon Redshift enables developers to query large sets of structured data within Redshift a nd with in Amazon S3 When combined with a business intelligence tool such as Amazon Quick Sight Tableau or Microsoft Power BI you can create powerful data visualizations and gain insights into data that were previously out of reach on legacy IT systems Amazon Kinesis makes it easy to collect process and analyze streaming data Kinesis enables th e construction of real time data dashboards video analytics and stream transformations to filter and query data as it comes into the organization from an array of sources ArchivedAmazon Web Services Homelessness and Technology Page 11 Make Predictions with Machine Learning and Analytics Machine learning can help ans wer complicated questions by making predictions about future events from past data Some examples of machine learning models include image classification regression analysis personal recommendation systems and time series forecasting For a CoC these ca pabilities may seem out of reach but due to the power and scale of the cloud these capabilities are now within anyone’s reach Amazon Comprehend Medical Amazon Forecast and Amazon Personalize put powerful machine learning model creation capabilities int o the hands of developers requiring no machine learning background or servers to manage Amazon Comprehend Medical Amazon Comprehend Medical is a natural language processing service that makes it easy to use natural language processing and machine learning to extract relevant medical information from unstructured text For example you can use Comprehend Medical to identify and search for key terms in a large corpus of health records allowing case officers and medical professionals to look for recurring patterns or key phrases in patient records when providing treatment to homeless individuals Amazon Forecast Amazon Forecast uses machine learning to combine time series data with additional variables to build forecasts You can use Amazon Forecast to predict changes in a homeless population over time Forecast can also consider how other correlating external factors affect the population such as natu ral disasters or severe weather or the introduction of new programs and initiatives Amazon Personalize Amazon Personalize is a machine learning service that makes it easy for developers to create individ ualized recommendations for customers using their applications For example many times individuals at risk of or experiencing homelessness struggle to find assistance programs 
Navigating these many programs and facilities can be daunting and time consumi ng By using HMIS data from other individuals in similar situations you can build a recommendation engine that suggests relevant programs to individuals and families These recommendations enable them to access programs that they may not be aware of or have the time to research ArchivedAmazon Web Services Homelessness and Technology Page 12 Manag e Identity and Vital Records Proof of identity and eligibility are critical to matching the right people at the right time to the right interventions Copies of vital records such as social security cards birth certificates proof of disability and copies of utility bills lease or property title documents are often required by various programs that are designed to help those experiencing or at risk of experien cing homelessness However without a secure and reliable place to store and access these documents the most vulnerable people are often left the worst off Their lack of documentation can become a barrier to service and extend the length of crisis In ad dition to the need for a secure storage location customers need a mechanism to control and share documents with authorized parties to evaluate eligibility for various programs and/or to verify authenticity This mechanism must track who accesses these documents at what time and in what manner in a cryptographically verifiable immutable way Ledger or blockchain based applications can meet this requirement by storing the interaction event metadata for a document or set of documents in a verifiable ledger This ledger creates a verifiable audit trail that can store all of the events that occur during a document ’s lifetime Amazon Simple Storage Service (Amazon S3) Amazon Simple Storage Service ( Amazon S3) store s objects in the cloud reliably and at scale Using Amazon S3 you can build the substrate for a document storage and retrieval application Amazon S3 has many pertinent security features such as multi factor control of deleting and modifying objects and object versioning Amazon S3 also uses encryption at rest and in transit using industry standard encryption algorithms and a simple HTTPS based API Amazon S3 supports signed URLs so that access to objects can be granted for a limited time Finally Amazon S3 offers cost savings with intelligent tiering so that documents can be automatically moved into different storage tiers depending on their usage patterns Amazon Quantum Ledger Database (Amazon QLDB) Amazon Quantu m Ledger Database ( Amazon QLDB) is a fully managed ledger database that provides a transparent immutable and cryptographically verifiable transaction log owned by a central trusted authority Amazon QLDB tracks each and ArchivedAmazon Web Services Homelessness and Technology Page 13 every application data change and maintains a complete and verifiable history of changes over time Amazon Managed Blockchain Amazon Managed Blockchain is fully managed blockchain service that makes it easy to create and manage scalable blockchain networks using popular open source frameworks such as Hyperledger Fabric and Ethereum By combining secure storage in the cloud with a cryptographically verifiabl e event log it is possible to build a scalable application that can store documents in a secure manner and be able to verify the contents and access patterns to each individual document during its lifetime Leverage AWS for HIPAA Compliance Health Insurance Portability and Accountability Act (HIPAA) compliance concerns the storage 
and processing of protected health information (PHI) such as insurance and billing information diagnosis dat a lab results and so on HIPAA applies to covered entities (eg health care providers health plans and health care clearinghouses) as well as business associates (eg entities that provide services to a covered entity involving the processing stora ge and transmission of PHI) AWS offers a standardized Business Associates Addendum (BAA) for business associates Customers who execute a BAA may process store and transmit PHI using HIPAA eligible services defined in the AWS BAA such as Amazon S3 Amazon QuickSight AWS Glue and Amazon DynamoDB For a complete list of services see HIPAA Eligible Services Referenc e HMIS Data Privacy and HIPAA Each CoC is responsible for selecting an HMIS software solution that complies with the Department of Housing and Urban Development's (HUD) standards HMIS has a number of privacy and security standards that were developed to protect the confidentiality of personal information while at the same time allowing limited data disclosure in a responsible manner These standards were developed after careful review of the HIPAA standards regarding PHI The Reference Architecture for HIPAA on AWS deploys a model environment that can help organizations with workloads that fall within the scope of HIPAA The reference ArchivedAmazon Web Services Homelessness and Technology Page 14 architecture addresses certain technic al requirements in the Privacy Security and Breach Notification Rules under the HIPAA Administrative Simplification Regulations (45 CFR Parts 160 and 164) AWS has also produced a quick start reference deployment for Standardized Architecture for NIST based Assurance Frameworks on the AWS Cloud This quick start focuses on the NIST based assurance frameworks: • National Institute of Standards and Technology (NIST) SP 800 53 (Revision 4) • NIST SP 800 122 • NIST SP 800 171 • The OMB Trusted Internet Connection (TIC) Initiative – FedRAMP Overlay (pilot) • The DoD Cloud Computing Security Requirements Guide (SRG) This quick start includes AWS CloudFormation templates which can be integrated with AWS Service Catalog to automate building a standardized reference architecture that aligns with the requirements within the controls listed above It also includes a security controls matrix which maps the security controls and requirements to architecture decisions features and configuration of the baseline to enhance your organization’s ability to understand and assess th e system security configuration Conclusion AWS technology can help communities drive better outcomes for citizens using the technology and services included this paper However w e understand that homelessness is fundamentally a human problem —all of these initiatives must have strong backing by forward thinking officials and program managers to make an impact in the lives of those at risk or experiencing homelessness Contributors The following individuals and organizations contributed to this document: • Alistair McLean Sr Solutions Architect AWS • Jessie Metcalf Program Manager AWS ArchivedAmazon Web Services Homelessness and Technology Page 15 • Casey Burns Health and Human Services Leader AWS Further Reading For additional information see the following: • HMIS Data and Technical Standards • Reference Architecture for HIPAA on AWS • Reference Architecture for HIPAA on the AWS Cloud: Quick Start Reference Deployment • Standardized Architecture for NIST based A ssurance Frameworks on the AWS Cloud: 
Quick Start Reference Deployment • AWS Machine Learning Blog: Create a Question and Answ er Bot with Amazon Lex and Amazon Alexa • AWS Government Education and Non Profits Blog Document Revisions Date Description March 2019 Initial document release Archived
General
Deploying_Microsoft_SQL_Server_on_AWS
ArchivedDeploying Microsoft SQL Server on Amazon Web Services November 2019 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Amazon RDS for SQL Server 1 SQL Serv er on Amazon EC2 1 Hybrid Scenarios 2 Choosing Between Microsoft SQL Server Solutions on AWS 2 Amazon RDS for Microsoft SQL Server 4 Starting an Amazon RDS for SQL Server Instance 5 Security 6 Performance Management 11 High Availability 15 Monitoring and Management 17 Managing Cost 21 Microsoft SQL Server on Amazon EC2 23 Starting a SQL Server Instance on Amazon EC2 23 Amazon EC2 Security 25 Performance Management 26 High Availability 29 Monitoring and Management 32 Managing Cost 34 Caching 36 Hybrid Scenarios and Data Migration 37 Backups to the Cloud 38 SQL Server Log Shipping Between On Premises and Amazon EC2 39 SQL Server Always On A vailability Groups Between On Premises and Amazon EC2 40 AWS Database Migration Service 42 Comparison of Microsoft SQL Server Feature Availability on AWS 42 ArchivedConclusion 46 Contributors 46 Further Reading 47 Document Revisions 47 ArchivedAbstract This whitepaper explain s how you can run SQL Server databases on either Amazon Relational Database Service (Amazon RDS) or Amazon Elastic Compute Cloud (Amazon EC2) and the advantages of each approach We review in detail how to provision and monitor your SQL Server database and how to manage scalability performance backup and recovery high availability and securi ty in both Amazon RDS and Amazon EC2 We also describe how you can set up a disaster recovery solution between an on premises SQL Server environment and AWS using native SQL Server features like log shipping replication and Always On availability groups This whitepaper helps you make an educated decision and choose the solution that best fits your needs ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 1 Introduction AWS offers a rich set of features to enable you to run Microsoft SQL Server –based workloads in the cloud These f eatures offer a variety of controls to effectively manage scale and tune SQL Server deployments to match your needs This whitepaper discusses these features and controls in greater detail in the following pages You can run Microsoft SQL Server versions on AWS using the following services: • Amazon RDS • Amazon EC2 Note: Some versions of SQL Server are dependent on Microsoft licensing For current supported versions see Amazon RDS for SQL Server and Microsoft SQL Server on AWS Amazon RDS for SQL Server Amazon RDS is a service that makes it eas y to set up operate and scale a relational database in the cloud Amazon RDS automates 
installation disk provisioning and management patc hing minor and major version upgrades failed instance replacement and backup and recovery of your SQL Server databases Amazon RDS also offers automated Multi AZ (Availability Zone) synchronous replication allowing you to set up a highly available and scalable environment fully managed by AWS Amazon RDS is a fully managed service and your database s run on their own SQL Server instance with the compute and storage resources you specify Backups high availability and failover are fully automated Becau se of these advantages we recommend customers consider Amazon RDS for SQL Server first SQL Server on Amazon EC2 Amazon Elastic Compute Cloud ( Amazon EC2 ) is a service that provides computing capacity in the clou d Using Amazon EC2 is similar to running a SQL Server database onpremises You are responsible for administering the database including backups and recovery patching the operating system and the database tuning of the operating system and database par ameters managing security and configuring high availability ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 2 or replication You have full control over the operating system database installation and configuration With Amazon EC2 you can quickly provision and configure DB instances and storage and you can scale your instances by changing the size of your instances or amount of storage You can provision your databases in AWS Regions across the world to provide low latency to your end users worldwide You are responsible for data replication and recovery across your instances in the same or different Regions Running your own relational database on Amazon EC2 is the ideal scenario if you require a maximum level of control and configurability Hybrid Scenarios You can also run SQL Server workloads in a hybrid environment For example you might have pre existing commitments on hardware or data center space that makes it impractical to be all in on cloud all at once Such commitments don’t mean you can’t take advantage of the scalability availability and cost benefits of running a portion of your workload on AWS Hybrid designs make this possible and can take many forms from leveraging AWS for long term SQL Server backups to running a secondary replica in a SQL Server Always On Availability Group Choosing Between Microsoft SQL Server Solutions on AWS For SQL Server databases both Amazon RDS and Amazon EC2 have advantages and certain limitations Amazon RDS for SQL Server is easier to set up manage and maintain Using Amazon RDS can be more cost effective than running SQL Server in Amazon EC2 and lets you focus on more important tasks such as schema and index maintenance rather than the day today administration of SQL Server and the underlying operating system Alternatively running SQL Server in Amazon EC2 gives you more control flexibility and choice Depending on your application and your requirements you might prefer one over the other Start by considering the capabilities and limitations of your proposed solution as follows: ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 3 • Does your workload fit within the features and capabilities offered by Amazon RDS for SQL Server? We will discuss these in greater detail later in this whitepaper • Do you need high availability and automated failover capabilities? 
If you are running a production workload high availability is a recommended best practice • Do you have the resources to manage a cluster on an ongoing basis? These activities include backups restores software updates availability data durability optimization a nd scaling Are the same resources better allocated to other business growth activities? Based on your answers to the preceding considerations Amazon RDS might be a better choice if the following is true: • You want to focus on business growth tasks such a s performance tuning and schema optimization and outsource the following tasks to AWS: provisioning of the database management of backup and recovery management of security patches upgrades of minor SQL Server versions and storage management • You need a highly available database solution and want to take advantage of the push button synchronous Multi AZ replication offered by Amazon RDS without having to manually set up and maintain database mirroring failover clusters or Always On Availability Gro ups • You don’t want to manage backups and most importantly point intime recoveries of your database and prefer that AWS automates and manages these processes However running SQL Server on Amazon EC2 might be the better choice if the following is true : • You need full control over the SQL Server instance including access to the operating system and software stack • Install third party agents on the host • You want your own experienced database administrators managing the databases including backups repli cation and clustering • Your database size and performance needs exceed the current maximums or other limits of Amazon RDS for SQL Server • You need to use SQL Server features or options not currently supported by Amazon RDS ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 4 • You want to run SQL Server 2017 on the Linux operating system For a detailed side byside comparison of SQL Server features available in the AWS environment see the Comparison of Microsoft SQL Server Feature Availability on AWS section Amaz on RDS for Microsoft SQL Server For the list of currently Amazon RDS currently supported versions and features see Microsoft SQL Server on Amazon RDS Amazon RDS for SQL Server supports the following editions of Microsoft SQL Server : • Express Edition : This edition is available at no additional licensing cost and is suitable for small workloads or proof ofconcept deployments Microsoft limits the amount of memory and size of the individual databases that can be run on the Express edition This edition is not available in a Multi AZ deployment • Web Edition : This edition is suitable for public internet accessible web workloads This edition is not available in a Multi AZ deployment • Standard Edition : This edition is suitable for most SQL S erver workloads and can be deployed in Multi AZ mode • Enterprise Edition : This edition is the most feature rich edition of SQL Server is suitable for most workloads and can be deployed in Multi AZ mode For a detailed feature comparison between the dif ferent SQL Server editions see Editions and supported features of SQL Server on the Microsoft Developer Network (MSDN) website In Amazon RDS for SQL Server the following features and options are supported depending on the edition of SQL Server: For the most current supported features see Amazon RDS f or SQL Server features • Core database engine features • SQL Server development tools: Visual Studio integration and IntelliSense • SQL Server management 
tools: SQL Server Management Studio (SSMS) sqlcmd SQL Server Profiles (for client side traces) SQL Server Migration Assistant (SSMA) Database Engine Tuning Advisor and SQL Server Agent ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 5 • Safe Common Language Runtime (CLR) for SQL Server 2016 and below versions • Service Broker • Fulltext search (except semantic search) • Secure Sockets Layer (SSL) connection support • Transparent Data Encryption (TDE) • Encryption of storage at rest using the AWS Key Management Service (AWS KMS) fo r all SQL Server license types • Spatial and location features • Change tracking • Change Data Capture • Always On or Database mirroring (used to provide the Multi AZ capability) • The ability to use an Amazon RDS SQL DB instance as a data source for reporting anal ysis and integration services • Local Time Zone support • Custom Server Collations AWS frequently improve s the capabilities of Amazon RDS for SQL Server For the latest information on supported versions features and options see Version and Feature Support on Amazon RDS Starting an Amazon RDS for SQL Server Instance You can start a SQL Server instance on Amazon RDS in several ways : • Interactively using the AWS Management Console • Programmatically using AWS CloudFormation templates • AWS SDKs and the AWS Command Line Interface (AWS CLI) • Using the PowerShell After the instance has been deployed you can connect to it using standard SQL Server tools Amazon RDS provides you with a Domain Name Service (DNS) endpoint for the server as shown in the following figure To connect to the database u se this DNS ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 6 endpoint as the SQL Server hostname along with the master user name and password configured for the instance Always use the DNS endpoint to connect to the instance because the underlying IP address might change Amazon RDS exposes the Always On AGs availability group listener endpoint for the SQL Server Multi AZ deployment The endpoint is visible in the console and is returned by the DescribeDBInstances API as an entry in the endpoints field You can easily connect to the listener endpoint in order to have faster fa ilover times Figure 1: Amazon RDS DB instance properties Security You can use several features and sets of controls to manage the security of your Amazon RDS DB instance These controls are as follows: • Network controls which determine the network configuration underlying your DB instance • DB instance access controls which determine administrative and management access to your RDS resources • Data access controls which determine access to the data stored in your RDS DB instance databases ArchivedAmazon Web Services Deploying Microsoft SQL Se rver on Amazon Web Services Page 7 • Data at rest protection which affects the security of the data stored in your RDS DB instance • Data in transit protection which affects the security of data connections to and from your RDS DB instance Network Controls At the network layer controls are on th e deployed instance EC2VPC level EC2VPC allows you to define a private isolated section of the AWS Cloud and launch resources within it You define the network topology the IP addressing scheme and the routing and traffic access control patterns Newe r AWS accounts have access only to this networking platform In EC2 VPC DB subnet groups are also a security control They allow you to narrowly control the subnets in which Amazon RDS is allowed 
to deploy your DB instance You can control the flow of net work traffic between subnets using route tables and network access control lists (NACLs) for stateless filtering You can designate certain subnets specifically for database workloads without default routes to the internet You can also deny non database traffic at the subnet level to reduce the exposure footprint for these instances Security groups are used to filter traffic at the instance level Security groups act like a stateful firewall similar in effect to host based firewalls such as the Microso ft Windows Server Firewall The rules of a security group define what traffic is allowed to enter the instance (inbound) and what traffic is allowed to exit the instance (outbound) VPC security groups are used for DB instances deployed in a VPC They can be changed and reassigned without restarting the instances associated with them For improve d security we recommend restricting inbound traffic to only database related traffic (port 1433 unless a custom port number is used) and only traffic from known s ources Security groups can also accept the ID of a different security group (called the source security group) as the source for traffic This approach makes it easier to manage sources of traffic to your RDS DB instance in a scalable way In this case y ou don’t have to update the security group every time a new server needs to connect to your DB instance; you just have to assign the source security group to it Amazon RDS for SQL Server can make DB instances publicly accessible by assigning internet routable IP addresses to the instances In most use cases this approach is not needed or desired and we recommend setting this option to No to limit the potential threat In cases where direct access to the database over the public internet is needed ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 8 we rec ommend limiting the sources that can connect to the DB instance to known hosts by using their IP addresses For this option to be effective the instance must be launched in a subnet that permits public access and the security groups and NACLs must permit inbound traffic from those sources DB instances that are exposed publicly over the internet and have open security groups accepting traffic from any source might be subject to more frequent patching Such instances can be force patched when security pat ches are made available by the vendors involved This patching can occur even outside the defined instance maintenance window to ensure the safety and integrity of customer resources and our infrastructure Although there are many ways to secure your data bases we recommend using private subnet(s) within a VPC no possible direct internet access DB Instance Access Controls Using AWS Identity and Access Management (IAM) you can manage access to your Amazon RDS for SQL Server instances For example you can authorize administrators under your AWS account (or deny them the ability) to create describe modify or delete an Ama zon RDS database You can also enforce mult ifactor authentication (MFA) For more information on using IAM to manage administrative access to Amazon RDS see Authe ntication and Access Control for Amazon RDS in the Amazon RDS User Guide Data Access Controls Amazon RDS for SQL Server supports both SQL Authentication and Windows Authentication and access control for authenticated users should be configured using the principle of least privilege A master account is created automatically when an 
instance is launched This master user is granted several permissions For det ails see Master User Account Privileges This login is typically used for administrative purposes only and is granted the roles of processadmin setupa dmin SQLAgentUser Alter on SQLAgentOperator and public at the server level Amazon RDS manages the master user as a login and creates a user linked to the login in each customer database with the db_owner permission You can create additional users and databases after launch by connecting to the SQL Server instance using the tool of your choice (for example SQL Server Management Studio) These users should be assigned only the permissions needed for the workload or application that they are supporting t o operate correctly For example if you as the master user create a user X who then creates a database user X will be a member of the db_owner role for this new database not the master user Later on if you reset the master password the master user wi ll be added to db_owner for this new database ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 9 You can also integrate with your existing identity infrastructure based on Microsoft Active Directory and authenticate against Amazon RDS for SQL Server databases using the Windows Authentication method Using Windows Authentication allows you to keep a single set of credentials for all your users and save time and effort by not having to update these credentials in multiple places To use the Windows Authentication method with your Amazon RDS for SQL Server instance sign up for the AWS Directory Service for Microsoft Active Directory If you don’t already have a directory running you can create a new one You can then associate directories with both new and existing DB instances You can use Active Directory to manage users and groups with access privileges to your SQL Server DB instance and also join other EC2 instances to that domain You can also establish a one way forest trust from an external exi sting Active Directory deployment to the directory managed by AWS Directory Service Doing so will give you the ability to authenticate already existing Active Directory users and groups you have established in your organization with Amazon RDS SQL Server instances You can also create SQL Server Windows logins on domain joined DB instances for users and groups in your directory domain or the trusted domain if applicable Logins can be added using a SQL client tool such as SQL Server Management Stud io using the following command CREATE LOGIN [<user or group>] FROM WINDOWS WITH DEFAULT_DATABASE = [master] DEFAULT_LANGUAGE = [us_english]; More information on configuring Windows Authentication with Amazon RDS for SQL Server can be found in the Using Windows Authentication topic in the Amazon R DS User Guide Unsupported SQL Server Roles and Permissions in Amazon RDS The following server level roles are not currently available in Amazon RDS: bulkadmin dbcreator diskadmin securityadmin serveradmin and sysadmin See Features Not Supported and Features with limited support Also the following server level permissions are not available on a SQL Server DB instance: • ADMINISTER BULK OPERATIONS ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 10 • ALTER ANY CREDENTIAL • ALTER ANY EVENT NOTIFICATION • ALTER RESOURCES • ALTER SETTINGS (you can use the DB parameter group API actions to modify parameters) • AUTHENTICATE SERVER • CREATE DDL EVENT NOTIFICATION • CREATE ENDPOINT • 
CREATE TRACE EVENT NOTIFICATION • EXTERNAL ACCESS ASSEMBLY • SHUTDOWN (you can use the RDS reboot option instead) • UNSAFE ASSEMBLY • ALTER ANY AVAILABILITY GROUP • CREATE ANY AVAILABILITY GROUP Data at Rest Protection Amazon RDS for SQL Server supports the encryption o f DB instances with encryption keys managed in AWS KMS Data that is encrypted at rest includes the underlying storage for a DB instance its automated backups and snapshots You can also encrypt existing DB instances and share encrypted snapshots with ot her accounts within the same Region Amazon RDS encrypted instances use the open standard AES 256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS instance Once your data is encrypted Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance You don’t need to modify your database client applications to use encryption Amazon RDS encrypted instances also help secure your data from unauthorized access to the underlying storage You can use Amazon RDS encryption to increase data protection of your applications deployed in the cloud and to fulfill compliance requirements for data at rest encryption To manage the keys used for encrypting and decrypting your Ama zon RDS resources use AWS KMS ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 11 Amazon RDS also supports encryption of data at rest using the Transparent Data Encryption ( TDE) feature of SQL Server This feature is only available in the Enterprise Edition You can enable TDE by setting up a custom optio n group with the TDE option enabled (if such a group doesn’t already exist) and then associating the DB instance with that group You can find more details on Amazon RDS support for TDE on the Options for the Microsoft SQL Server Database Engine topic in the Amazon RDS User Guide If full data encryption is not feasible or not desired for your workload you can selectively encrypt table data using SQL S erver column level encryption or by encrypting data in the application before it is saved to the DB instance Data in Transit Protection Amazon RDS for SQL Server fully supports encrypted connections to the instances using SSL SSL support is available in all AWS Regions for all supported SQL Server editions Amazon RDS creates an SSL certificate for your SQL Server DB instance when the instance is created The SSL certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificat e to help guard against spoofing attacks You can find more details on how to use SSL encryption in Using SSL with a Microsoft SQL Server DB Instance in the Amazon RDS User Guide Performance Management The performance of your SQL Server DB instance is determined primarily by your workload Depending on your workload you need to select the right instance type which affects the compute capacity amount of memory and network capacity available to your database Instance type is also determined by the storage size and type you select when you provision the database Instance Sizing The amount of memory and compute capa city available to your Amazon RDS for SQL Server instance is determined by its instance class Amazon RDS for SQL Server offers a range of DB instance classes from 1 vCPU and 1 GB of memory to 96 vCPUs and 488 GB of memory Not all instance classes are ava ilable for all SQL Server editions however The i nstance class availability also varies based on the version Amazon RDS for SQL Server 
supports the various DB instance classes for the various SQL Server editions For the most up todate list of supported instance classes see Amazon RDS for SQL Server instance types ArchivedAmazon Web Services Deployin g Microsoft SQL Server on Amazon Web Services Page 12 Previous generation DB instance classes are superseded in terms of both cost effectiveness and performance by the current generation classes For the previous generation instance types see Previous Generation Instances for more information Understanding the performance characteristics of your workload is impor tant when identifying the proper instance class If you are unsure how much CPU you need we recommend that you start with the smallest appropriate instance class then monitor CPU utilization using Amazon CloudWatch You can modify the instance class for an existing Amazon RDS for SQL Server instance allowing the flexibility to scale up or scale down the instance size depending on the performance characteristics required If you are in a Multi AZ High Availability configuration making the change involve s a server reboot or a failover To modify a SQL Server instance see Modifying a DB Instance Running the Microsoft SQL Server database engine and for the list of modification setting see setting for Microsoft SQL Server DB Instances The settings are similar to the ones you configure when launching a new DB instance By default changes (including a change to the DB instance class) are applied during the next specified maintenance window Alternatively you can use the apply immediately flag to apply t he changes immediately Disk I/O Management Amazon RDS for SQL Server simplifies the allocation and management of database storage for instances You decide the type and amount of storage to use and also the level of provisioned I/O performance if applica ble You can change the amount of storage or provisioned I/O on an RDS for SQL Server instance after the instance has been deployed You can also enable storage auto scaling to enable the Amazon RDS to automatically increase the storage when needed to avoi d having your instance run out of storage space We recommend that you enable storage auto scaling to handle growth from the onset Amazon RDS for SQL Server supports two types of storage each having different characteristics and recommended use cases: • General Purpose (SSD) (also called GP2) is an SSD backed storage solution with predictable performance and burst capabilities This option is suitable for workloads that run in larger batches such as nightly report processing Credits are replenished while the instance is largely idle and are then available for bursts of batch jobs ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 13 • Provisioned IOPS storage (or PIOPS storage) is designed to meet the needs of I/Ointensive workloads that are sensitive to storage performance and consistency in random access I/O throughput The following table compares the Amazon RDS storage performance characteristics Table 1: Amazon RDS storage performance characteristics Storage Type Min Volume Size Max Volume Size Baseline Performance Burst Capability Storage Technology Pricing Criteria General Purpose 20 GiB (100 GiB recommende d) 16 TiB* 3 IOPS/GiB Yes; up to 3000 IOPS per volume subject to accrued credits SSD Allocated storage Provisioned IOPS 20 GiB (for Enterprise and Standard editions 100 GiB for Web and Express Edition) 16 TiB* 10 IOPS/GiB up to max 64000 IOPS No; fixed allocation SSD Allocated storage and 
Provisioned IOPS * Maximum IOPS of 64000 is guaranteed only on Nitro based instances that are on m5 instance types Although performance characteristics of instances change over time as t echnology and capabilities improve there are several metrics that can be used to assess performance and help plan deployments Different workloads and query patterns affect these metrics in different ways making it difficult to establish a practical base line reference in a typical environment We recommend that you test your own workload to determine how these metrics behave in your specific use case For Amazon RDS we provision and measure I/O performance in units of input/output operations per second ( IOPS) We count each I/O operation per second that is 256 KiB or smaller as one IOPS ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Service s Page 14 The average queue depth a metric available through Amazon CloudWatch tracks the number of I/O requests in the queue that are waiting to be serviced These requests have been submitted by the application but haven’t been sent to the storage device because the device is busy servicing other I/O requests Time spent in the queue increases I/O latency and large queue sizes can indicate an overloaded system from a storage pe rspectiveAs a result depending on the storage configuration selected your overall storage subsystem throughput will be limited either by the maximum IOPS or the maximum channel bandwidth at any time If your workload is generating a lot of small sized I /O operations (for example 8 KiB) you are likely to reach maximum IOPS before the overall bandwidth reaches the channel maximum However if I/O operations are large in size (for example 256 KiB) you might reach the maximum channel bandwidth before max imum IOPS As specified in Microsoft documentation SQL Server stores data in 8 KiB pages but uses a complex set of techni ques to optimize I/O patterns with the general effect of reducing the number of I/O requests and increasing the I/O request size This approach results in better performance by reading and writing multiple pages at the same time Amazon RDS accommodates t hese multipage operations by counting every read or write operation on up to 32 pages as a single I/O operation to the storage system based on the variable size of IOPS SQL Server also attempts to optimize I/O by reading ahead and attempting to keep the queue length nonzero Therefore queue depth values that are very low or zero indicate that the storage subsystem is underutilized and potentially overprovisioned from a n I/O capacity perspective Using small storage sizes (less than 1TB) with General Pur pose (GP2) SSD storage can also have a detrimental impact on instance performance If your storage size needs are low you must ensure that the storage subsystem provides enough I/O performance to match your workload needs Because IOPS are allocated on a ratio of 3 IOPS for each 1 GB of allocated GP2 storage small storage sizes will also provide small amounts of baseline IOPS When created each instance comes with an initial allocation of I/O credits This allocation provides for burst capabilities of up to 3000 IOPS from the start Once the initial burst credits allocation is exhausted you must ensure that your ongoing workload needs fit within the baseline I/O performance of the storage size selected ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 15 High Availability Amazon RDS provides high availab ility and failover support 
High Availability

Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Multi-AZ deployments provide increased availability, data durability, and fault tolerance for DB instances. The Multi-AZ high availability option uses SQL Server database mirroring or Always On availability groups, with additional improvements to meet the requirements of enterprise-grade production workloads running on SQL Server. The Multi-AZ deployment option provides enhanced availability and data durability by automatically replicating database updates between two AWS Availability Zones. Availability Zones are physically separate locations with independent infrastructure, engineered to be insulated from failures in other Availability Zones.

When you set up SQL Server Multi-AZ, RDS automatically configures all databases on the instance to use database mirroring or availability groups. Amazon RDS handles the primary, the witness, and the secondary DB instance for you. Because configuration is automatic, RDS selects database mirroring or Always On availability groups based on the version of SQL Server that you deploy.

Amazon RDS supports Multi-AZ with database mirroring or availability groups for the following SQL Server versions and editions (exceptions noted). See Multi-AZ Deployments for Microsoft SQL Server for more information.

• SQL Server 2017: Enterprise Edition (Always On availability groups are supported in Enterprise Edition 14.00.3049.1 or later)
• SQL Server 2016: Enterprise Edition (Always On availability groups are supported in Enterprise Edition 13.00.5216.0 or later)

Amazon RDS supports Multi-AZ with database mirroring for the following SQL Server versions and editions, except for the versions of Enterprise Edition noted previously:

• SQL Server 2017: Standard and Enterprise Editions
• SQL Server 2016: Standard and Enterprise Editions
• SQL Server 2014: Standard and Enterprise Editions
• SQL Server 2012: Standard and Enterprise Editions

Amazon RDS supports Multi-AZ for SQL Server in all AWS Regions, with the following exceptions:

• US West (N. California): Neither database mirroring nor Always On availability groups are supported
• South America (São Paulo): Supported on all DB instance classes except m1 and m2
• EU (Stockholm): Neither database mirroring nor Always On availability groups are supported

When you create or modify your SQL Server DB instance to run using Multi-AZ, Amazon RDS automatically provisions a primary database in one Availability Zone and maintains a synchronous secondary replica in a different Availability Zone. In the event of planned database maintenance or an unplanned service disruption, Amazon RDS automatically fails over the SQL Server databases to the up-to-date secondary so that database operations can resume quickly without any manual intervention. If an Availability Zone failure or instance failure occurs, your availability impact is limited to the time that automatic failover takes to complete: typically 60-120 seconds for database mirroring and 10-15 seconds for availability groups.

When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point to the secondary, which is in turn promoted to become the new primary. The canonical name record (or endpoint name) is an entry in DNS. We recommend that you implement retry logic for database connection errors in your application layer and connect by using the canonical name rather than attempting to connect directly to the IP address of the DB instance. We recommend this approach because, during a failover, the underlying IP address changes to reflect the new primary DB instance.
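The sketch below shows one way to apply that recommendation, assuming a Python application that connects through pyodbc and the Microsoft ODBC driver. The endpoint, database name, and credentials are placeholders; the important points are that the connection string references the RDS DNS endpoint (so DNS resolves to whichever instance is currently primary) and that transient connection errors are retried rather than surfaced immediately.

```python
import time
import pyodbc  # assumes the Microsoft ODBC Driver 17 for SQL Server is installed

# Placeholder endpoint and credentials; always use the RDS endpoint (CNAME),
# never a resolved IP address, so a Multi-AZ failover is transparent to the app.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=AppDb;UID=app_user;PWD=example-password"
)

def connect_with_retry(attempts=5, base_delay=2.0):
    """Retry transient connection failures, such as those seen during a failover."""
    for attempt in range(1, attempts + 1):
        try:
            return pyodbc.connect(CONN_STR, timeout=10)  # 10-second login timeout
        except (pyodbc.OperationalError, pyodbc.InterfaceError):
            if attempt == attempts:
                raise
            time.sleep(base_delay * attempt)  # simple linear backoff between attempts

connection = connect_with_retry()
```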
Amazon RDS automatically performs a failover in the event of any of the following:

• Loss of availability in the primary Availability Zone
• Loss of network connectivity to the primary DB node
• Compute unit failure on the primary DB node
• Storage failure on the primary DB node

Amazon RDS Multi-AZ deployments don't fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors. For example, suppose that a customer workload causes high resource usage on an instance, and that SQL Server times out and triggers failover of individual databases. In this case, RDS recovers the failed databases back to the primary instance.

When operations such as instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, they are applied first on the secondary instance, prior to the automatic failover of the primary instance, for enhanced availability. Due to the failover optimization of SQL Server, certain workloads can generate greater I/O load on the mirror than on the principal, particularly for database mirroring (DBM) deployments. This behavior can result in higher IOPS on the secondary instance. We recommend that you consider the maximum IOPS needs of both the primary and the secondary when provisioning the storage type and IOPS of your RDS for SQL Server instance.

Monitoring and Management

Amazon CloudWatch collects many Amazon RDS-specific metrics. You can look at these metrics using the AWS Management Console, the AWS CLI (using the get-metric-statistics command), the AWS API, or PowerShell (using the Get-CWMetricStatistics cmdlet). In addition to the system-level metrics collected for Amazon EC2 instances (such as CPU usage, disk I/O, and network I/O), the Amazon RDS metrics include many database-specific metrics, such as database connections, free storage space, read and write I/O per second, read and write latency, read and write throughput, and available RAM. For a full, up-to-date list, see Amazon RDS Dimensions and Metrics in the Amazon CloudWatch Developer Guide.

In Amazon CloudWatch, you can also configure alarms on these metrics to trigger notifications when the state changes. An alarm watches a single metric over a time period you specify and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. Notifications are sent to Amazon Simple Notification Service (Amazon SNS) topics or AWS Auto Scaling policies. You can configure these alarms to notify database administrators by email or SMS text message when they are triggered. You can also use notifications as triggers for custom automated response mechanisms or workflows that react to alarm events; however, you need to implement such event handlers separately.
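As an illustration of that alarm configuration, the following sketch uses the boto3 CloudWatch client to alarm when free storage space on an RDS for SQL Server instance stays low. The instance identifier, SNS topic ARN, and threshold are placeholders to replace with your own values.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder identifiers; the alarm notifies an SNS topic when free storage
# stays below 20 GiB for three consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="rds-sqlserver-free-storage-low",
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-sqlserver-instance"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=20 * 1024**3,  # bytes
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:dba-alerts"],
)
```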
Amazon RDS for SQL Server also supports Enhanced Monitoring. Amazon RDS provides metrics in near-real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your instance using the console, or consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring system of your choice. Enhanced Monitoring gathers its metrics from an agent on the instance. Enhanced Monitoring gives you deeper visibility into the health of your Amazon RDS instances in near-real time, providing a comprehensive set of 26 new system metrics and aggregated process information at a detail level of up to 1 second. These monitoring metrics cover a wide range of instance aspects, such as the following:

• General metrics, like uptime and instance and engine version
• CPU utilization, such as idle, kernel, or user time percentage
• Disk subsystem metrics, including utilization, read and write bytes, and number of I/O operations
• Network metrics, like interface throughput and read and write bytes
• Memory utilization and availability, including physical, kernel, commit charge, system cache, and SQL Server footprint
• System metrics, consisting of number of handles, processes, and threads
• Process list information, grouped by OS processes, RDS processes (management, monitoring, and diagnostics agents), and RDS child processes (SQL Server workloads)

Because Enhanced Monitoring delivers metrics to CloudWatch Logs, this feature incurs standard CloudWatch Logs charges. These charges depend on a number of factors:

• The number of DB instances sending metrics to CloudWatch Logs
• The level of detail of metrics sampling; finer detail results in more metrics being delivered to CloudWatch Logs
• The workload running on the DB instance; more compute-intensive workloads have more OS process activity to report

More information and instructions on how to enable the feature can be found in Viewing DB Instance Metrics in the Amazon RDS User Guide.

In addition to CloudWatch metrics, you can use Performance Insights and native SQL Server performance monitoring tools such as dynamic management views, the SQL Server error log, and both client-side and server-side SQL Server Profiler traces. Performance Insights expands on existing Amazon RDS monitoring features to illustrate your database's performance and help you analyze any issues that affect it. With the Performance Insights dashboard, you can visualize the database load and filter the load by waits, SQL statements, hosts, or users. More information can be found in Using Amazon RDS Performance Insights in the Amazon Relational Database Service User Guide.

Amazon RDS for SQL Server provides two administrative time windows designed for effective management, described following. The service assigns default time windows to each DB instance if these aren't customized.

• Backup window: The backup window is the period of time during which your instance is backed up. Because backups might have a small performance impact on the operation of the instance, we recommend that you set the window for a time when this has minimal impact on your workload.
• Maintenance window: The maintenance window is the period of time during which instance modifications (such as implementing pending changes to storage or CPU class for the instance) and software patching occur. Your instance might be restarted during this window if there is a scheduled activity pending that requires a restart, but that is not always the case. We recommend scheduling the maintenance window for a time when your instance has the least traffic or a potential restart is least disruptive. A sketch of configuring both windows, along with Enhanced Monitoring and Performance Insights, follows this list.
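The following sketch, using the boto3 RDS client, shows how these settings might be applied to an existing instance. The instance identifier and monitoring role ARN are placeholders, and Enhanced Monitoring requires an IAM role that Amazon RDS can use to publish to CloudWatch Logs; treat the parameter values as examples rather than recommendations.

```python
import boto3

rds = boto3.client("rds")

# Placeholder identifiers; adjust window times (UTC) to your own low-traffic periods.
rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-instance",
    PreferredBackupWindow="03:00-04:00",
    PreferredMaintenanceWindow="sun:05:00-sun:06:00",
    MonitoringInterval=1,  # seconds between Enhanced Monitoring samples; 0 disables it
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    EnablePerformanceInsights=True,
    ApplyImmediately=True,
)
```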
Amazon RDS for SQL Server comes with several built-in management features:

• Automated backup and recovery. Amazon RDS automatically backs up all databases of your instances. You can set the backup retention period when you create an instance. If you don't set the backup retention period, Amazon RDS uses a default retention period of one day. You can modify the backup retention period; valid values are 0 (for no backup retention) to a maximum of 35 days. Automated backups occur daily during the backup window. If you select zero days of backup retention, point-in-time log backups are not taken. Amazon RDS uses these periodic data backups in conjunction with your transaction logs (backed up every 5 minutes) to enable you to restore your DB instance to any second during your retention period, up to the LatestRestorableTime, typically up to the last 5 minutes.
• Push-button scaling. With a few clicks, you can change the instance class to increase or decrease the size of your instance's compute capacity, network capacity, and memory. You can choose to make the change immediately or schedule it for your next maintenance window.
• Automatic host replacement. Amazon RDS automatically replaces the compute instance powering your deployment in the event of a hardware failure.
• Automatic minor version upgrade. Amazon RDS keeps your database software up to date. You have full control over whether Amazon RDS deploys such patching automatically, and you can disable this option to prevent that. Regardless of this setting, publicly accessible instances with open security groups might be force-patched when security patches are made available by vendors, to ensure the safety and integrity of customer resources and our infrastructure. The patching activity occurs during the weekly 30-minute maintenance window that you specify when you provision your database (and that you can alter at any time). Such patching occurs infrequently, and your database might become unavailable during part of your maintenance window when a patch is applied. You can minimize the downtime associated with automatic patching if you run in Multi-AZ mode. In this case, the maintenance is generally performed on the secondary instance. When it is complete, the secondary instance is promoted to primary. The maintenance is then performed on the old primary, which becomes the secondary.
• Preconfigured parameters and options. Amazon RDS provides a default set of DB parameter groups and also option groups for each SQL Server edition and version. These groups contain configuration parameters and options, respectively, which allow you to tune the performance and features of your instance. By default, Amazon RDS provides an optimal configuration set suitable for most workloads, based on the class of the instance that you selected. You can create your own parameter and option groups to further tune the performance and features of your instance.

You can administer Amazon RDS for SQL Server databases using the same tools you use with on-premises SQL Server instances, such as SQL Server Management Studio. However, to provide you with a more secure and stable managed database experience, Amazon RDS doesn't provide desktop or administrator access to instances, and it restricts access to certain system procedures and tables that require advanced privileges, such as those granted to sa. Commands to create users, rename users, grant and revoke permissions, and set passwords work as they do in Amazon EC2 (or on-premises) databases. The administrative commands that RDS doesn't support are listed in Unsupported SQL Server Roles and Permissions in Amazon RDS.

Even though direct file-system-level access to the RDS SQL Server instance is not available, you can always migrate your data out of RDS instances. You can use tools like the Microsoft SQL Server Database Publishing Wizard to download the contents of your databases into flat T-SQL files. You can then load these files into any other SQL Server instances, or store them as backups in Amazon Simple Storage Service (Amazon S3), in Amazon S3 Glacier, or on premises. In addition, you can use the AWS Database Migration Service to move data to and from Amazon RDS.

You can also use native backup and restore through S3. You can use native backups to migrate databases to Amazon RDS for SQL Server instances, or back up your RDS for SQL Server instances to S3 to copy to another SQL Server instance or to retain offline. For more details on how this works and the permissions required, see Importing and Exporting SQL Server Databases.
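As an illustration of native backup to S3, the following sketch runs the RDS-provided stored procedures over an ordinary SQL Server connection. The procedure names and parameters follow the native backup/restore feature as generally documented, but the database name, bucket ARN, and connection string are placeholders, and the SQLSERVER_BACKUP_RESTORE option must already be enabled on the instance's option group; verify the details in the RDS documentation before relying on this.

```python
import pyodbc  # assumes the Microsoft ODBC Driver 17 for SQL Server is installed

# Placeholder connection string pointing at the RDS endpoint.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=master;UID=app_user;PWD=example-password"
)

conn = pyodbc.connect(CONN_STR, autocommit=True)
cursor = conn.cursor()

# Start a full native backup of the AppDb database to a placeholder S3 location.
cursor.execute(
    "exec msdb.dbo.rds_backup_database "
    "@source_db_name = ?, @s3_arn_to_backup_to = ?",
    "AppDb",
    "arn:aws:s3:::my-backup-bucket/AppDb-full.bak",
)

# The backup runs asynchronously; poll the task status until it completes.
for row in cursor.execute("exec msdb.dbo.rds_task_status @db_name = ?", "AppDb"):
    print(row)
```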
Managing Cost

Managing the cost of the IT infrastructure is often an important driver for cloud adoption. AWS makes running SQL Server on Amazon a cost-effective proposition by providing a flexible, scalable environment and pricing models that allow you to pay for only the capacity you consume at any given time. Amazon RDS further reduces your costs by reducing the management and administration tasks that you have to perform. Generally, the cost of operating an Amazon RDS instance depends on the following factors:

• The AWS Region the instance is deployed in
• The instance class and storage type selected for the instance
• The Multi-AZ mode of the instance
• The pricing model
• How long the instance is running during a given billing period

You can optimize the operating costs of your RDS workloads by controlling the factors listed above. AWS services are available in multiple Regions across the world. In Regions where our costs of operating our services are lower, we pass the savings on to you. Thus, Amazon RDS hourly prices for the different instance classes vary by Region. If you have the flexibility to deploy your SQL Server workloads in multiple Regions, the potential savings from operating in one Region as compared to another can be an important factor in choosing the right Region.

Amazon RDS also offers different pricing models to match different customer needs:

• On-Demand Instance pricing allows you to pay for Amazon RDS DB instances by the hour with no term commitments. You incur a charge for each hour a given DB instance is running. If your workload doesn't need to run 24/7, or you are deploying temporary databases for staging, testing, or development purposes, On-Demand Instance pricing can offer significant advantages.
• Reserved Instances (RIs) allow you to lower costs and reserve capacity. Reserved Instances can save you up to 60 percent over On-Demand rates when used in steady state, which tends to be the case for many databases. They can be purchased for 1- or 3-year terms. If your SQL Server database is going to be running more than 25 percent of the time each month, you will most likely benefit financially from using a Reserved Instance. Overall savings are greater when committing to a 3-year term compared to running the same workload using On-Demand Instance pricing for the same period of time. However, the length of the term needs to be balanced against projections of growth, because the commitment is for a specific instance class. If you expect that your compute and memory needs are going to grow over time for a given DB instance, you might want to opt for a
shorter 1 year term and weigh the savings from the Reserved Instance against the overhead of being over provisione d for some part of that term The following pricing options are available for RDS Reserved Instances : • With All Upfront Reserved Instances you pay for the entire Reserved Instance with one upfront payment This option provides you with the largest discount compared to On Demand Instance pricing • With Partial Upfront Reserved Instances you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term • With No Upfront Reserved Instanc es you don’t make any upfront payments but are charged a discounted hourly rate for the instance for the duration of the Reserved Instance term This option still provides you with a significant discount compared to On Demand Instance pricing but the di scount is usually less than for the other two Reserved Instance pricing options Note that like in Amazon EC2 in Amazon RDS you can issue a stop command to a standalone DB instance and keep the instance in a stopped state to avoid incurring compute charge s You can't stop an Amazon RDS for SQL Server DB instance in a Multi AZ configuration instead you can terminate the instance take a final snapshot prior to termination and recreate a new Amazon RDS instance from the snapshot when ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 23 you need it or remov e the Multi AZ configuration first and then stop the instance Note that after 7 days your stopped instance will re start so that any pending maintenance can be applied Additionally you can use several other strategies to help optimize costs: • Terminate DB instances with a last snapshot when they are not needed then reprovision them from that snapshot when they need to be used again For example some development and test databases can be terminated at night and on weekends and reprovisioned on weekdays in the morning Alternatively use the stop feature mentioned above to turn off the database for the weekend • Scale down the size of your DB instance during off peak times by using a smaller instance class See the Amazon RDS for SQL Server Pricing webpage for up todate pricing information for all pricing models and instance classes Microsoft SQL Server on Amazon EC2 You can also choose to run a Microsoft SQL Server on Amazon EC2 as described in the following sections Starting a SQL Server Instance on Amazon EC2 You can start a SQL Server DB instance on Amazon EC2 in several ways : • Interactively using the AWS Manageme nt Console • Programmatically using AWS CloudFormation templates • Using AWS SDKs and the AWS Command Line Interface (AWS CLI) • Using the PowerShell For the procedure to launch Amazon EC2 using the AWS Management Consol e see Launch an Instan ce Check the below useful bullets for launching Amazon EC2 for running SQL Server instance ArchivedAmazon Web Services Deploying Micros oft SQL Server on Amazon Web Services Page 24 • You can deploy a SQL Server instance on Amazon EC2 using an Amazon Machine Image (AMI) An AMI is simply a packaged environment that includes all the necessary software to set up and boot your instance Some AMIs have just the operating system (for example Windows Server 2019 ) and others have the operating system and a version and edition of SQL Server (Windows Server 2019 and SQL Server 201 7 Standard Edition SQL Server 2017 on Ubuntu and so on) We recommend that you use the AMIs available at Windows A MIs 
These are available in all AWS Regions Some AMIs include an installation of a specific version and edition of SQL Server When running an Amazon EC2 instance based on one of these AMIs the SQL Server licensing costs are included in the hourly pri ce to run the Amazon EC2 instance • Other AMIs install just the Microsoft Windows operating system This type of AMI allows you the flexibility to perform a separate custom installation of SQL Server on the Amazon EC2 instance and bring your own license (B YOL) of Microsoft SQL Server if you have qualifying licenses For additional information on BYOL qualification criteria see License Mobility • Consider all five performance charact eristics (vCPU Memory Instance Storage Network Bandwidth and EBS Bandwidth) of Amazon EC2 instances when selecting the EC2 instance See Amazon EC2 Instance Types for more information • Depending on the type of SQL Server deployment for example stand alone Windows Failover Clustering and Always On Availability Groups SQL Server on Linux and so on you might decide to assign one or multiple static IP addresses to your Amazon EC2 instan ce You can do this assignment in the Network interface section of Configure Instance Details • Add the appropriate storage volumes depending on your workload needs For more details on select the appropriate volume types see the Disk I/O Management section • Assign the appropriate tags to the Amazon EC2 instance We recommend that you assign tags to other Amazon resources for example Amazon Elastic Block Store (Amazon EBS) volumes to allow for more control over resou rcelevel permissions and cost allocation For best practices on tagging AWS resources see Tagging Your Amazon EC2 Resources in the Amazon EC2 User Guide ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 25 Amazon EC2 Security When you run SQL Server on Amazon EC2 instances you have the responsibility to effectively protect network access to your instances with security groups adequate operating system settings and best practices such as limiting access to open port s and using strong passwords In addition you can also configure a hostbased firewall or an intrusion detection and prevention system (IDS/IPS) on your instances As with Amazon RDS in EC2 security controls start at the network layer with the network d esign itself in EC2 VPC along with subnets security groups and network access control lists as applicable For a more detailed discussion of these features review the preceding Amazon RDS Security section Using AWS Identity and Access Management (IAM) you can control access to your Amazon EC2 resources and authorize (or deny) users the ability to manage your instances running the SQL Server database and the corresponding EBS volumes For example you can r estrict the ability to start or stop your Amazon EC2 instances to a subset of your administrators You can also assign Amazon EC2 roles to your instances giving them privileges to access other AWS resources that you control For more information on how to use IAM to manage administrative access to your instances see Controlling Access to Amazon EC2 Resources in the Amazon EC2 User Guide In an Amazon EC2 deployment of SQL Server you are also responsible for patching the OS and application stack of your instances when Microsoft or other third party vendors release new security or functional patches This patching includes work for additional support services and instances such as Active Directory servers You can encrypt the EBS data volumes of 
your SQL Server instances in Amazon EC2 This option is available to all editions of SQL Server de ployed on Amazon EC2 and is not limited to the Enterprise Edition unlike transparent data encryption ( TDE) When you create an encrypted EBS volume and attach it to a supported instance type data stored at rest on the volume disk I/O and snapshots crea ted from the volume are all encrypted The encryption occurs on the servers that host Amazon EC2 instances transparently to your instance providing encryption of data in transit from EC2 instances to EBS storage as well Note that encryption of boot volu mes is not supported yet Your data and associated keys are encrypted using the open standard AES 256 algorithm EBS volume encryption integrates with the AWS KMS This integration allows you to use your own customer master key (CMK) for volume encryption Creating and ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 26 leveraging your own CMK gives you more flexibility including the ability to create rotate disabl e and define access controls and to audit the encryption keys used to protect your data Performance Management The performance of a relational DB instance on AWS depends on many factors including the Amazon EC2 instance type the configuration of the d atabase software the application workload and the storage configuration The following sections describe various options that are available to you to tune the performance of the AWS infrastructure on which your SQL Server instance is running Instance Si zing AWS has many different Amazon EC2 instance types available so you can choose the instance type that best fits your needs These instance types vary in size ranging from the smallest instance the t2micro with 1 vCPU 1 GB of memory and EBS only storage to the largest instance the d28xlarge with 36 vCPUs 244 GB of memory 48 TB of local storage and 10 gigabit network performance We recommend that you choose Amazon EC2 instances that best fit your workload requirements and have a good balance o f CPU memory and IO performance SQL Server workloads are typically memory bound so look at the r 5 or r5d instances also referred to as memory optimized instances If your workload is more CPU bound look at the latest compute optimized instances of th e c5 instance family See Amazon EC2 Instance types for more information You can customize the number of CPU cores for the instance You might do this to potentially optimize the licensing costs of your software with an instance that has sufficient amounts of RAM for memory intensive workloads but fewer CPU cores See Optimizing CPU Options for more inform ation One of the differentiators among all these instance types is that the m 5 r5 and c5 instance types are EBS optimized by default whereas older instance types such as the r3 family can be optionally EBS optimized You can find a detailed explanation of EBS optimized instances in the Disk I/O Management section following If your workload is network bound again look at instance families that sup port 25 gigabit network performance because these instance families also support Enhanced Networking These include the r5 z1d m5 and c5 instance families The i3en and c5n instance types even support 100 gigabit network performance Enhanced Networki ng enables you to get significantly higher packet per second (PPS) performance lower network jitter and lower latencies by using single root I/O ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 27 
virtualization (SR IOV) This feature uses a new network virtualization stack that provides higher I/O perfor mance and lower CPU utilization compared to traditional implementations See Enhanced Networking on Windows in the Amazon EC2 User Guide Disk I/O Management The same storage types available for Amazon RDS are also available when deploying SQL Server on Amazon EC2 Additionally you also have access to instance storage Because you have fine grained control over the storage volumes and strategy to use you can deploy workloads that require more than 4 TiB in size or 64000 IOPS in Amazon EC2 Multiple EBS volumes or instance storage disks can even be striped together in a software RAID configuration to aggregate both the storage size and usable IOPS beyond the capabilities of a single volume The two main Amazon EC2 storage options are as follows: • Instance store volumes: Several Amazon EC2 instance types come with a certain amount of local (directly attached) storage which is ephemeral These include R5d M5d i3 i3en and x1e instance types • Any data saved on instance storage is no longer available after you stop and restart that instance or if the underlying hardware fails which causes an instance restart to happen on a different host server This character istic makes instance storage a challenging option for database persistent storage However Amazon EC2 instances can have the following benefits: o Instance store volumes offer good performance for sequential disk access and don’t have a negative impact on your network connectivity Some customers have found it useful to use these disks to store temporary files to conserve network bandwidth o Instance types with large amounts of instance storage offer unmatched I/O performance and are recommended for database workloads as long as you implement a backup or replication strategy that addresses the ephemeral nature of this storage ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 28 • EBS volumes: Similar to Amazon RDS you can use EBS for persistent block level storage volumes Amazon EBS volumes are off instance s torage that persist s independently from the life of an instance Amazon EBS volume data is mirrored across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component You can back them up to Amazon S3 by u sing snapshots These attributes make EBS volumes suitable for data files log files and the flash recovery area Although the maximum size of an EBS volume is 16 TB you can address larger database sizes by striping your data across multiple volumes See EBS volume characteristics for more information EBSoptimized instances enable Amazon EC2 instances to fully utilize the Provisioned IOPS on an EBS volume These instances deliver dedicated throughput between Amazon EC2 and Amazon EBS depending on the instance type When attached to EBSoptimized instan ces Provisioned IOPS volumes are designed to deliver within 10 percent of their provisioned performance 999 percent of the time The combination of EBSoptimized instances and Provisioned IOPS volumes helps to ensure that instances are capable of consist ent and high EBS I/O performance See EBS optimized by default for more information Most databases with high I/O requirements should benefit from this featu re You can also use EBS optimized instances with standard EBS volumes if you need predictable bandwidth between your instances and EBS For up todate information about the availability of EBS optimized 
instances see Amazon EC2 Instance Types To scale up random I/O performance you can increase the number of EBS volumes your data resides on for example by using eight 100 GB EBS volumes instead of one 800 GB EBS volume However remember that us ing striping generally reduces the operational durability of the logical volume by a degree inversely proportional to the number of EBS volumes in the stripe set The more volumes you include in a stripe the larger the pool of data that can get corrupted if a single volume fails because the data on all other volumes of the stripe gets invalidated also EBS volume data is natively replicated so using RAID 0 (striping) might provide you with sufficient redundancy and availability No other RAID mechanism i s supported for EBS volumes Data logs and temporary files benefit from being stored on independent EBS volumes or volume aggregates because they present different I/O patterns To take advantage of additional EBS volumes be sure to evaluate the networ k load to help ensure that your instance size is sufficient to provide the network bandwidth required ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 29 I3 and i3en instances with instance storage are optimized to deliver tens of thousands of low latency random I/O operations per second (IOPS) to applica tions from direct attached SSD drives These instances provide an alternative to EBS volumes for the most I/O demanding workloads Amazon EC2 offers many options to optimize and tune your I/O subsystem We encourage you to benchmark your application on se veral instance types and storage configurations to select the most appropriate configuration For EBS volumes we recommend that you monitor the CloudWatch average queue length metric of a given volume and target an average queue length of 1 for every 500 IOPS for volumes up to 2000 IOPS and a length between 4 and 8 for volumes with 2 000 to 4 000 IOPS Lower metrics indicate overprovisioning and higher numbers usually indicate your storage system is overloaded High Availability High availability is a d esign and configuration principle to help protect services or applications from single points of failure The goal is for services and applications to continue to function even if underlying physical hardware fails or is removed or replaced We will review three native SQL Server features that improve database high availability and ways to deploy these features on AWS Log Shipping Log shipping provides a mechanism to automatically send transaction log backups from a primary database on one DB instance to one or more secondary databases on separate DB instances Although log shipping is typically considered a disaster recovery feature it can also provide high availability by allowing secondary DB instances to be promoted as the primary in the e vent of a failure of the primary DB instance Log shipping offers you many benefits to increase the availability of log shipped databases Besides the benefits of disaster recovery and high availability already mentioned log shipping also provides access to secondary databases to use as read only copies of the database This feature is available between restore jobs It can also allow you to configure a lag delay or a longer delay time which can allow you to recover accidentally changed data on the prima ry database before these changes are shipped to the secondary database We recommend running the primary and secondary DB instances in separate Availability Zones and optionally deploying an 
optional monitor instance to track all the ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 30 details of log shippi ng Backup copy restore and failure events for a log shipping group are available from the monitor instance Database Mirroring Database mirroring is a feature that provides a complete or almost complete mirror of a database depending on the operating mode on a separate DB instance Database mirroring is the technology used by Amazon RDS to provide Multi AZ support for Amazon RDS for SQL Server This feature increases the availability and protection of mirrored databases and provides a mechanism to ke ep mirrored databases available during upgrades In database mirroring SQL Servers can take one of three roles: the principal server which hosts the read/write principal version of the database; the mirror server which hosts the mirror copy of the princ ipal database; and an optional witness server The witness server is only available in high safety mode and monitors the state of the database mirror and automates the failover from the primary database to the mirror database A mirroring session is establ ished between the principal and mirror servers which act as partners They perform complementary roles as one partner assumes the principal role while the other partner assumes the mirror role Mirroring performs all inserts updates and deletes that ar e executed against the principal database on the mirror database Database mirroring can either be a synchronous or asynchronous operation These operations are performed in the two mirroring operating modes: • Highsafety mode uses synchronous operation In this mode the database mirror session synchronizes the inserts updates and deletes from the principal database to the mirror database as quickly as possible using a synchronous operation As soon as the database is synchronized the transaction is comm itted on both partners This mode has increased transaction latency as each transaction needs to be committed on both the principal and mirror databases Because of this high latency we recommend that partners be in the same or different Availability Zone s hosted within the same AWS Region when you use this operating mode ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon W eb Services Page 31 • Highperformance mode uses asynchronous operation Using this mode the database mirror session synchronizes the inserts updates and deletes from the principal database to the mirror d atabase using an asynchronous process Unlike a synchronous operation this mode can result in a lag between the time the principal database commits the transactions and the time the mirror database commits the transactions This mode has minimum transacti on latency and is recommended when partners are in different AWS Regions SQL Server Always On Availability Groups Always On availability groups is an enterprise level feature that provides high availability and disaster recovery to SQL Server databases Always On availability groups uses advanced features of Windows Failover Cluster and the Enterprise Edition of all versions of SQL Server from SQL Server 2012 Starting in SQL Server 2016 SP1 basic availability groups are available for Standard Edition SQL Server as well (as a replacement for database mirroring) These availability groups support the failover of a set of user databases as one distinct unit or group User databases defined within an availability group consist of primary read/writ e databases along with 
multiple sets of related secondary databases These secondary databases can be made available to the application tier as read only copies of the primary databases thus providing a scale out architecture for read workloads You can a lso use the secondary databases for backup operations You can implement SQL Server Always On availability groups on Amazon Web Services using services like Windows Server Failover Clustering (WSFC) Amazon EC2 Amazon VPC Active Directory and DNS Alway s On cluster s require multiple subnets and need the MultiSubnetFailover=True parameter in the connection string to work correctly See How do I create a SQL Server Always On availability group cluster in the AWS Cloud? for how to deploy SQL Server Always On availability Groups For details on how to deploy SQL Server Always On availability groups in AWS using CloudFormation see the SQL Server on the AWS Cloud: Quick Start Reference Deployment ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 32 Figure 2: SQL Server Always On availability group Monitoring and Management Amazon CloudWatch is an AWS instance monitoring service that provides detailed CPU disk and network utilization metrics for each Amazon EC2 instance and EBS volume Using these metrics you can perform detailed reporting and management This data is available in the AWS Management Console and also the API Using the API allows for infrastructure automation and orchestration based on load metrics Additionally Amazon CloudWatch supports custom metrics such as memory utilization or disk utilizations which are metrics visible only f rom within the instance You can publish your own relevant metrics to the service to consolidate monitoring information You can also push custom logs to CloudWatch Logs to monitor store and access your log files for Amazon EC2 SQL Server instances You can then retrieve the associated log data from CloudWatch Logs using the Amazon CloudWatch console the CloudWatch Logs commands in the AWS CLI or the ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 33 CloudWatch Logs SDK This approach allows you to track log events in real time for your SQL Server inst ances As with Amazon RDS you can configure alarms on Amazon EC2 Amazon EBS and custom metrics to trigger notifications when the state changes An alarm tracks a single metric over a time period you specify and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods Notifications are sent to Amazon SNS topics or AWS Auto Scaling policies You can configure these alarms to notify database administrators by email or SMS text message when they get triggered In addition you can use Microsoft and any third party monitoring tools that have built in SQL Server monitoring capabilities Amazon EC2 SQL Server monitoring can be integrated with System Center Operations Manager (SCOM) Open source monitoring frameworks such as Nagios can also be run on Amazon EC2 to monitor your whole AWS environment including your SQL Server databases The management of a SQL Server database on Amazon EC2 is similar to the management of an on premises database You can use SQL Server Management Studio SQL Server Configuration Manager SQL Server Profiler and other Microsoft and third party tools to perform administration or tuning tasks AWS also offers the AWS Add ins for Microsoft System Center to extend the functionality of your existing Microsoft System Center implementation to 
monitor and control AWS resources from the same interface as your on premises resources These addins are currently av ailable at no additional cost for SCOM versions 2007 and 2012 and System Center Virtual Machine Manager (SCVMM) Although you can use Amazon EBS snapshots as a mechanism to back up and restore EBS volumes the service does not integrate with the Volume Shadow Copy Service (VSS) You can take a snapshot of an attached volume that is in use However VSS integration is required to ensure that the disk I/O of SQL Server is temporarily paused during the snapshot process Any data that has not been per sisted to disk by SQL Server or the operating system at the time of the EBS snapshot is excluded from the snapshot Lacking coordination with VSS there is a risk that the snapshot will not be consistent and the database files can potentially get corrupte d For this reason we recommend using third party backup solutions that are designed for SQL Server workloads ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 34 Managing Cost AWS elastic and scalable infrastructure and services make running SQL Server on Amazon a cost effective proposition by tracking d emand more closely and reducing overprovisioning As with Amazon RDS the costs of running SQL Server on Amazon EC2 depend on several factors Because you have more control over your infrastructure and resources when deploying SQL Server on Amazon EC2 the re are a few additional dimensions to optimize cost on compared to Amazon RDS: • The AWS Region the instance is deployed in • Instance type and EBS optimization • The type of instance tenancy selected • The high availability solution selected • The storage type and size selected for the EC2 instance • The Multi AZ mode of the instance • The pricing model • How long it is running during a given billing period • Underlying Operating system (Windows or Linux) As with Amazon RDS Amazon EC2 hourly instance costs vary by the Region If you have flexibility about where you can deploy your workloads geographically we recommend deploying your workload in the Region with the cheapest EC2 costs for your particular us e case Different instance types have different hourly charges Generally current generation instance types have lower hourly charges compared to previous generation instance types along with better performance due to newer hardware architectures We recommend that you test your workloads on new instance types as these become available and plan to migrate your workloads to new instance types if the c ost vs performance ratio makes sense for your use case Many EC2 instance types are available with the E BSoptimized option This option is available for an additional hourly surcharge and provides additional dedicated networking capacity for EBS I/O This dedicated capacity ensures a predictable amount of networking capacity to sustain predictable EBS I/O Some current generation ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 35 instance types such as the C4 M4 and D2 instance types are EBS optimized by default and don’t have an additional surcharge for the optimization Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that’s dedicated to a single customer Your Dedicated Instances are physically isolated at the host hardware level from your instances that aren’t Dedicated Instances and from instances that belong to other AWS accounts We recommend deploying EC2 SQL Server inst ances in dedicated 
tenancy if you have certain regulatory needs. Dedicated tenancy has a per-Region surcharge for each hour a customer runs at least one instance in dedicated tenancy. The hourly cost for instance types operating in dedicated tenancy is different from standard tenancy. Up-to-date pricing information is available on the Amazon EC2 Dedicated Instances pricing page.

You also have the option to provision EC2 Dedicated Hosts. These are physical servers with EC2 instance capacity fully dedicated to your use. Dedicated Hosts can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses. For more information, see Amazon EC2 Dedicated Hosts and Bring License to AWS.

Amazon EC2 Reserved Instances allow you to lower costs and reserve capacity. Reserved Instances can save you up to 70 percent over On-Demand rates when used in steady state. They can be purchased for one- or three-year terms. If your SQL Server database is going to be running more than 60 percent of the time, you will most likely benefit financially from using a Reserved Instance. Unlike with On-Demand pricing, the capacity reservation is made for the entire duration of the term, whether a specific instance is using the reserved capacity or not. The following pricing options are available for EC2 Reserved Instances:

• All Upfront Reserved Instances: you pay for the entire Reserved Instance with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing.
• Partial Upfront Reserved Instances: you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term.
• No Upfront Reserved Instances: you don't make any upfront payments but are charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. This option still provides you with a significant discount compared to On-Demand Instance pricing, but the discount is usually less than for the other two Reserved Instance pricing options.

Additionally, the following options can be combined to reduce your cost of operating SQL Server on EC2:

• Use the Windows Server with SQL Server AMIs, where licensing is included. The cost of the SQL Server license is included in the hourly cost of the instance. You are only paying for the SQL Server license when the instance is running. This approach is especially effective for databases that are not running 24/7 and for short projects.
• Shut down DB instances when they are not needed. For example, some development and test databases can be shut down at night and on weekends and restarted on weekdays in the morning.
• Scale down the size of your databases during off-peak times.
• Use the Optimizing CPU options.

Caching

Whether using SQL Server on Amazon EC2 or Amazon RDS, SQL Server users confronted with heavy workloads should look into reducing the database load by caching data, so that the web and application servers don't have to repeatedly access the database for common or repeated datasets. Deploying a caching layer between the business logic layer and the database is a common architectural design pattern to reduce the amount of read traffic and connections to the database itself. The effectiveness of the cache depends largely on the following aspects:

• Generally, the more read-heavy the query patterns of the application are on the database, the more effective caching can be.
• Commonly, the more repetitive query patterns are, with queries returning infrequently changing datasets, the more you can benefit from caching.

Leveraging caching usually requires changes to applications. The logic of checking, populating, and updating a cache is normally implemented in the application, data and database abstraction layer, or Object-Relational Mapper (ORM); a sketch of this cache-aside pattern follows the list of caching engines below.

Several tools can address your caching needs. You have the option to use a managed service, similar to Amazon RDS but for caching engines. You can also choose from different caching engines that have slightly different feature sets:

• Amazon ElastiCache: In a similar fashion to Amazon RDS, ElastiCache allows you to provision fully managed caching clusters supporting both Memcached and Redis. ElastiCache simplifies and offloads the management, monitoring, and operation of a Memcached or Redis environment, enabling you to focus on the differentiating parts of your applications.
• Memcached: An open-source, high-performance, distributed in-memory object caching system. Memcached is an in-memory object store for small chunks of arbitrary data (strings, objects), such as results of database calls. Memcached is widely adopted and mostly used to speed up dynamic web applications by alleviating database load.
• Redis: An open-source, high-performance, in-memory key-value NoSQL data engine. Redis stores structured key-value data and provides rich query capabilities over your data. The contents of the data store can also be persisted to disk. Redis is widely adopted to speed up a variety of analytics workloads by storing and querying more complex or aggregate datasets in memory, relieving some of the load off backend SQL databases.
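The following sketch illustrates the cache-aside logic mentioned above, assuming a Python application, an ElastiCache for Redis endpoint reachable from the application servers, and a SQL Server table; the endpoint, connection string, query, and TTL are all placeholders rather than recommendations.

```python
import json
import pyodbc  # assumes the Microsoft ODBC Driver 17 for SQL Server is installed
import redis   # redis-py client, pointed at an ElastiCache for Redis endpoint

cache = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.internal,1433;DATABASE=AppDb;"
    "UID=app_user;PWD=example-password"
)

def get_product(product_id: int, ttl_seconds: int = 300) -> dict:
    """Cache-aside read: serve from Redis when possible, fall back to SQL Server."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit, no database round trip

    row = conn.cursor().execute(
        "SELECT ProductID, Name, ListPrice FROM dbo.Product WHERE ProductID = ?",
        product_id,
    ).fetchone()
    result = {"id": row.ProductID, "name": row.Name, "price": float(row.ListPrice)}
    cache.set(key, json.dumps(result), ex=ttl_seconds)  # populate the cache with a TTL
    return result
```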
Hybrid Scenarios and Data Migration

Some AWS customers already have SQL Server running in their on-premises or colocated data center but want to use the AWS Cloud to enhance their architecture to provide a more highly available solution, or one that offers disaster recovery. Other customers are looking to migrate workloads to AWS without incurring significant downtime. These efforts can often stretch over a significant amount of time. AWS offers several services and tools to assist customers in these use cases, and SQL Server has several replication technologies that offer high availability and disaster recovery solutions. These features differ depending on the SQL Server version and edition.

Amazon RDS on VMware lets you deploy managed databases in on-premises VMware environments using the Amazon RDS technology enjoyed by hundreds of thousands of AWS customers. Amazon RDS provides cost-efficient and resizable capacity while automating time-consuming administration tasks, including hardware provisioning, database setup, patching, and backups, freeing you to focus on your applications. RDS on VMware brings these same benefits to your on-premises deployments, making it easy to set up, operate, and scale databases in VMware vSphere private data centers, or to migrate them to AWS. RDS on VMware allows you to utilize the same simple interface for managing databases in on-premises VMware environments as you would use in AWS. You can easily replicate RDS on VMware databases to RDS instances in AWS, enabling low-cost hybrid deployments for disaster recovery, read replica bursting, and optional long-term backup retention in Amazon Simple Storage Service (S3). Amazon RDS on VMware is
supporting Microsoft SQL Server PostgreSQL MySQL and MariaDB databases with Oracle to follow in the future Backups to the Cloud AWS storage solutions allow you to pay for only what you need AWS doesn’t require capacit y planning purchasing capacity in advance or any large upfront payments You get the benefits of AWS storage solutions without the upfront investment and hassle of setting up and maintaining an on premises system Amazon Simple Storage Service (Amazon S3 ) Using Amazon S3 you can take advantage of the flexibility and pricing of cloud storage S3 gives you the ability to back up SQL Server databases to a highly secure available durable reliable storage solution Many third party backup solutions are des igned to securely store SQL Server backups in Amazon S3 You can also design and develop a SQL Server backup solution yourself by using AWS tools like the AWS CLI AWS Tools for Windows PowerShell or a wide variety of SDKs for NET or Java and also the A WS Toolkit for Visual Studio AWS Storage Gateway AWS Storage Gateway is a service connecting an on premises software appliance with cloud based storage to provide seamless and secure integration between an organization’s on premises IT environment and AWS ’s storage infrastructure The service allows you to securely store data in the AWS Cloud for scalable and cost effective storage AWS Storage Gateway supports open standard storage protocols that work with your existing applications It provides low laten cy performance by maintaining frequently accessed data on premises while securely storing all of your data encrypted in Amazon S3 AWS Storage Gateway enables your existing on premises –to–cloud ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 39 backup applications to store primary backups on Amazon S3’s sc alable reliable secure and cost effective storage service SQL Server Log Shipping Between On Premises and Amazon EC2 Some AWS customers have already deployed SQL Server using a Windows Server Failover Cluster design in an on premises or colocated facility This approach provides high availability in the event of component failure within a data center but doesn’t protect against a significant outage impacting multiple components or the entire data center Other AWS customers have been using SQL Server synchronous mirroring to provide a high availability solution in their on premises data center Again this provides high availability in the event of component failure within the data center but doesn’t protect against a significant outage impactin g multiple components or the entire data center You can extend your existing on premises high availability solution and provide a disaster recover y solution with AWS by using the native SQL Server feature of log shipping SQL Server transaction logs can s hip from on premises or colocated data centers to a SQL Server instance running on an Amazon EC2 instance within a VPC This data can be securely transmitted over a dedicated network connection using AWS Direct Connect or over a secure VPN tunnel Once shi pped to the Amazon EC2 instance these transaction log backups are applied to secondary DB instances You can configure one or multiple databases as secondary databases An optional third Amazon EC2 instance can be configured to act as a monitor an instan ce that monitors the status of backup and restore operations and raises events if these operations fail ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 40 Figure 3: 
Hybrid SQL Server Log Shipping SQL Server Always On Availability Groups Between OnPremises and Amazon EC2 SQL Server Always On availability groups is an advanced enterprise level feature to provide high availability and disaster recovery solutions This feature is available when deploying the Enterprise Edition of SQL Server 2012 2014 2016 or 2017 within the AWS Cloud on Amazon EC2 or on physical or virtual machines deployed in on premises or colocated data centers SQL Server 201 6 and SQL Server 201 7 standard edition provides basic high availability two node single database failover non readable secondary You can also setup th e Always On availability groups on Linux based SQL Server by using PaceMaker for clustering instead of using the Windows Server Failover Clustering (WSFC) If you have existing onpremises deployments of SQL Server Always On availability groups you might want to use the AWS Cloud to provide an even higher level of availability and disaster recovery To do so you can extend your data center into a VPC by using a dedicated network connection like AWS Direct Connect or setting secure VPN tunnels between thes e two environments Consider the following points when planning a hybrid implementation of SQL Server Always On availability groups: • Establish secure reliable and consistent network connection between on premises and AWS (using AWS Direct Connect or VPN ) • Create a VPC based on the Amazon VPC service ArchivedAmazon Web Services Deploy ing Microsoft SQL Server on Amazon Web Services Page 41 • Use Amazon VPC route tables and security groups to enable the appropriate communicate between the new environments • Extend Active Directory domains into the VPC by deploying domain controllers as Amazon EC2 instances or using the AWS Directory Service AD Connector service • Use synchronous mode between SQL Server instances within the same environment (for example all instances on premises or all instances in AWS) • Use asynchronous mode between SQL Server instances in different environments (for example instance in AWS and on premises) Figure 4: Always On availability groups You can also use the distributed availability groups This type of availabilit y group is supported in SQL Server 2016 and later versions Distributed availability groups span two separate availability groups and you can use them for AWS as a DR solution or migrating on premises Amazon EC2 ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 42 Figure 5: Hybrid Windows Server Failover Cluster AWS Database Migration Service AWS Database Migration Service helps you migrate databases to AWS easily and securely When you use the AWS Database Migration Service the source database remains fully operational during the migration minimizing downtime to applications that rely on the database You can begin a database migration with just a few clicks in the AWS Management Console Once the migration has started AWS manages many of the complexities of the migration process like data type transformation compression and parallel transfer (for faster data transfer) while ensuring that data changes to the source database that occur during the migration process are automatically replicated to the target The service is intended to support migrations to and from AWS hosted databases where both the source and destination engine are the same and also heterogeneous data sources Comparison of Microsoft SQL Server Feature Availability on AWS The following t able shows a side byside comparison of 
available features of SQL Server in the AWS environment ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Servi ces Page 43 Table 2: SQL Server features on AWS Amazon RDS Amazon EC2 SQL Server Editions Supported Versions Supported Versions Express 2012 2014 2016 2017 2012 2014 2016 2017 Web 2012 2014 2016 2017 2012 2014 2016 2017 Standard 2012 2014 2016 2017 2012 2014 2016 2017 Enterprise 2012 2014 2016 2017 2012 2014 2016 2017 SQL Server Editions Installation Method Installa tion Method Express N/A AMI Manual install Web N/A AMI Manual install Standard N/A AMI Manual install Enterprise N/A AMI Manual install Manageability Benefits Supported Supported Managed Automated Backups Yes No (need to configure and manage maintenance plans or use third party solutions) Multi AZ with Automated Failover Yes Enterprise Edition only (with manual configuration of Always On Availability Groups) Builtin Instance and Database Monitoring and Metrics Yes No (push your own metrics to CloudWatch or use third party solution) Automatic Software Patching Yes No Preconfigured Parameters Yes No (default SQL Server installation only) DB Event Notifications Yes No (manually track and manage DB events) SQL Server Feature Supported Supported SQL Authentication Yes Yes Windows Authentication Yes Yes ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 44 Amazon RDS Amazon EC2 TDE (encryption at rest) Yes (Enterprise Edition only) Yes (Enterprise Edition only) Encrypted Storage using AWS KMS Yes (all editions except Express ) Yes SSL (encryption in transit) Yes Yes Database Replication No (Limited Push Subscription) Yes Log Shipping No Yes Database Mirroring Yes (Multi AZ) Yes Always On Availability Groups Yes Yes Max Number of DBs per Instance Depends on the instance size and MultiAZ configuration None Rename existing databases Yes (Single AZ only) Yes (not available for databases in Availability Groups or enabled for mirroring) Max Size of DB Instance 16 TiB None Min Size of DB Instance 20 GB (Web Express) 200 GB ( Standard Enterprise ) None Increase Storage Size Yes Yes BACKUP Command Yes Yes RESTORE Command Yes Yes SQL Server Analysis Services Data source only* Yes SQL Server Integration Services Data source only* Yes SQL Server Reporting Services Data source only* Yes Data Quality Services No Yes Master Data Services No Yes Custom Set Time Zones Yes Yes SQL Server Mgmt Studio Yes Yes Sqlcmd Yes Yes SQL Server Profiler Yes (client side traces) Yes SQL Server Migration Assistance Yes Yes DB Engine Tuning Advisor Yes Yes ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 45 Amazon RDS Amazon EC2 SQL Server Agent Yes Yes Safe CLR Yes Yes Fulltext search Yes (except semantic search) Yes Spatial and location features Yes Yes Change Data Capture Yes (Enterprise Edition –All versions 2016/2017 Standard edition) Yes Change Tracking Yes Yes Columnstore Indexes 2012 and later (Enterprise ) 2012 and later (Standard Enterprise ) Flexible Server Roles 2012 and later 2012 and later Partially Contained Databases 2012 and later 2012 and later Sequences 2012 and later 2012 and later THROW statement 2012 and later 2012 and later UTF16 Support 2012 and later 2012 and later New Query Optimizer 2014 and later 2014 and later Delayed Transaction Durability (lazy commit) 2014 and later 2014 and later Maintenance Plans No** Yes Database Mail Yes Yes Linked Servers Yes Yes MSDTC No Yes Service Broker Yes (except Endpoints) Yes Performance Data 
Collector No Yes WCF Data Services No Yes FILESTREAM No Yes Policy Based Management No Yes SQL Server Audit Yes Yes BULK INSERT No Yes OPENROWSET Yes Yes Data Quality Services No Yes Buffer Pool Extensions No Yes Stretch Database No Yes Resource Governor No Yes Polybase No Yes Machine Learning & R Services No Yes File Tables No Yes
* Amazon RDS SQL Server DB instances can be used as data sources for SSRS
** Amazon RDS provides a separate set of features to facilitate backup and recovery of databases
*** We encourage our customers to use the Amazon Simple Email Service (Amazon SES) to send outbound emails originating from AWS resources and ensure a high degree of deliverability
For a detailed list of features supported by the editions of SQL Server, see High Availability in the Microsoft Documentation.
Conclusion
AWS provides two deployment platforms for your SQL Server databases: Amazon RDS and Amazon EC2. Each platform provides unique benefits that might suit your specific use case, and you have the flexibility to use one or both depending on your needs. Understanding how to manage performance, high availability, security, and monitoring in these environments, as outlined in this whitepaper, is key to choosing the best approach for your use case.
Contributors
Contributors to this document include:
• Jugal Shah, Solutions Architect, Amazon Web Services
• Richard Waymire, Outbound Principal Architect, Amazon Web Services
• Russell Day, Solutions Architect, Amazon Web Services
• Darryl Osborne, Solutions Architect, Amazon Web Services
• Vlad Vlasceanu, Solutions Architect, Amazon Web Services
Further Reading
For additional information, see:
• Microsoft Products on AWS
• Active Directory Reference Architecture: Implementing Active Directory Domain Services on AWS
• Remote Desktop Gateway on AWS
• Securing the Microsoft Platform on AWS
• Implementing Microsoft Windows Server Failover Clustering and SQL Server Always On Availability Groups in the AWS Cloud
• AWS Directory Service
• SQL Server Database Restore to Amazon EC2 Linux
Document Revisions
Date            Description
November 2019   Updated with information on new features and changes: release of SQL Server 2016 and 2017 in RDS, RDS Backup, and SQL Server on EC2 Linux; new instance classes; updated screen captures, architecture diagrams, Optimize CPU, Hybrid Scenarios, and other minor corrections and content updates
June 2016       Updated with information on new features and changes: release of Amazon RDS SQL Server Windows Authentication; availability of SQL Server 2014 in Amazon RDS; new RDS Reserved DB Instance pricing model; availability of the AWS Database Migration Service; other minor corrections and content updates
May 2015        First publication
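To make the backup-to-Amazon-S3 approach described earlier in this paper concrete, the following is a minimal sketch that uploads a SQL Server native backup file to an S3 bucket using the AWS SDK for Python (boto3). The bucket name, object key, and local file path are placeholders, and server-side encryption with SSE-S3 is assumed; adapt these to your own environment and backup tooling.

```python
import boto3
from botocore.config import Config

# Placeholder values -- replace with your own bucket, key, and backup path.
BUCKET = "example-sqlserver-backups"
BACKUP_FILE = r"D:\Backups\SalesDB_20200301.bak"
KEY = "sqlserver/SalesDB/SalesDB_20200301.bak"

# Retries help when uploading multi-GB .bak files over long-haul links.
s3 = boto3.client("s3", config=Config(retries={"max_attempts": 10}))

def upload_backup(bucket: str, file_path: str, key: str) -> None:
    """Upload a SQL Server .bak file to S3 with server-side encryption."""
    # upload_file performs a managed multipart upload for large files.
    s3.upload_file(
        Filename=file_path,
        Bucket=bucket,
        Key=key,
        ExtraArgs={"ServerSideEncryption": "AES256"},
    )
    print(f"Uploaded {file_path} to s3://{bucket}/{key}")

if __name__ == "__main__":
    upload_backup(BUCKET, BACKUP_FILE, KEY)
```

Once the backup object is in S3, it can feed the third-party backup tooling or restore workflows discussed above; this sketch only covers the upload step.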
General
Maximizing_Value_with_AWS
ArchivedMaximizing Value with AWS Achieve Total Cost of Operation Benefits Using Cloud Computing February 2017 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Create a Culture of Cost Management 2 Driving Cost Optimization 2 Total Cost of Operation 4 Start with an Understanding of Current Costs 4 Total Cost of Migration 5 Select the Right Plan for Specific Workloads 6 Employ Best Practices 7 Determine TopLine Business Metrics 8 Stay on Top of Instance Utilization 8 Distribute Daily Spending Updates 8 Every Engineer Can Be a Cost Engineer 9 Build Automation into Services 9 Implement a Reservation Process 10 Conclusion 10 Contributors 10 Archived Abstract Amazon Web Services (AWS) provides rapid access to flexible and low cost IT resources With cloud computing public sector organizations no longer need to make large upfront investments in hardware or spend time and money on managing infrastructure The goal of this whitepaper is to help you gain insight into some of the financial considerations of operating a cloud IT environment and learn how to maximize the overall value of your decision to adopt AWS ArchivedAmazon Web Services – Maximizing Value with AWS Page 1 Introduction A core reason organizations adopt a cloud IT infrastructure is to save money The traditional approach of analyzing Total Cost of Ownership no longer applies when you move to the cloud Cloud services provide the opportunity for you to use only what you need and pay only for what you use We refer to this new paradigm as the Total Cost of Operation You can use Total Cost of Operation (TCO) analysis methodologies to compare the costs of owning a traditional data center with the costs of operating your environment using AWS Cloud services Eliminate Upfront Sunk Costs Organizations considering a transition to the cloud are often driven by their need to become more agile and innovative The traditional capital expenditure ( CapEx ) funding model makes it difficult to quickly test new ideas The AWS Cloud model gives you the agility to quickly spin up new instances on AWS and the ability to try out new services without investing in large upfront sunk costs (costs that have already been incurred and can’t be recovered) If you are using the cloud you can return CapEx to the general fund and invest in activities that better serve your constituents AWS helps lower customer costs through its “pay only for what you use” pricing model To get started it is critical to understand how to measure value improve the economics of a migration project 
manage migration costs and expectations through largescale IT transformations and optimize the cost of operation Launch an Amazon EC2 Instanc e for Free The AWS Free Tier lets you gain free hands on experience with AWS products and services AWS Free Tier includes 750 hours of Linux and Windows t2micro instances each month for one year To stay within the Free Tier use only EC2 Micro instance s View AWS Free Tier Details » ArchivedAmazon Web Services – Maximizing Value with AWS Page 2 Create a Culture of Cost Management All teams can help manage costs and cost optimization should be everyone’s responsibility There are many variables that affect cost with different levers that can be pulled to drive operational excellence By using resources like the AWS Trusted Advisor dashboard and the AWS Billing Cost Explorer tool you can get realtime feedback on costs and usage that puts you on the road to operational excellence  Put data in the hands of everyone – This reduces the feedback loop between the information/data and the action that is required to correct usage and sizing issues  Enact policies and evangelize – Define and implement best practices to drive operational excellence  Spend time training – Educate staff on the items that affect cost and the steps they can take to eliminate waste  Create incentives for good behavior – Have friendly competitions across teams to encoura ge cost efficiencies throughout the organization To achieve true success cost optimization must be come a cultural norm in your organization Get everyone involved Encourage everyone to track their cost optimization daily so they can establish a habit of efficiency and see the daily impact over time of their cost savings Although everyone shares the ownership of cost optimization someone should be tasked with cost optimization as a primary responsibility Typically this is someone from either t he finance or IT department who is responsible for ensuring that cost controls are monitored so that business goals can be met The “cost optimization engineer” makes sure that the organization is positioned to derive optimal value out of the decision to adopt AWS Driving Cost Optimization By moving to the consumptionbased model of the cloud you can increase innovation with in the organization However one of the biggest challenges of the consumptionbased model is the lack of predictability ArchivedAmazon Web Services – Maximizing Value with AWS Page 3 You need to balance agility and innovation against cost As multiple teams spin up instances to test new ideas it is important to control and optimize AWS spending as cloud usage increases Don’t target cost savings as the end goal Instead optimize spending by focus ing on business growth opportunities that can result from innovative ideas The following table contrasts the traditional funding model against the cloud funding model Funding Model Characteristics Traditional Data Center  A few big purchase decisions are made b y a few people every few years  Typically o verprovision ed as a result of planning up front for spikes in usage Cloud  Decentrali zed spending power  Small decisions made by a lot of people  Resources are spun up and down as new services are designed and then decommissioned  Cost ramifications felt by the organization as a whole are closely monitored and tracked Give stakeholders access to your spending fundamentals The data is there Share it By using dashboards you can quickly highlight spending habits across your teams  Actively manage workloads Turn 
services on and off as needed rather than runn ing them 24/ 7  Eliminate surprises Provide visibility into costs by making dashboard review a daily habit  Make cost optimization a joint effort Have “spenders” (those spinning up resources) work closely with “watchers” (finance and leadership who can track to business goals)  Allocate charges (or show departmental usage) to organizations actually using services This provides insight into each group’s impact on business goals  Savings Know who uses services and how they use services To select the best rate evaluate pricing options that best meet the workload  Tie spending to business metrics Determine what gets measured track usage and identify areas for improvement ArchivedAmazon Web Services – Maximizing Value with AWS Page 4  Use innovative approaches to optimize spend Consider policies such as “default off” for test and dev environments as opposed to 24/7 or even “on during business hours” Total Cost of Operation A pay asyougo model reduces investments in large capital expenditures In addition you can reduce the operating expense (OpEx) costs involved with the management and maintenance of data This frees up budget allowing you to quickly act on innovative initiatives that can’t be easily pursued when managing CapEx A clear understanding of your current costs is an important first step of a cloud migration journey This provides a baseline for defining the migration model that delivers optimal cost efficiency Our online total cost of ownership calculators allow you to estimate cost savings when using AWS These calculators provide a detailed set of reports that you can use in executive presentations The calculators also give you the option to modify assumptions so you can best meet your business needs Ready to find out how much you could be saving in the AWS Cloud? Take a look at the AWS Total Cost of Ownership Calculator Start with an Understanding of Current Costs Evaluate the following when calculating your onpremises computing costs:  Labor How much do you spend on maintaining your environment?  Network How much bandwidth do you need? What is your bandwidth peak to average ratio? What are you assuming for network gear? What if you need to scale beyond a single rack?  Capacity How do you plan for capacity? What is the cost of over provisioning for peak capacity? What if you need less capacity? Anticipating next year? ArchivedAmazon Web Services – Maximizing Value with AWS Page 5  Availability/Power Do you have a disaster recovery (DR) facility? What was your power utility bill for your data centers last year? Have you budgeted for both average and peak power requirements? Do you have separate costs for cooling/ HVAC? Are you accounting for 2N (parallel redundancy) power? If not what happens when you have a power issue to your rack?  Servers What is your average server utilization? How much do you overprovision for peak load? What is the cost of overprovisioning?  Space Will you run out of data center space? When is your lease up? 
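As a simple illustration of building the cost baseline described above, the following sketch totals hypothetical annual on-premises cost categories and compares them with an estimated AWS run rate. All figures are placeholders, and the category names map loosely to the questions listed above; your own labor, network, power, server, and facility numbers (and the AWS TCO calculator) should drive a real comparison.

```python
# Hypothetical annual on-premises cost baseline (all figures are placeholders).
on_premises_annual = {
    "labor": 180_000,         # staff time spent maintaining the environment
    "network": 36_000,        # bandwidth and network gear amortization
    "power_cooling": 24_000,  # utility bill plus HVAC
    "servers": 95_000,        # hardware amortization incl. peak overprovisioning
    "space": 30_000,          # data center lease or colocation
    "dr_facility": 40_000,    # disaster recovery site
}

estimated_aws_monthly = 18_500  # placeholder estimate, e.g. from the TCO calculator

on_prem_total = sum(on_premises_annual.values())
aws_total = estimated_aws_monthly * 12

print(f"On-premises annual baseline: ${on_prem_total:,.0f}")
print(f"Estimated AWS annual spend:  ${aws_total:,.0f}")
print(f"Estimated annual difference: ${on_prem_total - aws_total:,.0f}")
```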
Total Cost of Migration To achieve the maximum benefits of the AWS Cloud it is important to understand and plan for the financial costs associated with migrating workloads to AWS While there isn’t yet a simple calculation for the total cost of migration (TCM) it is possible to estimate the cost and duration of the migration phase based on the experiences of others Some of the inputs for TCM include the following :  IT staff will need to acquire new skills  New business processes will need to be defined  Existing business processes will need to be modified  Cost of discovery and migration tooling needs to be calculated  Duplicate environments will need to run until one is decommissioned  Penalties could be incurred for breaking data center colocation or licensing agreements AWS uses the term migration bubble to describe the time and cost of moving applications and infrastructure from onpremises data centers to the AWS Cloud Altho ugh the cloud can provide significant savings certain costs may increase as you move into the migration bubble It is important to understand the costs associated with migration so that you can work to shrink the size of the migration bubble and accomplish the migration in a quick and sustainable manner ArchivedAmazon Web Services – Maximizing Value with AWS Page 6 Figure 1: Migration bubble To realize cost savings it is important to plan your migration to coincide with hardware retirement license and maintenance expiration and other opportunities to be frugal with your resources In addition the savings and cost avoidance associated with a full allin migration to AWS can help you fund the migration bubble You can even shorten the duration of the migration by applying more resources when appropriate For more information read the AWS Cloud Adoption Framework whitepaper Select the Right Plan for Specific Workloads Depending on your needs you can choose among three different ways to pay for Amazon Elastic Compute Cloud (EC2) instances: OnDemand Reserved Instances and Spot Instances You can also pay for Dedicated Hosts that provide you with EC2 instance capacity on physical servers dedicated for your use ArchivedAmazon Web Services – Maximizing Value with AWS Page 7 Purchasing Options Description Recommended for OnDemand Instances Pay for compute capacity by the hour with no long term commitments or upfront payment s  Increase or decrease compute capacity depending on the demands of your application  Only pay the specified hourly rate for the instances you use  Users that want the low cost and flexibility of Amazon EC2 without any upfront payment or long term commitment  Applications with short term spiky or unpredictable workloads that cannot be interrupted  Applications being developed on AWS the first time Reserved Instances Can provide significant savings compared to using On Demand instances  Sunk cost but the longer term commitment delivers a lower hourly rate  Applications that have been in use for years and that you plan to continue to use  Applications with steady state or predictable usage  Applications that require reserved capacity  Users who want to make upfront payments to further reduce their total computing costs Spot Instances Provide the ability to purchase compute capacity with no upfront commitment and lower hourly rates  Allow you to specify the maximum hourly price that yo u are willing to pay to run a particular instance type  Applications that have flexible start and end times  Applications that are only feasible at very low 
compute prices  Users with urgent computing needs for large amounts of additional capacity Dedi cated Hosts Physical EC2 server s with instance capacity fully dedicated for your use  Help reduce costs by using existing server bound software licenses  Can provide up to a 70% discount compared to the On Demand price  Users who want to save money by using their own per socket or per core software in Amazon EC2  Users who deploy instances using configurations that help address corporate compliance and regulatory requirements Learn more about Amazon EC2 Instance Purchasing Options Employ Best Practices As your organization transitions to the cloud and you pilot new cloud initiatives be careful to avoid common pitfalls The best practices presented below can help you ArchivedAmazon Web Services – Maximizing Value with AWS Page 8 Determine TopLine Business Metrics To fully benefit from the cloud it is important to map business goals to specific metrics so that you can evaluate where changes need to be made Define the metrics that provide the most us eful data to track your service such as user subscriber customer access API calls and page views Dashboards are an excellent source of information and provide instant feedback on how services are delivering against specific goal s Stay on Top of Instance Utilization Oversight is an excellent practice to make sure that you are not overspending Monitoring tools provide visibility control and optimization Post DevOps use dashboards to monitor how services are used as well as your current spending profile If your monthly bill goes up make sure it is for the right reason (business growth) and not the wrong reason (waste)  Choose a cadence and regularly measure results for services that have moved to the cloud  Use tools that track performance and usage to reduce cost overruns It only takes five minutes to resize – up or down – to ensure that the service is providing the desired performance level  Keep track of running instances Optimize the size of servers and adjust as needed rather than overprovisioning from the start  If an instance is underutilized determine if you still need the instance if it can be shut down or if it needs to be resized  As AWS introduces new technology find and then upgrade your legacy instances so that you can lower costs This can provide substantial savings over time Distribute Daily Spending Updates Make usage reviews a daily habit for all team members Provide weekly reporting to elevate visibility and drive accountability across large complex organization s Have teams review bills associated with their projects to identify ways to optimize for costs during dev/test as well as production And to create an ArchivedAmazon Web Services – Maximizing Value with AWS Page 9 atmosphere of friendly competition create a leaderboard that highlights teams with the best cost efficiencies Every Engineer Can Be a Cost Engineer Engineers should design code so that instances only spin up when needed and spin down when not in use There is no need to have AWS services running 24/ 7 if they are only used during standard work hours Turn off underutilized instances that you discover using dashboards and reports  Innovate Spin up instances to test new ideas If the ideas work keep the instance for further refinement If not spin it down  Build sizing into architecture Use tagging to help with cost allocation Tagging allows you to track the users of particular instances optimize usage and bill back or show charges by department or user  
Schedule dev/test Eliminate waste of resources not in use Eliminate Waste Default = Off is a good best practice Build Automation into Services Automation can accelerate the migration process  Automate process es so that they turn off when not in use to eliminate waste  Automate alerts to show when thresholds have been exceeded  Configuration management With automation every machine defined in code spins up or down as needed to drive performance and cost optimization  Set alerts on old snapshots oversized resources and unattached volumes and then automate and rebalance for optimal sizing  Eliminate troubleshooting If an instance goes down spin up a new one Stop wasting time on unproductive activities ArchivedAmazon Web Services – Maximizing Value with AWS Page 10 Implement a Reservation Process Appoint someone to own the reservation process (typically a finance person) Buy on a regular schedule but continually track usage and modify reservations as need ed This can result in big savings over time See How to Purchase Reserved Instances for more information Conclusion Moving business applications to the AWS Cloud helps organizations simplify infrastructure management deploy new services faster provide greater availability and lower costs Having a clear understanding of your existing infrastructure and migration costs and then projecting your savings will help you calculate payback time project ROI and maximize the value your organization gains from migrating to AWS AWS delivers a mature set of services specifically designed for the unique security compliance privacy and governance requirements of large organizations With a technology platform that is both broad and deep professional services and support organizations robust training programs and an ecosystem that is tens ofthousands of partners strong AWS can help you move faster and do more Contributors The following individuals and organizations contributed to this document:  Blake Chism Practice Manager AWS Public Sector SalesVar  Carina Veksler Public Sector Solutions AWS Public Sector SalesVar
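To make the "default off" and tagging practices above concrete, the following is a minimal sketch that could run on a schedule (for example, from AWS Lambda or cron) to stop any running EC2 instances carrying a hypothetical Environment=dev tag outside business hours. The tag key and value, the region, and the scheduling mechanism are assumptions; adapt them to your own tagging scheme and working hours.

```python
import boto3

# Assumed tagging convention: dev/test instances carry Environment=dev.
TAG_KEY = "Environment"
TAG_VALUE = "dev"

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

def stop_tagged_dev_instances() -> list:
    """Stop running instances tagged Environment=dev and return their IDs."""
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[
            {"Name": f"tag:{TAG_KEY}", "Values": [TAG_VALUE]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    stopped = stop_tagged_dev_instances()
    print(f"Stopped {len(stopped)} instance(s): {stopped}")
```

The same tag filter can also be reused for cost-allocation reporting, which keeps the "spenders" and "watchers" looking at the same resource groups.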
General
Demystifying_the_Number_of_vCPUs_for_Optimal_Workload_Performance
ArchivedDemystifying the Number of vCPUs for Optimal Workload Performance September 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 2 © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its a ffiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 3 Contents Abstract 4 Introduction 5 Methodology 6 Discussion by Example 8 Best Practices 10 Conclusion 13 Contributors 13 ArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 4 Abstract Following industry standard rules of thumb when migrating physical servers or desktops into a virtual environment doesn’t ensure optimal CPU performance after consolidation especially for CPU intensive workloads This paper describes a proven scientific methodology for benc hmarking CPU performance for different CPU generations with detailed examples to achieve optimal performance Learn how to choose Amazon EC2 instance types based on CPU resources and apply best practices for CPU selection with Amazon EC2 ArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 5 Introduction When you migrate physical servers or desktops to a virtual environment using a hypervisor (such as ESX Hyper V KVM Xen etc) you’re typically advised to follow industry standard rules of thumb for high workload consolidation For example you might b e advised to use 1 CPU core for every 2 virtual machines (VMs) However this ratio might not provide a realistic estimate for CPUs with high clock speeds such as thos e running at 16 GHz to 33 GHz You should use a higher consolidation ratio with faster CPUs New generation CPUs provide better performance even when running at the same clock speed or with the same number of CPU cores compared with prior generation CPUs The price performance ratio w ith new CPUs is better as well So how do we benchmark the CPU performance for different CPU generations to get the optimal performance after VM consolidation? 
As part of the answer, and to ensure predictable results, we should have a scientific approach to determine the most appropriate CPU sizing. Remember that undersizing a CPU resource can cause a poor user experience, and oversizing a CPU resource can cause wasted resources and higher Operating Expenses (OPEX), yielding a higher Total Cost of Ownership (TCO). This paper examines a proven methodology for choosing the right Amazon Elastic Compute Cloud (EC2) instance types based on CPU resources and includes detailed examples. In addition, some best practices for CPU selection with Amazon EC2 are discussed.
Methodology
Step 1: Normalize the CPU performance index (Pi) for different generation CPUs using the Moore's Law equation¹:
Pi(t) = 2^(0.055556 × t)    (1)
Where Pi(t) is the CPU performance index at month t, measured from the reference month t = 0. In other words, if we are trying to migrate a system with a CPU A first sold in January 2015 to a CPU B first sold in June 2016, then the performance index for CPU A is Pi(0) = 1 and for CPU B is Pi(18) = 2.
¹ In the mid-1960s, Gordon Moore, the co-founder of Intel, made the observation that computer power, measured by the number of transistors that could be fit onto a chip, doubled every 18 months. This law has performed extremely well over the preceding years.
Step 2: Determine the normalized CPU utilization, in terms of clock speed (GHz), of the current workload by inserting Equation (1) into Equation (2). The normalized CPU utilization (CPU Utilization (Norm)) equation is shown below:
CPU Utilization (Norm) = #CPU × #Core × CPU Freq × CPU Utilization × Pi(t)    (2)
Where
▪ #CPU = Current number of CPU sockets per physical server. If it is a VM, this should be 1.
▪ #Core = Current number of CPU cores per physical server. If it is a VM, this should be the number of currently deployed vCPUs (assuming no oversubscription). If hyper-threading is enabled, the number of CPU cores or vCPUs should be doubled.
▪ CPU Freq = Current CPU clock speed in GHz.
▪ CPU Utilization = Current CPU utilization as a percentage.
▪ Pi(t) = Performance index of the current CPU at month t.
Step 3: Determine the estimated CPU utilization by reserving a sufficient buffer for a workload spike. This is calculated by inserting the required headroom, as a percentage, into Equation (3). It gives a conservative estimate of the CPU sizing to avoid suboptimal performance. The estimated CPU utilization (CPU Utilization (Est)) equation is shown below:
CPU Utilization (Est) = CPU Utilization (Norm) × (1 + Headroom)    (3)
Where
▪ Headroom = Percentage of CPU resource reserved as a buffer for a workload spike.
Step 4: Refer to Amazon EC2 Instance Types to find the most appropriate CPU type for particular instance classes by using Equation (4):
CPU Utilization (Est) ≤ CPU Capacity (new) = (#vCPU (new) / 2) × CPU Freq (new) × Pi(new)(t)    (4)
Where
▪ #vCPU (new) = Newly selected number of vCPUs for the Amazon EC2 instance. It is divided by 2 because hyper-threading is used on the Amazon EC2 instance.
▪ CPU Freq (new) = Newly designated CPU clock speed (GHz) for the Amazon EC2 instance.
▪ Pi(new)(t) = Performance index of the new vCPU at month t.
A short code sketch implementing Equations (1) through (4) appears at the end of this paper.
Discussion by Example
Step 1: Table 1 shows the performance index, calculated by using Equation (1), for various CPU models. The oldest CPU model, Xeon E5640, is used as the benchmark. Both the Xeon E5640 and E5647 models are part of the current environment.
Table 1: CPU performance index for various CPU models
Step 2: Table 2 shows the total CPU utilization in GHz, after applying Equation (2), for all the physical servers' workloads that will be migrated to Amazon EC2.
Table 2: Normalized CPU utilization in GHz
Step 3: Table 3 shows the estimated CPU utilization in GHz after we include the buffer using Equation (3).
Table 3: Estimated CPU utilization in GHz
Step 4: After reviewing Amazon EC2 Instance Types, we decided to deploy M4 instances. Table 4 shows the performance index that is calculated using Equation (1) by taking the CPU model Xeon E5-2686 v4 as the reference (t = 0).
Table 4: Performance index for M4 class instances
CPU Model          CPU Frequency (GHz)   # Cores   First Sold   Performance Index   Performance Index Per Core
Xeon E5-2686 v4    2.30                  18        Jun-16       17.96               1.00
Table 5 illustrates the CPU capacity of M4 instances after normalization.
Table 5: M4 class instances' CPU capacity after normalization
Model         vCPU*   CPU Freq (GHz)   Mem (GiB)   SSD Storage (GB)   Perf Index Per Core   CPU Capacity new (GHz)
m4.large      2/2     2.3              8           EBS-only           1.00                  2.30
m4.xlarge     4/2     2.3              16          EBS-only           1.00                  4.60
m4.2xlarge    8/2     2.3              32          EBS-only           1.00                  9.20
m4.4xlarge    16/2    2.3              64          EBS-only           1.00                  18.40
m4.10xlarge   40/2    2.3              160         EBS-only           1.00                  46.00
m4.16xlarge   64/2    2.3              256         EBS-only           1.00                  73.60
* The number of vCPUs is divided by 2 because each vCPU in an Amazon EC2 instance is a hyper-thread of an Intel Xeon CPU core.
By comparing the results that you obtain from steps 3 and 4, Table 6 demonstrates the CPU selection mapping against each source machine that is being migrated to Amazon EC2.
Table 6: Recommended instance type
Host Name   CPU Model    Recommended AWS Instance Type
Server01    Xeon E5640   m4.large
Server02    Xeon E5640   m4.xlarge
Server03    Xeon E5647   m4.xlarge
Server04    Xeon E5647   m4.2xlarge
This example didn't take into account memory, storage, or I/O factors. For actual scenarios, we should take a more holistic view to optimally balance performance and TCO savings. Amazon EC2 has many different classes of instance types, such as Compute Optimized, Memory Optimized, Storage Optimized, I/O Optimized, and GPU Optimized (see https://aws.amazon.com/ec2/instance-types for more detailed information). These classes of instance types are optimized to deliver the best performance and TCO savings depending on your application's behavior and usage characteristics.
Best Practices
1. Assess the requirements of your applications and select the appropriate Amazon EC2 instance family as a starting point for application performance testing. Amazon EC2 provides you with a variety of instance types, each with one or more size options, organized into distinct instance families that are optimized for different types of applications. You should start evaluating the performance of your applications by:
a) Identifying how your application compares to different instance families (for example, is the application compute bound, memory bound, or I/O bound?)
b) Sizing your w orkload to identify the appropriate instance size There is no substitute for measuring the performance of your entire application because application performance can be impacted by the underlying infrastructure or by software and architectural limitation s We recommend application level testing including the use of application profiling and load testing tools and services 2 Normalize generations of CPUs by using Moore’s Law Processing performance is usually bound to the number of CPU cores clock speed and type of CPU hardware instances that an application runs on A new CPU model will usually outperform the models it precedes even with the same number of cores and clock speed Therefore you should normalize different generations of CPUs by using Moore’s Law as shown earlier in Methodology to obtain more realistic comparison results 3 Have a data collection period that is long enough to capture the workload utilization pattern Workload changes in accordance with time shifting For analysis y our data collection period should be long enough to show you the peak and trough utilization across your business cycle (for example monthly or quarterly) You should include peak utiliza tion instead of average utilization for the purposes of CPU sizing This will ArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 11 ensure that you provide a consistent user experience when workloads are under peak utilization 4 Deploy discovery tools For large scale environments (more than a few hundred mach ines) deploy automated discovery tools such as the AWS Application Discovery Service to perform data collection It’s critical to ensure that the discovery tools includ e basic inventory capabilities to collect the required CPU inventory and utilization (maximum average and minimum) that are specified in Methodology Determine whether the discovery tool requires specific user permissions or secure/compliant port s to be open ed Also investigate whether the discovery tool requires the source machines to be rebooted to install agents In many critical production environments server rebooting is not permissible 5 Allocate enough buffer for spikes When you perform the CPU sizing and capacity planning always include a reasonable buffer of 10 –15% of total required capacity This buffer is crucial to avoid any overlap of scheduled and unscheduled processing that may cause unexpected spikes 6 Monitor continuously Carry out the performance benchmarks before and after migration to investigate user experience acceptance levels Deploy a cloud monitoring tool such as Amazon CloudWatch to monitor CPU performance The cl oud monitoring tool should use monitoring to send alerts if the CPU utilization exceeds the predefined threshold level The tool also should provide reporting capability that generate s relevant reports for short and long term capacity planning purpose s 7 Determine the right VM sizing A VM is considered undersized or stressed when the amount of CPU demand peaks above 70% for more than 1% of any 1 hour A VM is considered oversized when the amount of CPU demand is below 30% for more than 1% of the entire ra nge of 30 days Figure 1 and Figure 2 give a good illustration of determining stress analysis for undersized and oversized conditions ArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 12 Figure 1: CPU Undersized condition Figure 2: CPU Oversized condition 8 Deploy single threaded appli cations on 
uniprocessor virtual machines, instead of on SMP virtual machines, for the best performance and resource use. Single-threaded applications can take advantage of only a single CPU. Deploying such applications on dual-processor virtual machines does not speed up the application. Instead, it causes the second virtual CPU to unnecessarily hold physical resources that other VMs could otherwise use. The uniprocessor operating system versions are for single-core machines. If used on a multi-core machine, a uniprocessor operating system will recognize and use only one of the cores. The SMP versions, while required to fully utilize multi-core machines, can also be used on single-core machines. However, due to their extra synchronization code, SMP operating systems used on single-core machines run slightly slower than a uniprocessor operating system on the same machine.
9. Consider using Amazon EC2 Dedicated Instances and Dedicated Hosts if you have compliance requirements. Dedicated Instances and Dedicated Hosts don't share hardware with other AWS accounts. To learn more about the differences between them, see aws.amazon.com/ec2/dedicated-hosts.
Conclusion
The methodology and best practices discussed in this paper give a pragmatic result for optimal performance with the selected CPU resources. This methodology has been applied to many enterprises' cloud transformation projects and has delivered more predictable performance with significant TCO savings. Additionally, this methodology can be adopted for capacity planning and helps enterprises establish strong business justifications for platform expansion. Actual performance sizing in a cloud environment should include memory, storage, I/O, and network traffic performance metrics to give a holistic performance sizing overview.
Contributors
The following individuals and organizations contributed to this document:
Tan Chin Khoon, Enterprise Migration Architect – APAC
For a more comprehensive and holistic example and discussion of cloud environment consolidation, please contact Tan Chin Khoon.
Document Revisions
Date            Description
September 2018  Updated formulas and instructions
August 2016     First publication
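The following is a minimal Python sketch, referenced in the Methodology section, that implements Equations (1) through (4). The source server values and the candidate M4 figures are placeholders loosely drawn from the example above; swap in your own inventory and utilization data before using it for sizing decisions.

```python
# Minimal sketch of Equations (1)-(4); all input values are placeholders.

def perf_index(months_since_reference: float) -> float:
    """Equation (1): performance index, doubling every 18 months."""
    return 2 ** (0.055556 * months_since_reference)

def normalized_utilization(cpus, cores, freq_ghz, utilization, pi) -> float:
    """Equation (2): normalized CPU utilization in GHz."""
    return cpus * cores * freq_ghz * utilization * pi

def estimated_utilization(normalized_ghz, headroom) -> float:
    """Equation (3): add headroom for workload spikes."""
    return normalized_ghz * (1 + headroom)

def instance_capacity(vcpus, freq_ghz, pi) -> float:
    """Equation (4): capacity of a candidate instance (vCPUs are hyper-threads)."""
    return (vcpus / 2) * freq_ghz * pi

# Example source server (placeholder figures): 2 sockets x 4 cores at 2.66 GHz,
# 60% peak utilization, CPU first sold at the reference month (Pi = 1.0).
norm = normalized_utilization(cpus=2, cores=4, freq_ghz=2.66,
                              utilization=0.60, pi=perf_index(0))
est = estimated_utilization(norm, headroom=0.15)

# Candidate M4 sizes: 2.3 GHz with a per-core performance index of 1.00 (Table 5).
candidates = {"m4.large": 2, "m4.xlarge": 4, "m4.2xlarge": 8, "m4.4xlarge": 16}
for name, vcpus in candidates.items():
    cap = instance_capacity(vcpus, freq_ghz=2.3, pi=1.00)
    verdict = "fits" if est <= cap else "too small"
    print(f"{name}: capacity {cap:.2f} GHz, required {est:.2f} GHz -> {verdict}")
```

As written, the sketch only compares CPU capacity; memory, storage, and I/O should be folded into the comparison for a real migration decision, as noted in the example discussion.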
General
AWS_Overview_of_Security_Processes
ArchivedAmazon Web Services: Overview of Security Processes March 2020 This paper has been archived For the latest technical content on Security and Compliance see https://awsamazoncom/ architecture/securityidentity compliance/ArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Shared Security Responsibility Model 1 AWS Security Responsibilities 2 Customer Security Responsibilities 2 AWS Global Infrastructure Security 3 AWS Compliance Program 3 Physical and Environmental Security 4 Business Continuity Management 6 Network Security 7 AWS Access 11 Secure Design Principles 12 Change Management 12 AWS Account Security Features 14 Individual User Accounts 19 Secure HTTPS Access Points 19 Security Logs 20 AWS Trusted Advisor Security Checks 20 AWS Config Security Checks 21 AWS Service Specific Security 21 Compute Services 21 Networking Services 28 Storage Services 43 Database Services 55 Application Services 66 Analytics Services 73 Deployment and Management Services 77 ArchivedMobile Services 82 Applications 85 Document Revisions 88 ArchivedAbstract This document is intended to answer questions such as How does AWS help me ensure that my data is secure? 
Specifically this paper describes AWS physical and operational security processes for the network and server infrastructure under the management of AWS ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 1 Introduction Amazon Web Services (AWS) delivers a scalable cloud computing pl atform with high availability and dependability providing the tools that enable customers to run a wide range of applications Helping to protect the confidentiality integrity and availability of our customers’ systems and data is of the utmost importan ce to AWS as is maintaining customer trust and confidence Shared Security Responsibility Model Before covering the details of how AWS secures its resources it is important to understand how security in the cloud is slightly different than security in yo ur on premises data centers When you move computer systems and data to the cloud security responsibilities become shared between you and your cloud service provider In this case AWS is responsible for securing the underlying infrastructure that support s the cloud and you’re responsible for anything you put on the cloud or connect to the cloud This shared security responsibility model can reduce your operational burden in many ways and in some cases may even improve your default security posture witho ut additional action on your part Figure 1: AWS shared security responsibility model The amount of security configuration work you have to do varies depending on which services you select and how sensitive your data is However there are certain security ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 2 features —such as individual user accounts and credentials SSL/TLS for data transmissions and user activity logging —that you should configure no matter which AWS service you use For more information about these security featur es see the AWS Account Security Features sectio n AWS Security Responsibilities Amazon Web Services is responsible for protecting the global infrastructure that runs all of the services offered in the AWS Cloud Th is infrastructure comprise s the hardware software networking and facilities that run AWS services Protecting this infrastructure is the number one priority of AWS Although you can’t visit our data centers or offices to see this protection firsthand we provide several reports from third party auditors who have verified our compliance with a variety of computer security standards and regulations For more information visit AWS Compliance Note that in addition to protecting this global infrastructure AWS is responsible for the security configuration of its products that are considered managed services Examples of these types of services include Amazon DynamoDB Amazon RDS Amazon Redshift Amazon EMR Amazon WorkSpaces and several other services These services provide the scalability and flexibility of cloud based resources with the additional benefit of being managed For these services AWS handle s basic security tasks like guest operat ing system (OS) and database patching firewall configuration and disaster recovery For most of these managed services all you have to do is configure logical access controls for the resources and protect your account credentials A few of them may requ ire additional tasks such as setting up database user accounts but overall the security configuration work is performed by the service Customer Security Responsibilities With the AWS cloud you can provision virtual servers storage databases 
and desk tops in minutes instead of weeks You can also use cloud based analytics and workflow tools to process your data as you need it and then store it in your own data centers or in the cloud The AWS services that you use determine how much configuration work you have to perform as part of your security responsibilities AWS products that fall into the well understood category of Infrastructure asaService (IaaS) —such as Amazon EC2 Amazon VPC and Amazon S3 —are completely under your control and require you t o perform all of the necessary security configuration and management tasks For example for EC2 instances you’re responsible for management of the guest OS (including updates and security patches) any application ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 3 software or utilities you install on the instances and the configuration of the AWS provided firewall (called a security group) on each instance These are basically the same security tasks that you’re used to performing no matter where your servers are located AWS managed services like Amazon RDS or Amazon Redshift provide all of the resources you need to perform a specific task —but without the configuration work that can come with them With managed services you don’t have to worry about launching and maintaining instances patching the gues t OS or database or replicating databases —AWS handles that for you But as with all services you should protect your AWS Account credentials and set up individual user accounts with Amazon Identity and Access Management (IAM) so that each of your users h as their own credentials and you can implement segregation of duties We also recommend using multi factor authentication (MFA) with each account requiring the use of SSL/TLS to communicate with your AWS resources and setting up API/user activity logging with AWS CloudTrail For more information about additional measures you can take refer to the AWS Security Best Practices whitepaper and recommended reading on the AWS Security Learning webpage AWS Global Infrastructure Security AWS operates the global cloud infrastruct ure that you use to provision a variety of basic computing resources such as processing and storage The AWS global infrastructure includes the facilities network hardware and operational software (eg host OS virtualization software etc) that supp ort the provisioning and use of these resources The AWS global infrastructure is designed and managed according to security best practices as well as a variety of security compliance standards As an AWS customer you can be assured that you’re building w eb architectures on top of some of the most secure computing infrastructure in the world AWS Compliance Program AWS Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud As systems are built on top of AWS cloud infrastructure compliance responsibilities are shared By tying together governance focused audit friendly service features with applicable compliance or audit standards AWS Compliance enablers build on traditional programs; helping customers to establish and operate in an AWS security control environment The IT infrastructure ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 4 that AWS provides to its customers is d esigned and managed in alignment with security best practices and a variety of IT security standards including: • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70) • SOC 2 • 
SOC 3 • FISMA DIACAP and FedRAMP • DOD CSM Levels 1 5 • PCI DSS Level 1 • ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018 • ITAR • FIPS 140 2 • MTCS Level 3 • HITRUST In addition the flexibility and control that the AWS platform provides allows customers to deploy solutions that meet several industry specific standards including: • Criminal Justice Information Servi ces (CJIS) • Cloud Security Alliance (CSA) • Family Educational Rights and Privacy Act (FERPA) • Health Insurance Portability and Accountability Act (HIPAA) • Motion Picture Association of America (MPAA) AWS provides a wide range of information regarding its IT co ntrol environment to customers through white papers reports certifications accreditations and other third party attestations For m ore information see AWS Compliance Physical and Environmental Securit y AWS data centers are state of the art utilizing innovative architectural and engineering approaches Amazon has many years of experience in designing constructing and operating large scale data centers This experience has been applied to the AWS platform and infrastructure AWS data centers are housed in facilities that are not ArchivedAmazon Web Services Amazon Web Services: Overview of Secu rity Processes Page 5 branded as AWS facilities Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillan ce intrusion detection systems and other electronic means Authorized staff must pass two factor authentication a minimum of two times to access data center floors All visitors are required to present identification and are signed in and continually esc orted by authorized staff AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges When an employee no longer has a business need for these privileges his or her access is immediately revoked even if they continue to be an employee of Amazon or Amazon Web Services All physical access to data centers by AWS employees is logged and audited routinely Fire Detection and Suppression Automatic fire detection and suppression equ ipment has been installed to reduce risk The fire detection system utilizes smoke detection sensors in all data center environments mechanical and electrical infrastructure spaces chiller rooms and generator equipment rooms These areas are protected by either wet pipe double interlocked pre action or gaseous sprinkler systems Power The data center electrical power systems are designed to be fully redundant and maintainable without impact to operations 24 hours a day and seven days a week Uninterru ptible Power Supply (UPS) units provide back up power in the event of an electrical failure for critical and essential loads in the facility Data centers use generators to provide back up power for the entire facility Climate and Temperature Climate control is required to maintain a constant operating temperature for servers and other hardware which prevents overheating and reduces the possibility of service outages Data centers are conditioned to maintain atmospheric conditions at optimal levels Personnel and systems monitor and control temperature and humidity at appropriate levels ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 6 Management AWS monitors electrical mechanical and life support systems and equipment so that any issues are immediately identified Preventative maintenance is performed to maintain the 
continued operability of equipment Storage Device Decommissioning When a storage device has reached the end of its useful life AWS procedures include a decommissioning process that is designed to prevent customer data from be ing exposed to unauthorized individuals AWS uses the techniques detailed in NIST 800 88 (“Guidelines for Media Sanitization”) as part of the decommissioning process Business Continuity Management Amazon’s infrastructure has a high level of availability a nd provides customers the features to deploy a resilient IT architecture AWS has designed its systems to tolerate system or hardware failures with minimal customer impact Data center Business Continuity Management at AWS is under the direction of the Ama zon Infrastructure Group Availability Data centers are built in clusters in various global regions All data centers are online and serving customers; no data center is “cold” In case of failure automated processes move customer data traffic away from t he affected area Core applications are deployed in an N+1 configuration so that in the event of a data center failure there is sufficient capacity to enable traffic to be load balanced to the remaining sites AWS provides you with the flexibility to pl ace instances and store data within multiple geographic regions as well as across multiple availability zones within each region Each availability zone is designed as an independent failure zone This means that availability zones are physically separated within a typical metropolitan region and are located in lower risk flood plains (specific flood zone categorization varies by Region) In addition to discrete uninterruptable power supply (UPS) and onsite backup generation facilities they are each fed vi a different grids from independent utilities to further reduce single points of failure Availability zones are all redundantly connected to multiple tier 1 transit providers You should architect your AWS usage to take advantage of multiple regions and availability zones Distributing applications across multiple availability zones provides ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 7 the ability to remain resilient in the face of most failure modes including natural disasters or system failures Incident Response The Amazon Incident Management team employs industry standard diagnostic procedures to drive resolution during business impacting events Staff operators provide 24x7x365 coverage to detect incidents and to manage the impact and resolution Company Wide Executive Review Amazon’s Internal Au dit group has recently reviewed the AWS services resiliency plans which are also periodically reviewed by members of the Senior Executive management team and the Audit Committee of the Board of Directors Communication AWS has implemented various methods of internal communication at a global level to help employees understand their individual roles and responsibilities and to communicate significant events in a timely manner These methods include orientation and training programs for newly hired employees ; regular management meetings for updates on business performance and other matters; and electronics means such as video conferencing electronic mail messages and the posting of information via the Amazon intranet AWS has also implemented various method s of external communication to support its customer base and the community Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the 
customer experience A Service Health Dashboard i s available and maintained by the customer support team to alert customers to any issues that may be of broad impact The AWS Cloud Security Center is available to provide you with security and compliance details about AWS You can also subscribe to AWS Support offerings that include direct communication with the customer support team and proactive alerts to any customer impacting issues Network Security The AWS network has been architected to permit you to select the level of security and resiliency appropriate for your workload To enable you to build geographica lly dispersed fault tolerant web architectures with cloud resources AWS has implemented a world class network infrastructure that is carefully monitored and managed ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 8 Secure Network Architecture Network devices including firewall and other boundary devic es are in place to monitor and control communications at the external boundary of the network and at key internal boundaries within the network These boundary devices employ rule sets access control lists (ACL) and configurations to enforce the flow of information to specific information system services ACLs or traffic flow policies are established on each managed interface which manage and enforce the flow of traffic ACL policies are approved by Amazon Information Security These policies are auto matically pushed using AWS’s ACL Manage tool to help ensure these managed interfaces enforce the most up todate ACLs Secure Access Points AWS has strategically placed a limited number of access points to the cloud to allow for a more comprehensive monitoring of inbound and outbound communications and network traffic These customer access points are called API endpoints and they allow secure HTTP access (HTTPS) which allows you to establish a secure communication session with your st orage or compute instances within AWS To support customers with FIPS cryptographic requirements the SSL terminating load balancers in AWS GovCloud (US) are FIPS 140 2compliant In addition AWS has implemented network devices that are dedicated to manag ing interfacing communications with Internet service providers (ISPs) AWS employs a redundant connection to more than one communication service at each Internet facing edge of the AWS network These connections each have dedicated network devices Transmi ssion Protection You can connect to an AWS access point via HTTP or HTTPS using Secure Sockets Layer (SSL) a cryptographic protocol that is designed to protect against eavesdropping tampering and message forgery For customers who require additional lay ers of network security AWS offers the Amazon Virtual Private Cloud (VPC) which provides a private subnet within the AWS cloud and the ability to use an IPsec Virtual Private Network (VPN) device to provide an encrypted tunnel between the Amazon VPC and your data center For more information about VPC configuration options see the Amazon Virtual Private Cloud (Amazon VPC) Security section ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 9 Amazon Corporate Segregation Logically the AWS Production network is se gregated from the Amazon Corporate network by means of a complex set of network security / segregation devices AWS developers and administrators on the corporate network who need to access AWS cloud components in order to maintain them must explicitly req uest access through the AWS ticketing 
system All requests are reviewed and approved by the applicable service owner Approved AWS personnel then connect to the AWS network through a bastion host that restricts access to network devices and other cloud com ponents logging all activity for security review Access to bastion hosts require SSH public key authentication for all user accounts on the host For more information on AWS developer and administrator logical access see AWS Access below Fault Toleran t Design Amazon’s infrastructure has a high level of availability and provides you with the capability to deploy a resilient IT architecture AWS has designed its systems to tolerate system or hardware failures with minimal customer impact Data centers ar e built in clusters in various global regions All data centers are online and serving customers; no data center is “cold” In case of failure automated processes move customer data traffic away from the affected area Core applications are deployed in an N+1 configuration so that in the event of a data center failure there is sufficient capacity to enable traffic to be load balanced to the remaining sites AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple availability zones within each region Each availability zone is designed as an independent failure zone This means that availability zones are physically separated within a typical metropolitan region and are located i n lower risk flood plains (specific flood zone categorization varies by region) In addition to utilizing discrete uninterruptable power supply (UPS) and onsite backup generators they are each fed via different grids from independent utilities to further reduce single points of failure Availability zones are all redundantly connected to multiple tier 1 transit providers You should architect your AWS usage to take advantage of multiple regions and availability zones Distributing applications across multi ple availability zones provides the ability to remain resilient in the face of most failure scenarios including natural disasters or system failures However you should be aware of location dependent ArchivedAmazon Web Services Amazon Web Services: Overview of Securi ty Processes Page 10 privacy and compliance requirements such as the EU Da ta Privacy Directive Data is not replicated between regions unless proactively done so by the customer thus allowing customers with these types of data placement and privacy requirements the ability to establish compliant environments It should be noted that all communications between regions is across public internet infrastructure; therefore appropriate encryption methods should be used to protect sensitive data Data centers are built in clusters in various global regions including: US East (Norther n Virginia) US West (Oregon) US West (Northern California) AWS GovCloud (US) (Oregon) EU (Frankfurt) EU (Ireland) Asia Pacific (Seoul) Asia Pacific (Singapore) Asia Pacific (Tokyo) Asia Pacific (Sydney) China (Beijing) and South America (Sao Paulo) For a complete list of AWS R egions see the AWS Global Infrastructure page AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move workloads into the cloud by helping them meet certain regulatory and compliance requirements The AWS GovCloud (US) framework allows US government agencies and their contractors to comply with US International Traffic in Arms Regulations (ITAR) reg ulations as well as the Federal 
Risk and Authorization Management Program (FedRAMP) requirements AWS GovCloud (US) has received an Agency Authorization to Operate (ATO) from the US Department of Health and Human Services (HHS) utilizing a FedRAMP accredit ed Third Party Assessment Organization (3PAO) for several AWS services The AWS GovCloud (US) Region provides the same fault tolerant design as other regions with two Availability Zones In addition the AWS GovCloud (US) region is a mandatory AWS Virtual Private Cloud (VPC) service by default to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses For more information see AWS GovCloud (US) Network Monitoring and Protection AWS u ses a wide variety of automated monitoring systems to provide a high level of service performance and availability AWS monitoring tools are designed to detect unusual or unauthorized activities and conditions at in gress and egress communication points These tools monitor server and network usage port scanning activities application usage and unauthorized intrusion attempts The tools have the ability to set custom performance metrics thresholds for unusual activ ity Systems within AWS are extensively instrumented to monitor key operational metrics Alarms are configured to automatically notify operations and management personnel when early warning thresholds are crossed on key operational metrics An on call ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 11 sche dule is used so personnel are always available to respond to operational issues This includes a pager system so alarms are quickly and reliably communicated to operations personnel Documentation is maintained to aid and inform operations personnel in han dling incidents or issues If the resolution of an issue requires collaboration a conferencing system is used which supports communication and logging capabilities Trained call leaders facilitate communication and progress during the handling of operatio nal issues that require collaboration Post mortems are convened after any significant operational issue regardless of external impact and Cause of Error (COE) documents are drafted so the root cause is captured and preventative actions are taken in the future Implementation of the preventative measures is tracked during weekly operations meetings AWS Access The AWS Production network is segregated from the Amazon Corporate network and requires a separate set of credentials for logical access The Amazo n Corporate network relies on user IDs passwords and Kerberos wh ereas the AWS Production network requires SSH public key authentication through a bastion host AWS developers and administrators on the Amazon Corporate network who need to access AWS clou d components must explicitly request access through the AWS access management system All requests are reviewed and approved by the appropriate owner or manager Account Review and Audit Accounts are reviewed every 90 days; explicit re approval is required or access to the resource is automatically revoked Access is also automatically revoked when an employee’s record is terminated in Amazon’s Human Resources system Windows and UNIX accounts are disabled and Amazon’s permission management system removes the user from all systems Requests for changes in access are captured in the Amazon permissions management tool audit log When changes in an employee’s job function occur continued access must be explicitly approved to the resource or it will be 
automati cally revoked ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 12 Background Checks AWS has established formal policies and procedures to delineate the minimum standards for logical access to AWS platform and infrastructure hosts AWS conducts criminal background checks as permitted by law as part of pre employment screening practices for employees and commensurate with the employee’s position and level of access The policies also identify functional responsibilities for the administration of logical access and security Credentials Policy AWS Securi ty has established a credentials policy with required configurations and expiration intervals Passwords must be complex and are forced to be changed every 90 days Secure Design Principles The AWS development process follows secure software development be st practices which include formal design reviews by the AWS Security Team threat modeling and completion of a risk assessment Static code analysis tools are run as a part of the standard build process and all deployed software undergoes recurring pene tration testing performed by carefully selected industry experts Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoing operations Change Management Routine emergency and configuration chan ges to existing AWS infrastructure are authorized logged tested approved and documented in accordance with industry norms for similar systems Updates to the AWS infrastructure are done to minimize any impact on the customer and their use of the servic es AWS will communicate with customers either via email or through the AWS Service Health Dashboard when service use is likely to be adversely affected Software AWS applies a systematic approach to mana ging change so that changes to customer impacting services are thoroughly reviewed tested approved and well communicated The AWS change management process is designed to avoid unintended service disruptions and to maintain the integrity of service to t he customer Changes deployed into production environments are: ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 13 • Reviewed – Peer reviews of the technical aspects of a change are required • Tested – Changes being applied are tested to help ensure they will behave as expected and not adversely impact perfor mance • Approved – All changes must be authorized in order to provide appropriate oversight and understanding of business impact Changes are typically pushed into production in a phased deployment starting with lowest impact areas Deployments are tested on a single system and closely monitored so impacts can be evaluated Service owners have a number of configurable metrics that measure the health of the service’s upstream dependencies These metrics are closely monitored with thresholds and alarmi ng in place Rollback procedures are documented in the Change Management (CM) ticket When possible changes are scheduled during regular change windows Emergency changes to production systems that require deviations from standard change management proced ures are associated with an incident and are logged and approved as appropriate Periodically AWS performs self audits of changes to key services to monitor quality maintain high standards and facilitate continuous improvement of the change management p rocess Any exceptions are analyzed to determine the root cause and appropriate actions are taken to bring the change into 
compliance or to roll back the change if necessary. Actions are then taken to address and remediate the process or people issue.

Infrastructure
Amazon's Corporate Applications team develops and manages software to automate IT processes for UNIX/Linux hosts in the areas of third-party software delivery, internally developed software, and configuration management. The Infrastructure team maintains and operates a UNIX/Linux configuration management framework to address hardware scalability, availability, auditing, and security management. By centrally managing hosts through the use of automated processes that manage change, Amazon is able to achieve its goals of high availability, repeatability, scalability, security, and disaster recovery. Systems and network engineers monitor the status of these automated tools on a continuous basis, reviewing reports to respond to hosts that fail to obtain or update their configuration and software.

Internally developed configuration management software is installed when new hardware is provisioned. These tools are run on all UNIX hosts to validate that they are configured and that software is installed in compliance with standards determined by the role assigned to the host. This configuration management software also helps to regularly update packages that are already installed on the host. Only approved personnel enabled through the permissions service may log in to the central configuration management servers.

AWS Account Security Features
AWS provides a variety of tools and features that you can use to keep your AWS account and resources safe from unauthorized use. These include credentials for access control, HTTPS endpoints for encrypted data transmission, the creation of separate IAM user accounts, user activity logging for security monitoring, and Trusted Advisor security checks. You can take advantage of all of these security tools no matter which AWS services you select.

AWS Credentials
To help ensure that only authorized users and processes access your AWS account and resources, AWS uses several types of credentials for authentication. These include passwords, cryptographic keys, digital signatures, and certificates. AWS also provides the option of requiring multi-factor authentication (MFA) to log in to your AWS account or IAM user accounts. The following table highlights the various AWS credentials and their uses.

Table 1: Credential types and uses

• Passwords. Used for: AWS root account or IAM user account login to the AWS Management Console. A string of characters used to log in to your AWS account or IAM account. AWS passwords must be a minimum of 6 characters and may be up to 128 characters.

• Multi-Factor Authentication (MFA). Used for: AWS root account or IAM user account login to the AWS Management Console. A six-digit, single-use code that is required in addition to your password to log in to your AWS account or IAM user account.

• Access Keys. Used for: digitally signed requests to AWS APIs (using the AWS SDK, CLI, or REST/Query APIs). Includes an access key ID and a secret access key. You use access keys to digitally sign programmatic requests that you make to AWS.

• Key Pairs. Used for: SSH login to EC2 instances and CloudFront signed URLs. A key pair is required to connect to an EC2 instance launched from a public AMI. The supported lengths are 1024, 2048, and 4096; if you connect using SSH while using the EC2 Instance Connect API, the supported lengths are 2048 and 4096. You can have a key pair generated automatically for you when you launch the instance, or you can upload your own.

• X.509 Certificates. Used for: digitally signed SOAP requests to AWS APIs and SSL server certificates for HTTPS. X.509 certificates are only used to sign SOAP-based requests (currently used only with Amazon S3). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.
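For IAM user accounts, the MFA credential type in the table above can also be provisioned programmatically. The following minimal boto3 sketch creates a virtual MFA device and enables it for a user; the user name and the two consecutive authentication codes are placeholders that you would replace with real values from the authenticator application seeded with the device's secret.

    import boto3

    iam = boto3.client("iam")

    # Create a virtual MFA device; the response contains the Base32 seed and QR code
    # used to provision an authenticator application (the device name is illustrative).
    mfa = iam.create_virtual_mfa_device(VirtualMFADeviceName="alice-mfa")

    # Enable the device for the user by proving possession of it with two
    # consecutive codes generated by the authenticator application.
    iam.enable_mfa_device(
        UserName="alice",
        SerialNumber=mfa["VirtualMFADevice"]["SerialNumber"],
        AuthenticationCode1="123456",
        AuthenticationCode2="789012",
    )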
You can download a Credential Report for your account at any time from the Security Credentials page. This report lists all of your account's users and the status of their credentials: whether they use a password, whether their password expires and must be changed regularly, the last time they changed their password, the last time they rotated their access keys, and whether they have MFA enabled. For security reasons, if your credentials have been lost or forgotten, you cannot recover them or re-download them. However, you can create new credentials and then disable or delete the old set of credentials. In fact, AWS recommends that you change (rotate) your access keys and certificates on a regular basis. To help you do this without potential impact to your application's availability, AWS supports multiple concurrent access keys and certificates. With this feature, you can rotate keys and certificates into and out of operation on a regular basis without any downtime to your application. This can help to mitigate risk from lost or compromised access keys or certificates. The AWS IAM API enables you to rotate the access keys of your AWS account as well as those of IAM user accounts.

Passwords
Passwords are required to access your AWS account, individual IAM user accounts, the AWS Discussion Forums, and the AWS Support Center. You specify the password when you first create the account, and you can change it at any time by going to the Security Credentials page. AWS passwords can be up to 128 characters long and contain special characters, so we encourage you to create a strong password that cannot be easily guessed. You can set a password policy for your IAM user accounts to ensure that strong passwords are used and that they are changed often. A password policy is a set of rules that define the type of password an IAM user can set. For more information about password policies, see Managing Passwords for IAM Users.
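Both the password policy and the credential report described above can be managed through the IAM API. The following boto3 sketch is illustrative only; the specific policy values (14-character minimum, 90-day expiration, reuse prevention) are example settings, not AWS requirements.

    import time
    import boto3

    iam = boto3.client("iam")

    # Enforce a strong password policy for IAM users (example values).
    iam.update_account_password_policy(
        MinimumPasswordLength=14,
        RequireSymbols=True,
        RequireNumbers=True,
        RequireUppercaseCharacters=True,
        RequireLowercaseCharacters=True,
        MaxPasswordAge=90,
        PasswordReusePrevention=24,
    )

    # Generate the account credential report and download it as CSV.
    while iam.generate_credential_report()["State"] != "COMPLETE":
        time.sleep(2)
    report_csv = iam.get_credential_report()["Content"].decode("utf-8")
    print(report_csv.splitlines()[0])  # CSV header row listing the reported credential fields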
AWS Multi-Factor Authentication (MFA)
AWS Multi-Factor Authentication (MFA) is an additional layer of security for accessing AWS services. When you enable this optional feature, you must provide a six-digit, single-use code in addition to your standard user name and password credentials before access is granted to your AWS account settings or to AWS services and resources. You get this single-use code from an authentication device that you keep in your physical possession. This is called multi-factor authentication because more than one authentication factor is checked before access is granted: a password (something you know) and the precise code from your authentication device (something you have). You can enable MFA devices for your AWS account as well as for the users you have created under your AWS account with AWS IAM. In addition, you can add MFA protection for access across AWS accounts, for when you want to allow a user you have created under one AWS account to use an IAM role to access resources under another AWS account. You can require the user to use MFA before assuming the role as an additional layer of security.

AWS MFA supports the use of both hardware tokens and virtual MFA devices. Virtual MFA devices use the same protocols as the physical MFA devices, but can run on any mobile hardware device, including a smartphone. A virtual MFA device uses a software application that generates six-digit authentication codes that are compatible with the Time-Based One-Time Password (TOTP) standard, as described in RFC 6238. Most virtual MFA applications allow you to host more than one virtual MFA device, which makes them more convenient than hardware MFA devices. However, you should be aware that because a virtual MFA might run on a less secure device such as a smartphone, a virtual MFA might not provide the same level of security as a hardware MFA device.

You can also enforce MFA authentication for AWS service APIs in order to provide an extra layer of protection over powerful or privileged actions, such as terminating Amazon EC2 instances or reading sensitive data stored in Amazon S3. You do this by adding an MFA authentication requirement to an IAM access policy. You can attach these access policies to IAM users, IAM groups, or resources that support Access Control Lists (ACLs), such as Amazon S3 buckets, SQS queues, and SNS topics. It is easy to obtain hardware tokens from a participating third-party provider, or virtual MFA applications from an app store, and to set them up for use via the AWS website. More information is available at AWS Multi-Factor Authentication (MFA).

Access Keys
AWS requires that all API requests be signed; that is, they must include a digital signature that AWS can use to verify the identity of the requestor. You calculate the digital signature using a cryptographic hash function. The input to the hash function in this case includes the text of your request and your secret access key. If you use any of the AWS SDKs to generate requests, the digital signature calculation is done for you; otherwise, you can have your application calculate it and include it in your REST or Query requests by following the directions in Making Requests Using the AWS SDKs. Not only does the signing process help protect message integrity by preventing tampering with the request while it is in transit, it also helps protect against potential replay attacks: a request must reach AWS within 15 minutes of the time stamp in the request, otherwise AWS denies the request.

The most recent version of the digital signature calculation process is Signature Version 4, which calculates the signature using the HMAC-SHA256 protocol. Version 4 provides an additional measure of protection over previous versions by requiring that you sign the message using a key that is derived from your secret access key, rather than using the secret access key itself. In addition, you derive the signing key based on credential scope, which facilitates cryptographic isolation of the signing key. Because access keys can be misused if they fall into the wrong hands, we encourage you to save them in a safe place and not embed them in your code. For customers with large fleets of elastically scaling EC2 instances, the use of IAM roles can be a more secure and convenient way to manage the distribution of access keys. IAM roles provide temporary credentials, which are not only automatically loaded to the target instance but are also automatically rotated multiple times a day.
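The AWS SDKs perform the Signature Version 4 calculation automatically, but the key derivation itself is straightforward if you ever need to implement it. The following sketch shows only the derivation of the scoped signing key; the secret key, date, Region, and service values are placeholders, and the string to sign would be built from the canonical request as described in the AWS signing documentation.

    import hashlib
    import hmac

    def _hmac_sha256(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    def sigv4_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
        # The signing key is derived from the secret access key and scoped to a
        # date, a Region, and a service, so the secret key itself is never used
        # directly to sign requests.
        k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
        k_region = _hmac_sha256(k_date, region)
        k_service = _hmac_sha256(k_region, service)
        return _hmac_sha256(k_service, "aws4_request")

    key = sigv4_signing_key("wJalrXUtnFEMI/EXAMPLEKEY", "20200701", "us-east-1", "ec2")
    signature = hmac.new(key, b"<string-to-sign>", hashlib.sha256).hexdigest()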
Key Pairs
Amazon EC2 instances created from a public AMI use a public/private key pair rather than a password for signing in via Secure Shell (SSH). The public key is embedded in your instance, and you use the private key to sign in securely without a password. After you create your own AMIs, you can choose other mechanisms to securely log in to your new instances. You can have a key pair generated automatically for you when you launch the instance, or you can upload your own. Save the private key in a safe place on your system and record the location where you saved it. For Amazon CloudFront, you use key pairs to create signed URLs for private content, such as when you want to distribute restricted content that someone paid for. You create Amazon CloudFront key pairs by using the Security Credentials page. CloudFront key pairs can be created only by the root account and cannot be created by IAM users.

X.509 Certificates
X.509 certificates are used to sign SOAP-based requests. X.509 certificates contain a public key and additional metadata (such as an expiration date, which AWS verifies when you upload the certificate) and are associated with a private key. When you create a request, you create a digital signature with your private key and then include that signature in the request, along with your certificate. AWS verifies that you are the sender by decrypting the signature with the public key that is in your certificate. AWS also verifies that the certificate you sent matches the certificate that you uploaded to AWS. For your AWS account, you can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page. For IAM users, you must create the X.509 certificate (signing certificate) by using third-party software. In contrast with root account credentials, AWS cannot create an X.509 certificate for IAM users. After you create the certificate, you attach it to an IAM user by using IAM. In addition to SOAP requests, X.509 certificates are used as SSL/TLS server certificates for customers who want to use HTTPS to encrypt their transmissions. To use them for HTTPS, you can use an open-source tool like OpenSSL to create a unique private key. You need the private key to create the Certificate Signing Request (CSR) that you submit to a certificate authority (CA) to obtain the server certificate. You then use the AWS CLI to upload the certificate, private key, and certificate chain to IAM. You also need an X.509 certificate to create a customized Linux AMI for EC2 instances. The certificate is only required to create an instance-backed AMI (as opposed to an EBS-backed AMI). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.

Individual User Accounts
AWS provides a centralized mechanism called AWS Identity and Access Management (IAM) for creating and managing individual users within your AWS account. A user can be any individual, system, or application that interacts with AWS resources, either programmatically or through the AWS Management Console or AWS Command Line Interface (CLI). Each user has a unique name within the AWS account and a unique set of security credentials not shared with other users. AWS IAM eliminates the need to share passwords or keys, and enables you to minimize the use of your AWS account credentials. With IAM, you define policies that control which AWS services your users can access and what they can do with them. You can grant users only the minimum permissions they need to perform their jobs. See the AWS Identity and Access Management (AWS IAM) section for more information.
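As an example of this least-privilege approach, the following boto3 sketch creates an IAM user and attaches a policy that allows only read access to a single, hypothetical S3 bucket; the user name, policy name, and bucket ARN are illustrative.

    import json
    import boto3

    iam = boto3.client("iam")

    iam.create_user(UserName="report-reader")

    # Grant only the S3 read permissions this user needs for its job.
    policy = iam.create_policy(
        PolicyName="ReadOnlyReportsBucket",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::example-reports",
                    "arn:aws:s3:::example-reports/*",
                ],
            }],
        }),
    )
    iam.attach_user_policy(UserName="report-reader", PolicyArn=policy["Policy"]["Arn"])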
Secure HTTPS Access Points
For greater communication security when accessing AWS resources, you should use HTTPS instead of HTTP for data transmissions. HTTPS uses the SSL/TLS protocol, which uses public-key cryptography to prevent eavesdropping, tampering, and forgery. All AWS services provide secure customer access points (also called API endpoints) that allow you to establish secure HTTPS communication sessions. Several services also offer more advanced cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This helps prevent the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised.

Security Logs
As important as credentials and encrypted endpoints are for preventing security problems, logs are just as crucial for understanding events after a problem has occurred. To be effective as a security tool, a log must include not just a list of what happened and when, but also identify the source. To help you with your after-the-fact investigations and near-real-time intrusion detection, AWS CloudTrail provides a log of events within your account. For each event, you can see what service was accessed, what action was performed, and who made the request. CloudTrail captures API calls as well as other events, such as console sign-in events. Once you have enabled CloudTrail, event logs are delivered about every 5 minutes. You can configure CloudTrail so that it aggregates log files from multiple Regions and/or accounts into a single Amazon S3 bucket. By default, a single trail records and delivers events in all current and future Regions. In addition to S3, you can send events to CloudWatch Logs for custom metrics and alarming, or you can upload the logs to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns. For rapid response, you can create CloudWatch Events rules to take timely action on specific events. By default, log files are stored securely in Amazon S3, but you can also archive them to Amazon S3 Glacier to help meet audit and compliance requirements. In addition to CloudTrail's user activity logs, you can use the Amazon CloudWatch Logs feature to collect and monitor system, application, and custom log files from your EC2 instances and other sources in near real time. For example, you can monitor your web server's log files for invalid user messages to detect unauthorized login attempts to your guest OS.
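For example, a trail that records API activity from all Regions into a single S3 bucket can be created with a few calls. This is a minimal boto3 sketch; the trail and bucket names are illustrative, and the bucket must already carry a bucket policy that allows CloudTrail to write to it.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # One trail that records events from all current and future Regions,
    # with log file integrity validation enabled.
    cloudtrail.create_trail(
        Name="account-activity-trail",
        S3BucketName="example-cloudtrail-logs",
        IsMultiRegionTrail=True,
        EnableLogFileValidation=True,
    )
    cloudtrail.start_logging(Name="account-activity-trail")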
AWS Trusted Advisor Security Checks
The AWS Trusted Advisor customer support service not only monitors cloud performance and resiliency, but also cloud security. Trusted Advisor inspects your AWS environment and makes recommendations when opportunities exist to save money, improve system performance, or close security gaps. It provides alerts on several of the most common security misconfigurations that can occur, including leaving certain ports open that make you vulnerable to hacking and unauthorized access, neglecting to create IAM accounts for your internal users, allowing public access to Amazon S3 buckets, not turning on user activity logging (AWS CloudTrail), or not using MFA on your root AWS account. You also have the option for a security contact at your organization to automatically receive a weekly email with an updated status of your Trusted Advisor security checks. The AWS Trusted Advisor service provides four checks at no additional charge to all users, including three important security checks: specific ports unrestricted, IAM use, and MFA on root account. When you sign up for Business- or Enterprise-level AWS Support, you receive full access to all Trusted Advisor checks.

AWS Config Security Checks
AWS Config is a continuous monitoring and assessment service that records changes to the configuration of your AWS resources. You can view the current and historic configurations of a resource and use this information to troubleshoot outages, conduct security attack analysis, and much more. You can view the configuration at any point in time and use that information to re-configure your resources and bring them into a steady state during an outage situation. Using AWS Config Rules, you can run continuous assessment checks on your resources to verify that they comply with your own security policies, industry best practices, and compliance regimes such as PCI/HIPAA. For example, AWS Config provides a managed AWS Config Rule to ensure that encryption is turned on for all EBS volumes in your account. You can also write a custom AWS Config Rule to essentially "codify" your own corporate security policies. AWS Config alerts you in real time when a resource is misconfigured or when a resource violates a particular security policy.
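As an illustration, the managed rule that checks EBS volume encryption can be enabled with a single API call. The following boto3 sketch assumes that a configuration recorder and delivery channel are already set up in the account; the rule name is illustrative.

    import boto3

    config = boto3.client("config")

    # Continuously evaluate every EBS volume against the AWS managed rule
    # ENCRYPTED_VOLUMES and flag unencrypted volumes as noncompliant.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "ebs-volumes-encrypted",
            "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
            "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        }
    )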
AWS Service-Specific Security
Not only is security built into every layer of the AWS infrastructure, but also into each of the services available on that infrastructure. AWS services are architected to work efficiently and securely with all AWS networks and platforms. Each service provides extensive security features to enable you to protect sensitive data and applications.

Compute Services
Amazon Web Services provides a variety of cloud-based computing services that include a wide selection of compute instances that can scale up and down automatically to meet the needs of your application or enterprise.

Amazon Elastic Compute Cloud (Amazon EC2) Security
Amazon Elastic Compute Cloud (Amazon EC2) is a key component in Amazon's Infrastructure-as-a-Service (IaaS), providing resizable computing capacity using server instances in AWS's data centers. Amazon EC2 is designed to make web-scale computing easier by enabling you to obtain and configure capacity with minimal friction. You create and launch instances, which are collections of platform hardware and software.

Multiple Levels of Security
Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host platform, the virtual instance OS or guest OS, a firewall, and signed API calls. Each of these items builds on the capabilities of the others. The goal is to prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users, and to provide Amazon EC2 instances themselves that are as secure as possible without sacrificing the flexibility in configuration that customers demand.

Hypervisor
Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor, taking advantage of paravirtualization (in the case of Linux guests). Because paravirtualized guests rely on the hypervisor to provide support for operations that normally require privileged access, the guest OS has no elevated access to the CPU. The CPU provides four separate privilege modes, 0 through 3, called rings. Ring 0 is the most privileged and Ring 3 the least. The host OS executes in Ring 0. However, rather than executing in Ring 0 as most operating systems do, the guest OS runs in a lesser-privileged Ring 1 and applications in the least-privileged Ring 3. This explicit virtualization of the physical resources leads to a clear separation between guest and hypervisor, resulting in additional security separation between the two. Traditionally, hypervisors protect the physical hardware and BIOS; virtualize the CPU, storage, and networking; and provide a rich set of management capabilities. With the Nitro System, AWS is able to break apart those functions, offload them to dedicated hardware and software, and reduce costs by delivering all of the resources of a server to your instances. The Nitro Hypervisor provides consistent performance and increased compute and memory resources for EC2 virtualized instances by removing host system software components. It allows AWS to offer larger instance sizes (like c5.18xlarge) that provide practically all of the resources from the server to customers. Previously, C3 and C4 instances each eliminated software components by moving VPC and EBS functionality to hardware designed and built by AWS. This hardware enables the Nitro Hypervisor to be very small and uninvolved in data processing tasks for networking and storage. Nevertheless, as AWS expands its global cloud infrastructure, Amazon EC2's use of its Xen-based hypervisor will also continue to grow. Xen will remain a core component of EC2 instances for the foreseeable future.

Instance Isolation
Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. Amazon is active in the Xen community, which provides awareness of the latest developments. In addition, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets must pass through this layer; thus, an instance's neighbors have no more access to that instance than any other host on the Internet, and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms. Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customer, so that one customer's data is never unintentionally exposed to another. In addition, memory allocated to guests is scrubbed (set to zero) by the hypervisor when it is unallocated to a guest. The memory is not returned to the pool of free memory available for new allocations until the memory scrubbing is complete. AWS recommends that customers further protect their data using appropriate means. One common solution is to run an encrypted file system on top of the virtualized disk device.

Figure 2: Amazon EC2 multiple layers of security
Host Operating System: Administrators with a business need to access the management plane are required to use multi-factor authentication to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems can be revoked.

Guest Operating System: Virtual instances are completely controlled by you, the customer. You have full root access or administrative control over accounts, services, and applications. AWS does not have any access rights to your instances or the guest OS. AWS recommends a base set of security best practices, including disabling password-only access to your guests and utilizing some form of multi-factor authentication to gain access to your instances (or at a minimum, certificate-based SSH Version 2 access). Additionally, you should employ a privilege escalation mechanism with logging on a per-user basis. For example, if the guest OS is Linux, after hardening your instance you should utilize certificate-based SSHv2 to access the virtual instance, disable remote root login, use command-line logging, and use 'sudo' for privilege escalation. You should generate your own key pairs in order to guarantee that they are unique and not shared with other customers or with AWS. AWS also supports the use of the Secure Shell (SSH) network protocol to enable you to log in securely to your UNIX/Linux EC2 instances. Authentication for SSH used with AWS is via a public/private key pair to reduce the risk of unauthorized access to your instance. You can also connect remotely to your Windows instances using Remote Desktop Protocol (RDP) by utilizing an RDP certificate generated for your instance. You also control the updating and patching of your guest OS, including security updates. Amazon-provided Windows and Linux-based AMIs are updated regularly with the latest patches, so if you do not need to preserve data or customizations on your running Amazon AMI instances, you can simply relaunch new instances with the latest updated AMI. In addition, updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories.

Firewall: Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default deny-all mode, and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by protocol, by service port, and by source IP address (individual IP or Classless Inter-Domain Routing (CIDR) block). The firewall can be configured in groups, permitting different classes of instances to have different rules. Consider, for example, the case of a traditional three-tiered web application. The group for the web servers would have port 80 (HTTP) and/or port 443 (HTTPS) open to the Internet. The group for the application servers would have port 8000 (application specific) accessible only to the web server group. The group for the database servers would have port 3306 (MySQL) open only to the application server group. All three groups would permit administrative access on port 22 (SSH), but only from the customer's corporate network. Highly secure applications can be deployed using this expressive mechanism. See the following figure.

Figure 3: Amazon EC2 security group firewall
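The three-tier arrangement described above maps directly onto security group rules that reference other security groups rather than IP ranges. The following boto3 sketch shows the web and application tiers only; the VPC ID is a placeholder, and the database and SSH rules would follow the same pattern.

    import boto3

    ec2 = boto3.client("ec2")
    vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC ID

    web = ec2.create_security_group(
        GroupName="web-tier", Description="Web servers", VpcId=vpc_id)
    app = ec2.create_security_group(
        GroupName="app-tier", Description="Application servers", VpcId=vpc_id)

    # Web tier: HTTPS open to the Internet.
    ec2.authorize_security_group_ingress(
        GroupId=web["GroupId"],
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
    )

    # App tier: port 8000 reachable only from the web tier's security group.
    ec2.authorize_security_group_ingress(
        GroupId=app["GroupId"],
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 8000, "ToPort": 8000,
                        "UserIdGroupPairs": [{"GroupId": web["GroupId"]}]}],
    )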
The firewall isn't controlled through the guest OS; rather, it requires your X.509 certificate and key to authorize changes, thus adding an extra layer of security. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, therefore enabling you to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports you open, and for what duration and purpose. The default state is to deny all incoming traffic, and you should plan carefully what you will open when building and securing your applications. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls such as IPtables or the Windows Firewall, and VPNs. This can restrict both inbound and outbound traffic.

API Access: API calls to launch and terminate instances, change firewall parameters, and perform other functions are all signed by your Amazon Secret Access Key, which could be either the AWS account's Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to your Secret Access Key, Amazon EC2 API calls cannot be made on your behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. Amazon recommends always using SSL-protected API endpoints.

Permissions: AWS IAM also enables you to further control what APIs a user has permissions to call.

Elastic Block Storage (Amazon EBS) Security
Amazon Elastic Block Storage (Amazon EBS) allows you to create storage volumes from 1 GB to 16 TB that can be mounted as devices by Amazon EC2 instances. Storage volumes behave like raw, unformatted block devices, with user-supplied device names and a block device interface. You can create a file system on top of Amazon EBS volumes, or use them in any other way you would use a block device (like a hard drive). Amazon EBS volume access is restricted to the AWS account that created the volume, and to the users under the AWS account created with AWS IAM (if the user has been granted access to the EBS operations), thus denying all other AWS accounts and users the permission to view or access the volume. Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of normal operation of those services, and at no additional charge. However, Amazon EBS replication is stored within the same Availability Zone, not across multiple zones; therefore, it is highly recommended that you conduct regular snapshots to Amazon S3 for long-term data durability. For customers who have architected complex transactional databases using EBS, it is recommended that backups to Amazon S3 be performed through the database management system, so that distributed transactions and logs can be checkpointed. AWS does not perform backups of data that are maintained on virtual disks attached to running instances on Amazon EC2.
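For example, a regular snapshot schedule can be driven by a few API calls. This is a minimal boto3 sketch; the Availability Zone, volume size, and description are illustrative, and Encrypted=True uses the EBS encryption feature discussed below.

    import boto3

    ec2 = boto3.client("ec2")

    # Create an encrypted volume and snapshot it for long-term durability.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2", Encrypted=True)
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
    ec2.create_snapshot(VolumeId=volume["VolumeId"], Description="Nightly backup")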
You can make Amazon EBS volume snapshots publicly available to other AWS accounts to use as the basis for creating their own volumes. Sharing Amazon EBS volume snapshots does not provide other AWS accounts with the permission to alter or delete the original snapshot, as that right is explicitly reserved for the AWS account that created the volume. An EBS snapshot is a block-level view of an entire EBS volume. Note that data that is not visible through the file system on the volume, such as files that have been deleted, may be present in the EBS snapshot. If you want to create shared snapshots, you should do so carefully. If a volume has held sensitive data or has had files deleted from it, a new EBS volume should be created. The data to be contained in the shared snapshot should be copied to the new volume, and the snapshot created from the new volume.

Amazon EBS volumes are presented to you as raw, unformatted block devices that have been wiped prior to being made available for use. Wiping occurs immediately before reuse, so that you can be assured that the wipe process completed. If you have procedures requiring that all data be wiped via a specific method, such as those detailed in NIST 800-88 ("Guidelines for Media Sanitization"), you have the ability to do so on Amazon EBS. You should conduct a specialized wipe procedure prior to deleting the volume for compliance with your established requirements. Encryption of sensitive data is generally a good security practice, and AWS provides the ability to encrypt EBS volumes and their snapshots with AES-256. The encryption occurs on the servers that host the EC2 instances, providing encryption of data as it moves between EC2 instances and EBS storage. In order to be able to do this efficiently and with low latency, the EBS encryption feature is only available on EC2's more powerful instance types (e.g., M3, C3, R3, G2).

Auto Scaling Security
Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define, so that the number of Amazon EC2 instances you are using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Like all AWS services, Auto Scaling requires that every request made to its control API be authenticated, so only authenticated users can access and manage Auto Scaling. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. However, getting credentials out to new EC2 instances launched with Auto Scaling can be challenging for large or elastically scaling fleets. To simplify this process, you can use roles within IAM, so that any new instances launched with a role will be given credentials automatically. When you launch an EC2 instance with an IAM role, temporary AWS security credentials with permissions specified by the role are securely provisioned to the instance and are made available to your application via the Amazon EC2 Instance Metadata Service. The Metadata Service makes new temporary security credentials available prior to the expiration of the current active credentials, so that valid credentials are always available on the instance. In addition, the temporary security credentials are automatically rotated multiple times per day, providing enhanced security. You can further control access to Auto Scaling by creating users under your AWS account using AWS IAM, and controlling what Auto Scaling APIs these users have permission to call. For more information about using roles when launching instances, see Identity and Access Management for Amazon EC2.

Networking Services
Amazon Web Services provides a range of networking services that enable you to create a logically isolated network that you define, establish a private network connection to the AWS cloud, use a highly available and scalable DNS
service and deliver content to your end users with low latency at high data transfer speeds with a content delivery web service Elastic Load Balancing Security Elastic Load Balancing is used to manage traffic o n a fleet of Amazon EC2 instances distributing traffic to instances across all availability zones within a region Elastic Load Balancing has all the advantages of an on premises load balancer plus several security benefits: • Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer • Offers clients a single point of contact and can also serve as the first line of defense against attacks on your network • When used in an Amazon VPC supports crea tion and management of security groups associated with your Elastic Load Balancing to provide additional networking and security options • Supports end toend traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connec tions When TLS is used the TLS server certificate used to terminate client connections can be managed centrally on the load balancer rather than on every individual instance HTTPS/TLS uses a long term secret key to generate a short term session key to be used between the server and the browser to create the ciphered (encrypted) message Elastic Load Balancing configures your load balancer with a pre defined cipher set that is used for TLS negotiation when a connection is established between a client and your load balancer The pre defined cipher set provides compatibility with a broad range of clients and uses strong cryptographic algorithms However some customers may have requirements for allowing only specific ciphers and protocols (such as PCI S OX etc) from clients to ensure that standards are met In these cases Elastic Load Balancing provides options for selecting different configurations for TLS protocols and ciphers You can choose to enable or disable the ciphers depending on your specifi c requirements To help ensure the use of newer and stronger cipher suites when establishing a secure connection you can configure the load balancer to have the final say in the cipher suite selection during the client server negotiation When the Server Order Preference option is selected the load balancer select s a cipher suite based on the server’s prioritization ArchivedAmazon Web Services Amazon Web Services: Overview of Security Proce sses Page 30 of cipher suites rather than the client’s This gives you more control over the level of security that clients use to connect to your load ba lancer For even greater communication privacy Elastic Load Balanc ing allows the use of Perfect Forward Secrecy which uses session keys that are ephemeral and not stored anywhere This prevents the decoding of captured data even if the secret long term key itself is compromised Elastic Load Balancing allows you to identify the originating IP address of a client connecting to your servers whether you’re using HTTPS or TCP load balancing Typically client connection information such as IP address and p ort is lost when requests are proxied through a load balancer This is because the load balancer sends requests to the server on behalf of the client making your load balancer appear as though it is the requesting client Having the originating client IP address is useful if you need more information about visitors to your applications in order to gather connection statistics analyze traffic logs or manage whitelists of IP addresses Elastic Load Balancing 
access logs contain information about each HTTP and TCP request processed by your load balancer This includes the IP address and port of the requesting client the backend IP address of the instance that processed the request the size of the request and response and the actual request line from the client (for example GET http://wwwexamplecom: 80/HTTP/11) All requests sent to the load balancer are logged including requests that never made it to backend instances Amazon Virtual Private Cloud (Amazon VPC) Security Normally each Amazon EC2 insta nce that you launch is randomly assigned a public IP address in the Amazon EC2 address space Amazon VPC enables you to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses in the range of your choice (eg 10000/16) You can define subnets within your VPC grouping similar kinds of instances based on IP address range and then set up routing and security to control the flow of traffic in and out of the instances and subnets AWS offers a var iety of VPC architecture templates with configurations that provide varying levels of public access: • VPC with a single public subnet only Your instances run in a private isolated section of the AWS cloud with direct access to the Internet Network ACLs a nd security groups can be used to provide strict control over inbound and outbound network traffic to your instances ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 31 • VPC with public and private subnets In addition to containing a public subnet this configuration adds a private subnet whose instances a re not addressable from the Internet Instances in the private subnet can establish outbound connections to the Internet via the public subnet using Network Address Translation (NAT) • VPC with public and private subnets and hardware VPN access This config uration adds an IPsec VPN connection between your Amazon VPC and your data center effectively extending your data center to the cloud while also providing direct access to the Internet for public subnet instances in your Amazon VPC In this configuration customers add a VPN appliance on their corporate data center side • VPC with private subnet only and hardware VPN access Your instances run in a private isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet You can connect this private subnet to your corporate data center via an IPsec VPN tunnel You can also connect two VPCs using a private IP address which allows instances in the two VPCs to communicate with each other as if they are within the s ame network You can create a VPC peering connection between your own VPCs or with a VPC in another AWS account within a single region Security features within Amazon VPC include security groups network ACLs routing tables and external gateways Each of these items is complementary to providing a secure isolated network that can be extended through selective enabling of direct Internet access or private connectivity to another network Amazon EC2 instances running within an Amazon VPC inherit all of t he benefits described below related to the guest OS and protection against packet sniffing Note however that you must create VPC security groups specifically for your Amazon VPC; any Amazon EC2 security groups you have created will not work inside your Amazon VPC Also Amazon VPC security groups have additional capabilities that Amazon EC2 security groups do not have such as being 
able to change the security group after the instance is launched and being able to specify any protocol with a standard pro tocol number (as opposed to just TCP UDP or ICMP) Each Amazon VPC is a distinct isolated network within the cloud; network traffic within each Amazon VPC is isolated from all other Amazon VPCs At creation time you select an IP address range for each Amazon VPC You may create and attach an Internet ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 32 gateway virtual private gateway or both to establish external connectivity subject to the controls below API Access: Calls to create and delete Amazon VPCs change routing security group and network A CL parameters and perform other functions are all signed by your Amazon Secret Access Key which could be either the AWS Account’s Secret Access Key or the Secret Access key of a user created with AWS IAM Without access to your Secret Access Key Amazon VPC API calls cannot be made on your behalf In addition API calls can be encrypted with SSL to maintain confidentiality Amazon recommends always using SSL protected API endpoints AWS IAM also enables a customer to further control what APIs a newly crea ted user has permissions to call Subnets and Route Tables: You create one or more subnets within each Amazon VPC; each instance launched in the Amazon VPC is connected to one subnet Traditional Layer 2 security attacks including MAC spoofing and ARP spo ofing are blocked Each subnet in an Amazon VPC is associated with a routing table and all network traffic leaving the subnet is processed by the routing table to determine the destination Firewall (Security Groups): Like Amazon EC2 Amazon VPC supports a complete firewall solution enabling filtering on both ingress and egress traffic from an instance The default group enables inbound communication from other members of the same group and outbound communication to any destination Traffic can be restric ted by any IP protocol by service port as well as source/destination IP address (individual IP or Classless Inter Domain Routing (CIDR) block) The firewall isn’t controlled through the guest OS; rather it can be modified only through the invocation of Amazon VPC APIs AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall therefore enabling you to implement additional security through separation of duties The level of security afforded by the firewall is a function of which ports you open and for what duration and purpose Well informed traffic management and security design are still required on a perinstance basis AWS further encourages you to apply additional per instance filters with host based firewalls such as IP tables or the Windows Firewall ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 33 Figure 4: Amazon VPC network architecture Network Access Control Lists: To add a further layer of security within Amazon VPC you can configure network ACLs These are stateless traffic filters that apply to all traffic inbound or outbound from a subnet within Amazon VPC These ACLs can contain ordered rules to allow or deny traffic based upon IP protocol by service port as well as source/destination IP address Like security groups network ACLs are managed through Amazon VPC APIs adding an additional layer of protection and enabling additional security through separation of duties The diagram below depicts how the security controls above inter relate to enable 
The diagram below depicts how the security controls above interrelate to enable flexible network topologies while providing complete control over network traffic flows.

Figure 5: Flexible network topologies

Virtual Private Gateway: A virtual private gateway enables private connectivity between the Amazon VPC and another network. Network traffic within each virtual private gateway is isolated from network traffic within all other virtual private gateways. You can establish VPN connections to the virtual private gateway from gateway devices at your premises. Each connection is secured by a pre-shared key in conjunction with the IP address of the customer gateway device.

Internet Gateway: An Internet gateway may be attached to an Amazon VPC to enable direct connectivity to Amazon S3, other AWS services, and the Internet. Each instance desiring this access must either have an Elastic IP associated with it or route traffic through a NAT instance. Additionally, network routes are configured (see above) to direct traffic to the Internet gateway. AWS provides reference NAT AMIs that you can extend to perform network logging, deep packet inspection, application-layer filtering, or other security controls. This access can only be modified through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the Internet gateway, therefore enabling you to implement additional security through separation of duties. You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.

Dedicated Instances: Within a VPC, you can launch Amazon EC2 instances that are physically isolated at the host hardware level (i.e., they will run on single-tenant hardware). An Amazon VPC can be created with "dedicated" tenancy, so that all instances launched into the Amazon VPC use this feature. Alternatively, an Amazon VPC may be created with "default" tenancy, but you can specify dedicated tenancy for particular instances launched into it.

Elastic Network Interfaces: Each Amazon EC2 instance has a default network interface that is assigned a private IP address on your Amazon VPC network. You can create and attach an additional network interface, known as an elastic network interface, to any Amazon EC2 instance in your Amazon VPC, for a total of two network interfaces per instance. Attaching more than one network interface to an instance is useful when you want to create a management network, use network and security appliances in your Amazon VPC, or create dual-homed instances with workloads/roles on distinct subnets. A network interface's attributes, including the private IP address, Elastic IP addresses, and MAC address, follow the network interface as it is attached or detached from an instance and reattached to another instance. For more information about Amazon VPC, see Amazon Virtual Private Cloud.

Additional Network Access Control with EC2-VPC

If you launch instances in a Region where you did not have instances before AWS launched the new EC2-VPC feature (also called Default VPC), all instances are automatically provisioned in a ready-to-use default VPC. You can choose to create additional VPCs, or you can create VPCs for instances in Regions where you already had instances before we launched EC2-VPC. If you create a VPC later, using regular VPC, you specify a CIDR block, create subnets, enter the routing and security for those subnets, and provision an Internet gateway or NAT instance if you want one of your subnets to be able to reach the Internet.
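As a rough illustration of those manual steps, the boto3 sketch below creates a small VPC with one subnet, an Internet gateway, and a route to it; the CIDR blocks are placeholders, and waiters and error handling are omitted for brevity.

import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

# Attach an Internet gateway so the subnet can reach the Internet.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])

# Route all non-local traffic through the Internet gateway.
rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(RouteTableId=rt["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw["InternetGatewayId"])
ec2.associate_route_table(RouteTableId=rt["RouteTableId"], SubnetId=subnet["SubnetId"])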
When you launch EC2 instances into an EC2-VPC, most of this work is automatically performed for you. When you launch an instance into a default VPC using EC2-VPC, we do the following to set it up for you:

• Create a default subnet in each Availability Zone
• Create an Internet gateway and connect it to your default VPC
• Create a main route table for your default VPC with a rule that sends all traffic destined for the Internet to the Internet gateway
• Create a default security group and associate it with your default VPC
• Create a default network access control list (ACL) and associate it with your default VPC
• Associate the default DHCP options set for your AWS account with your default VPC

In addition to the default VPC having its own private IP range, EC2 instances launched in a default VPC can also receive a public IP. The following table summarizes the differences between instances launched into EC2-Classic, instances launched into a default VPC, and instances launched into a non-default VPC.

Table 2: Differences between different EC2 instances

Public IP address
• EC2-Classic: Your instance receives a public IP address by default.
• EC2-VPC (default VPC): Your instance receives a public IP address by default, unless you specify otherwise during launch.
• Regular VPC: Your instance does not receive a public IP address by default, unless you specify otherwise during launch.

Private IP address
• EC2-Classic: Your instance receives a private IP address from the EC2-Classic range each time it's started.
• EC2-VPC (default VPC): Your instance receives a static private IP address from the address range of your default VPC.
• Regular VPC: Your instance receives a static private IP address from the address range of your VPC.

Multiple private IP addresses
• EC2-Classic: We select a single IP address for your instance; multiple IP addresses are not supported.
• EC2-VPC (default VPC): You can assign multiple private IP addresses to your instance.
• Regular VPC: You can assign multiple private IP addresses to your instance.

Elastic IP address
• EC2-Classic: An EIP is disassociated from your instance when you stop it.
• EC2-VPC (default VPC): An EIP remains associated with your instance when you stop it.
• Regular VPC: An EIP remains associated with your instance when you stop it.

DNS hostnames
• EC2-Classic: DNS hostnames are enabled by default.
• EC2-VPC (default VPC): DNS hostnames are enabled by default.
• Regular VPC: DNS hostnames are disabled by default.

Security group
• EC2-Classic: A security group can reference security groups that belong to other AWS accounts.
• EC2-VPC (default VPC): A security group can reference security groups for your VPC only.
• Regular VPC: A security group can reference security groups for your VPC only.

Security group association
• EC2-Classic: You must terminate your instance to change its security group.
• EC2-VPC (default VPC): You can change the security group of your running instance.
• Regular VPC: You can change the security group of your running instance.

Security group rules
• EC2-Classic: You can add rules for inbound traffic only.
• EC2-VPC (default VPC): You can add rules for inbound and outbound traffic.
• Regular VPC: You can add rules for inbound and outbound traffic.

Tenancy
• EC2-Classic: Your instance runs on shared hardware; you cannot run an instance on single-tenant hardware.
• EC2-VPC (default VPC): You can run your instance on shared hardware or single-tenant hardware.
• Regular VPC: You can run your instance on shared hardware or single-tenant hardware.
Note: Security groups for instances in EC2-Classic are slightly different from security groups for instances in EC2-VPC. For example, you can add rules for inbound traffic only for EC2-Classic, but you can add rules for both inbound and outbound traffic in EC2-VPC. In EC2-Classic, you can't change the security groups assigned to an instance after it's launched, but in EC2-VPC you can change the security groups assigned to an instance after it's launched. In addition, you can't use the security groups that you've created for use with EC2-Classic with instances in your VPC; you must create security groups specifically for use with instances in your VPC. The rules you create for use with a security group for a VPC can't reference a security group for EC2-Classic, and vice versa.

Amazon Route 53 Security

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service that answers DNS queries, translating domain names into IP addresses so computers can communicate with each other. Route 53 can be used to connect user requests to infrastructure running in AWS, such as an Amazon EC2 instance or an Amazon S3 bucket, or to infrastructure outside of AWS.

Amazon Route 53 lets you manage the IP addresses (records) listed for your domain names, and it answers requests (queries) to translate specific domain names into their corresponding IP addresses. Queries for your domain are automatically routed to a nearby DNS server using anycast in order to provide the lowest latency possible. Route 53 makes it possible for you to manage traffic globally through a variety of routing types, including Latency Based Routing (LBR), Geo DNS, and Weighted Round Robin (WRR), all of which can be combined with DNS Failover in order to help create a variety of low-latency, fault-tolerant architectures. The failover algorithms implemented by Amazon Route 53 are designed not only to route traffic to endpoints that are healthy, but also to help avoid making disaster scenarios worse due to misconfigured health checks and applications, endpoint overloads, and partition failures.

Route 53 also offers Domain Name Registration: you can purchase and manage domain names such as example.com, and Route 53 will automatically configure default DNS settings for your domains. You can buy, manage, and transfer (both in and out) domains from a wide selection of generic and country-specific top-level domains (TLDs). During the registration process you have the option to enable privacy protection for your domain. This option will hide most of your personal information from the public Whois database in order to help thwart scraping and spamming.

Amazon Route 53 is built using AWS's highly available and reliable infrastructure. The distributed nature of the AWS DNS servers helps ensure a consistent ability to route your end users to your application. Route 53 also helps ensure the availability of your website by providing health checks and DNS failover capabilities. You can easily configure Route 53 to check the health of your website on a regular basis (even secure websites that are available only over SSL) and to switch to a backup site if the primary one is unresponsive.

Like all AWS services, Amazon Route 53 requires that every request made to its control API be authenticated, so only authenticated users can access and manage Route 53. API requests are signed with an HMAC-SHA1 or HMAC-SHA256 signature calculated from the request and the user's AWS Secret Access Key. Additionally, the Amazon Route 53 control API is only accessible via SSL-encrypted endpoints. It supports both IPv4 and IPv6 routing. You can control access to Amazon Route 53 DNS management functions by creating users under your AWS account using AWS IAM and controlling which Route 53 operations these users have permission to perform.
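A hypothetical example of such scoping is sketched below with boto3: the user name, policy name, and hosted-zone ID are placeholders, and the policy allows record changes only in a single hosted zone while permitting read-only listing elsewhere.

import json
import boto3

iam = boto3.client("iam")

# Illustrative policy: change records only in one hosted zone, read-only otherwise.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["route53:ChangeResourceRecordSets", "route53:ListResourceRecordSets"],
            "Resource": "arn:aws:route53:::hostedzone/Z1EXAMPLE",
        },
        {
            "Effect": "Allow",
            "Action": ["route53:GetHostedZone", "route53:ListHostedZones"],
            "Resource": "*",
        },
    ],
}

iam.put_user_policy(UserName="dns-operator",
                    PolicyName="route53-single-zone",
                    PolicyDocument=json.dumps(policy))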
Amazon CloudFront Security

Amazon CloudFront gives customers an easy way to distribute content to end users with low latency and high data transfer speeds. It delivers dynamic, static, and streaming content using a global network of edge locations. Requests for customers' objects are automatically routed to the nearest edge location, so content is delivered with the best possible performance. Amazon CloudFront is optimized to work with other AWS services, like Amazon S3, Amazon EC2, Elastic Load Balancing, and Amazon Route 53. It also works seamlessly with any non-AWS origin server that stores the original, definitive versions of your files.

Amazon CloudFront requires every request made to its control API be authenticated, so only authorized users can create, modify, or delete their own Amazon CloudFront distributions. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. Additionally, the Amazon CloudFront control API is only accessible via SSL-enabled endpoints.

There is no guarantee of durability of data held in Amazon CloudFront edge locations. The service may from time to time remove objects from edge locations if those objects are not requested frequently. Durability is provided by Amazon S3, which works as the origin server for Amazon CloudFront, holding the original, definitive copies of objects delivered by Amazon CloudFront.

If you want control over who is able to download content from Amazon CloudFront, you can enable the service's private content feature. This feature has two components: the first controls how content is delivered from the Amazon CloudFront edge location to viewers on the Internet; the second controls how the Amazon CloudFront edge locations access objects in Amazon S3. CloudFront also supports Geo Restriction, which restricts access to your content based on the geographic location of your viewers.

To control access to the original copies of your objects in Amazon S3, Amazon CloudFront allows you to create one or more "Origin Access Identities" and associate these with your distributions. When an Origin Access Identity is associated with an Amazon CloudFront distribution, the distribution will use that identity to retrieve objects from Amazon S3. You can then use Amazon S3's ACL feature, which limits access to that Origin Access Identity, so the original copy of the object is not publicly readable.

To control who is able to download objects from Amazon CloudFront edge locations, the service uses a signed URL verification system. To use this system, you first create a public-private key pair and upload the public key to your account via the AWS Management Console. Second, you configure your Amazon CloudFront distribution to indicate which accounts you would authorize to sign requests; you can indicate up to five AWS accounts you trust to sign requests. Third, as you receive requests you will create policy documents indicating the conditions under which you want Amazon CloudFront to serve your content. These policy documents can specify the name of the object that is requested, the date and time of the request, and the source IP (or CIDR range) of the client making the request. You then calculate the SHA-1 hash of your policy document and sign this using your private key. Finally, you include both the encoded policy document and the signature as query string parameters when you reference your objects. When Amazon CloudFront receives a request, it will decode the signature using your public key. Amazon CloudFront only serves requests that have a valid policy document and matching signature.
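The SDKs wrap these signing steps. The sketch below uses botocore's CloudFrontSigner together with the cryptography package to produce a signed URL; the key-pair ID, key file, and distribution domain are placeholders, and a canned policy (expiration only) is used rather than a custom policy document.

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Sign the policy with the private half of the CloudFront key pair (RSA with SHA-1).
    with open("cloudfront-private-key.pem", "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)

# Canned policy: the URL is valid until the expiration time and for this object only.
signed_url = signer.generate_presigned_url(
    "https://dxxxxx.cloudfront.net/image.jpg",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(signed_url)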
Note: Private content is an optional feature that must be enabled when you set up your CloudFront distribution. Content delivered without this feature enabled will be publicly readable.

Amazon CloudFront provides the option to transfer content over an encrypted connection (HTTPS). By default, CloudFront accepts requests over both HTTP and HTTPS protocols. However, you can also configure CloudFront to require HTTPS for all requests, or have CloudFront redirect HTTP requests to HTTPS. You can even configure CloudFront distributions to allow HTTP for some objects but require HTTPS for other objects.

Figure 6: Amazon CloudFront encrypted transmission

You can configure one or more CloudFront origins to require that CloudFront fetch objects from your origin using the protocol that the viewer used to request the objects. For example, when you use this CloudFront setting and the viewer uses HTTPS to request an object from CloudFront, CloudFront also uses HTTPS to forward the request to your origin. Amazon CloudFront uses the SSLv3 or TLSv1 protocols and a selection of cipher suites that includes the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol on connections to both viewers and the origin. ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This helps prevent the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised.

Note: If you're using your own server as your origin, and you want to use HTTPS both between viewers and CloudFront and between CloudFront and your origin, you must install a valid SSL certificate on the HTTP server that is signed by a third-party certificate authority, for example VeriSign or DigiCert.

By default, you can deliver content to viewers over HTTPS by using your CloudFront distribution domain name in your URLs; for example, https://dxxxxx.cloudfront.net/image.jpg. If you want to deliver your content over HTTPS using your own domain name and your own SSL certificate, you can use SNI Custom SSL or Dedicated IP Custom SSL. With Server Name Indication (SNI) Custom SSL, CloudFront relies on the SNI extension of the TLS protocol, which is supported by most modern web browsers. However, some users may not be able to access your content because some older browsers do not support SNI. (For a list of supported browsers, visit CloudFront FAQs.) With Dedicated IP Custom SSL, CloudFront dedicates IP addresses to your SSL certificate at each CloudFront edge location, so that CloudFront can associate the incoming requests with the proper SSL certificate.

Amazon CloudFront access logs contain a comprehensive set of information about requests for content, including the object requested, the date and time of the request, the edge location serving the request, the client IP address, the referrer, and the user agent. To enable access logs, just specify the name of the Amazon S3 bucket to store the logs in when you configure your Amazon CloudFront distribution.

AWS Direct Connect Security

With AWS Direct Connect, you can provision a direct link between your internal network and an AWS Region using a high-throughput, dedicated connection. Doing this may help reduce your network costs, improve throughput, or provide a more consistent network experience. With this dedicated connection in place, you can then create virtual interfaces directly to the AWS Cloud (for example, to Amazon EC2 and Amazon S3) and Amazon VPC. With Direct Connect, you bypass Internet service providers in your network path. You can procure rack space within the facility housing the AWS Direct Connect location and deploy your equipment nearby. Once deployed, you can connect this equipment to AWS Direct Connect using a cross connect. Each AWS Direct Connect location enables connectivity to the geographically nearest AWS Region, as well as access to other US Regions. For example, you can provision a single connection to any AWS Direct Connect location in the US and use it to access public AWS services in all US Regions and AWS GovCloud (US).

Using industry-standard 802.1q VLANs, the dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within an Amazon VPC using private IP space, while maintaining network separation between the public and private environments.

AWS Direct Connect requires the use of the Border Gateway Protocol (BGP) with an Autonomous System Number (ASN). To create a virtual interface, you use an MD5 cryptographic key for message authorization. MD5 creates a keyed hash using your secret key. You can have AWS automatically generate a BGP MD5 key, or you can provide your own.

Storage Services

Amazon Web Services provides low-cost data storage with high durability and availability. AWS offers storage choices for backup, archiving, and disaster recovery, as well as block and object storage.

Amazon Simple Storage Service (Amazon S3) Security

Amazon Simple Storage Service (Amazon S3) allows you to upload and retrieve data at any time, from anywhere on the web. Amazon S3 stores data as objects within buckets. An object can be any kind of file: a text file, a photo, a video, and so on. When you add a file to Amazon S3, you have the option of including metadata with the file and setting permissions to control access to the file. For each bucket, you can control access to the bucket (who can create, delete, and list objects in the bucket), view access logs for the bucket and its objects, and choose the geographical region where Amazon S3 will store the bucket and its contents.

Data Access

Access to data stored in Amazon S3 is restricted by default; only bucket and object owners have access to the Amazon S3 resources they create (note that a bucket/object owner is the AWS account owner, not the user who created the bucket/object). There are multiple ways to control access to buckets and objects:

• Identity and Access Management (IAM) Policies. AWS IAM enables organizations with many employees to create and manage multiple users under a single AWS account. IAM policies are attached to the users, enabling centralized control of permissions for users under your AWS account to access buckets or objects. With IAM policies, you can only grant users within your own AWS account permission to access your Amazon S3 resources.
• Access Control Lists (ACLs). Within Amazon S3, you can use ACLs to give read or write access on buckets or objects to groups of users. With ACLs, you can only grant other AWS accounts (not specific users) access to your Amazon S3 resources.

• Bucket Policies. Bucket policies in Amazon S3 can be used to add or deny permissions across some or all of the objects within a single bucket. Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions. With bucket policies, you can grant users within your AWS account or other AWS accounts access to your Amazon S3 resources.

Table 3: Types of access control

Type of Access Control | AWS Account-Level Control | User-Level Control
IAM Policies           | No                        | Yes
ACLs                   | Yes                       | No
Bucket Policies        | Yes                       | Yes

You can further restrict access to specific resources based on certain conditions. For example, you can restrict access based on request time (Date Condition), whether the request was sent using SSL (Boolean Conditions), a requester's IP address (IP Address Condition), or based on the requester's client application (String Conditions). To identify these conditions, you use policy keys. For more information about action-specific policy keys available within Amazon S3, see the Amazon Simple Storage Service Developer Guide.

Amazon S3 also gives developers the option to use query string authentication, which allows them to share Amazon S3 objects through URLs that are valid for a predefined period of time. Query string authentication is useful for giving HTTP or browser access to resources that would normally require authentication. The signature in the query string secures the request.

Data Transfer

For maximum security, you can securely upload/download data to Amazon S3 via the SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2, so that data is transferred securely both within AWS and to and from sources outside of AWS.

Data Storage

Amazon S3 provides multiple options for protecting data at rest. Customers who prefer to manage their own encryption can use a client encryption library like the Amazon S3 Encryption Client to encrypt data before uploading to Amazon S3. Alternatively, you can use Amazon S3 Server Side Encryption (SSE) if you prefer to have Amazon S3 manage the encryption process for you. Data is encrypted with a key generated by AWS or with a key you supply, depending on your requirements. With Amazon S3 SSE, you can encrypt data on upload simply by adding an additional request header when writing the object. Decryption happens automatically when data is retrieved.

Note: Metadata, which you can include with your object, is not encrypted. Therefore, AWS recommends that customers not place sensitive information in Amazon S3 metadata.

Amazon S3 SSE uses one of the strongest block ciphers available: 256-bit Advanced Encryption Standard (AES-256). With Amazon S3 SSE, every protected object is encrypted with a unique encryption key. This object key itself is then encrypted with a regularly rotated master key. Amazon S3 SSE provides additional security by storing the encrypted data and encryption keys in different hosts. Amazon S3 SSE also makes it possible for you to enforce encryption requirements. For example, you can create and apply bucket policies that require that only encrypted data can be uploaded to your buckets.
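As a sketch of both halves of that approach, the boto3 snippet below uploads an object with the SSE request header and attaches a bucket policy that rejects unencrypted PUTs; the bucket name and object key are placeholders.

import json
import boto3

s3 = boto3.client("s3")
bucket = "example-secure-bucket"  # placeholder

# Ask S3 to encrypt the object at rest (AES-256 SSE) via the request header.
s3.put_object(Bucket=bucket, Key="reports/2020-07.csv",
              Body=b"...", ServerSideEncryption="AES256")

# Bucket policy that denies any PUT that does not request server-side encryption.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::" + bucket + "/*",
        "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))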
For long-term storage, you can automatically archive the contents of your Amazon S3 buckets to AWS's archival service, Amazon S3 Glacier. You can have data transferred at specific intervals to Amazon S3 Glacier by creating lifecycle rules in Amazon S3 that describe which objects you want to be archived to Amazon S3 Glacier, and when. As part of your data management strategy, you can also specify how long Amazon S3 should wait after the objects are put into Amazon S3 to delete them.

When an object is deleted from Amazon S3, removal of the mapping from the public name to the object starts immediately and is generally processed across the distributed system within several seconds. Once the mapping is removed, there is no remote access to the deleted object. The underlying storage area is then reclaimed for use by the system.

Data Durability and Reliability

Amazon S3 is designed to provide 99.999999999% durability and 99.99% availability of objects over a given year. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 region. To help provide durability, Amazon S3 PUT and COPY operations synchronously store customer data across multiple facilities before returning SUCCESS. Once stored, Amazon S3 helps maintain the durability of the objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of data stored using checksums. If corruption is detected, it is repaired using redundant data. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.

Amazon S3 provides further protection via Versioning. You can use Versioning to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. With Versioning, you can easily recover from both unintended user actions and application failures. By default, requests will retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. You can further protect versions using Amazon S3 Versioning's MFA Delete feature. Once enabled for an Amazon S3 bucket, each version deletion request must include the six-digit code and serial number from your multi-factor authentication device.

Access Logs

An Amazon S3 bucket can be configured to log access to the bucket and objects within it. The access log contains details about each access request, including request type, the requested resource, the requestor's IP, and the time and date of the request. When logging is enabled for a bucket, log records are periodically aggregated into log files and delivered to the specified Amazon S3 bucket.

Cross-Origin Resource Sharing (CORS)

AWS customers who use Amazon S3 to host static web pages or store objects used by other web pages can load content securely by configuring an Amazon S3 bucket to explicitly enable cross-origin requests. Modern browsers use the Same Origin policy to block JavaScript or HTML5 from allowing requests to load content from another site or domain, as a way to help ensure that malicious content is not loaded from a less reputable source (such as during cross-site scripting attacks). With the Cross-Origin Resource Sharing (CORS) policy enabled, assets such as web fonts and images stored in an Amazon S3 bucket can be safely referenced by external web pages, style sheets, and HTML5 applications.

Amazon S3 Glacier Security

Like Amazon S3, the Amazon S3 Glacier service provides low-cost, secure, and durable storage. But where Amazon S3 is designed for rapid retrieval, Amazon S3 Glacier is meant to be used as an archival service for data that is not accessed often, and for which retrieval times of several hours are suitable. Amazon S3 Glacier stores files as archives within vaults. Archives can be any data, such as a photo, video, or document, and can contain one or several files. You can store an unlimited number of archives in a single vault and can create up to 1,000 vaults per region. Each archive can contain up to 40 TB of data.

Data Upload

To transfer data into Amazon S3 Glacier vaults, you can upload an archive in a single upload operation or a multipart operation. In a single upload operation, you can upload archives up to 4 GB in size. However, customers can achieve better results using the Multipart Upload API to upload archives greater than 100 MB. Using the Multipart Upload API allows you to upload large archives, up to about 40,000 GB. The Multipart Upload API call is designed to improve the upload experience for larger archives; it enables the parts to be uploaded independently, in any order, and in parallel. If a multipart upload fails, you only need to upload the failed part again, not the entire archive.

When you upload data to Amazon S3 Glacier, you must compute and supply a tree hash. Amazon S3 Glacier checks the hash against the data to help ensure that it has not been altered en route. A tree hash is generated by computing a hash for each megabyte-sized segment of the data, and then combining the hashes in tree fashion to represent ever-growing adjacent segments of the data.

As an alternative to using the Multipart Upload feature, customers with very large uploads to Amazon S3 Glacier may consider using the AWS Snowball service instead to transfer the data. AWS Snowball facilitates moving large amounts of data into AWS using portable storage devices for transport. AWS transfers your data directly off of storage devices using Amazon's high-speed internal network, bypassing the Internet.

You can also set up Amazon S3 to transfer data at specific intervals to Amazon S3 Glacier. You can create lifecycle rules in Amazon S3 that describe which objects you want to be archived to Amazon S3 Glacier, and when. You can also specify how long Amazon S3 should wait after the objects are put into Amazon S3 to delete them.

To achieve even greater security, you can securely upload/download data to Amazon S3 Glacier via the SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2, so that data is transferred securely both within AWS and to and from sources outside of AWS.

Data Retrieval

Retrieving archives from Amazon S3 Glacier requires the initiation of a retrieval job, which is generally completed in 3 to 5 hours. You can then access the data via HTTP GET requests. The data will remain available to you for 24 hours. You can retrieve an entire archive or several files from an archive. If you want to retrieve only a subset of an archive, you can use one retrieval request to specify the range of the archive that contains the files you are interested in, or you can initiate multiple retrieval requests, each with a range for one or more files. You can also limit the number of vault inventory items retrieved by filtering on an archive creation date range or by setting a maximum items limit.
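A retrieval job can be started and polled with boto3's Glacier client, as sketched below; the vault name, archive ID, and byte range are placeholders, and SNS completion notifications are omitted.

import boto3

glacier = boto3.client("glacier")

# Start an archive-retrieval job for one megabyte-aligned slice of an archive.
job = glacier.initiate_job(
    accountId="-",                      # "-" refers to the account owning the credentials
    vaultName="example-vault",          # placeholder vault
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE-ARCHIVE-ID",
        "RetrievalByteRange": "0-1048575",
    },
)

# Poll later (jobs typically complete in hours), then download the prepared bytes.
status = glacier.describe_job(accountId="-", vaultName="example-vault", jobId=job["jobId"])
if status["Completed"]:
    output = glacier.get_job_output(accountId="-", vaultName="example-vault", jobId=job["jobId"])
    data = output["body"].read()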
Whichever method you choose, when you retrieve portions of your archive you can use the supplied checksum to help ensure the integrity of the files, provided that the range that is retrieved is aligned with the tree hash of the overall archive.

Data Storage

Amazon S3 Glacier automatically encrypts the data using AES-256 and stores it durably in an immutable form. Amazon S3 Glacier is designed to provide average annual durability of 99.999999999% for an archive. It stores each archive in multiple facilities and on multiple devices. Unlike traditional systems, which can require laborious data verification and manual repair, Amazon S3 Glacier performs regular, systematic data integrity checks and is built to be automatically self-healing.

When an object is deleted from Amazon S3 Glacier, removal of the mapping from the public name to the object starts immediately and is generally processed across the distributed system within several seconds. Once the mapping is removed, there is no remote access to the deleted object. The underlying storage area is then reclaimed for use by the system.

Data Access

Only your account can access your data in Amazon S3 Glacier. To control access to your data in Amazon S3 Glacier, you can use AWS IAM to specify which users within your account have rights to operations on a given vault.

AWS Storage Gateway Security

The AWS Storage Gateway service connects your on-premises software appliance with cloud-based storage to provide seamless and secure integration between your IT environment and the AWS storage infrastructure. The service enables you to securely upload data to AWS's scalable, reliable, and secure Amazon S3 storage service for cost-effective backup and rapid disaster recovery.

AWS Storage Gateway transparently backs up data off-site to Amazon S3 in the form of Amazon EBS snapshots. Amazon S3 redundantly stores these snapshots on multiple devices across multiple facilities, detecting and repairing any lost redundancy. The Amazon EBS snapshot provides a point-in-time backup that can be restored on-premises or used to instantiate new Amazon EBS volumes. Data is stored within a single region that you specify.

AWS Storage Gateway offers three options:

• Gateway-Stored Volumes (where the cloud is backup). In this option, your volume data is stored locally and then pushed to Amazon S3, where it is stored in redundant, encrypted form and made available in the form of Amazon Elastic Block Storage (Amazon EBS) snapshots. When you use this model, the on-premises storage is primary, delivering low-latency access to your entire dataset, and the cloud storage is the backup.

• Gateway-Cached Volumes (where the cloud is primary). In this option, your volume data is stored encrypted in Amazon S3, visible within your enterprise's network via an iSCSI interface. Recently accessed data is cached on-premises for low-latency local access. When you use this model, the cloud storage is primary, but you get low-latency access to your active working set in the cached volumes on-premises.

• Gateway-Virtual Tape Library (VTL). In this option, you can configure a Gateway-VTL with up to 10 virtual tape drives per gateway, 1 media changer, and up to 1,500 virtual tape cartridges. Each virtual tape drive responds to the SCSI command set, so your existing on-premises backup applications (either disk-to-tape or disk-to-disk-to-tape) will work without modification.

No matter which option you choose, data is asynchronously transferred from your on-premises storage hardware to AWS over SSL. The data is stored encrypted in Amazon S3 using Advanced Encryption Standard (AES) 256, a symmetric-key encryption standard using 256-bit encryption keys. The AWS Storage Gateway only uploads data that has changed, minimizing the amount of data sent over the Internet.

The AWS Storage Gateway runs as a virtual machine (VM) that you deploy on a host in your data center running VMware ESXi Hypervisor v4.1 or v5 or Microsoft Hyper-V (you download the VMware software during the setup process). You can also run within EC2 using a gateway AMI. During the installation and configuration process, you can create up to 12 stored volumes, 20 cached volumes, or 1,500 virtual tape cartridges per gateway. Once installed, each gateway will automatically download, install, and deploy updates and patches. This activity takes place during a maintenance window that you can set on a per-gateway basis.

The iSCSI protocol supports authentication between targets and initiators via CHAP (Challenge-Handshake Authentication Protocol). CHAP provides protection against man-in-the-middle and playback attacks by periodically verifying the identity of an iSCSI initiator as authenticated to access a storage volume target. To set up CHAP, you must configure it in both the AWS Storage Gateway console and in the iSCSI initiator software you use to connect to the target.

After you deploy the AWS Storage Gateway VM, you must activate the gateway using the AWS Storage Gateway console. The activation process associates your gateway with your AWS account. Once you establish this connection, you can manage almost all aspects of your gateway from the console. In the activation process, you specify the IP address of your gateway, name your gateway, identify the AWS Region in which you want your snapshot backups stored, and specify the gateway time zone.

AWS Snowball Security

AWS Snowball is a simple, secure method for physically transferring large amounts of data to Amazon S3, EBS, or Amazon S3 Glacier storage. This service is typically used by customers who have over 100 GB of data and/or slow connection speeds that would result in very slow transfer rates over the Internet. With AWS Snowball, you prepare a portable storage device that you ship to a secure AWS facility. AWS transfers the data directly off of the storage device using Amazon's high-speed internal network, thus bypassing the Internet. Conversely, data can also be exported from AWS to a portable storage device.

Like all other AWS services, the AWS Snowball service requires that you securely identify and authenticate your storage device. In this case, you will submit a job request to AWS that includes your Amazon S3 bucket, Amazon EBS region, AWS Access Key ID, and return shipping address. You then receive a unique identifier for the job, a digital signature for authenticating your device, and an AWS address to ship the storage device to. For Amazon S3, you place the signature file on the root directory of your device. For Amazon EBS, you tape the signature barcode to the exterior of the device. The signature file is used only for authentication and is not uploaded to Amazon S3 or EBS.

For transfers to Amazon S3, you specify the specific buckets to which the data should be loaded and ensure that the account doing the loading has write permission for the buckets. You should also specify the access control list to be applied to each object loaded to Amazon S3.

For transfers to EBS, you specify the target region for the EBS import operation. If the storage device is less than or equal to the maximum volume size of 1 TB, its contents are loaded directly into an Amazon EBS snapshot. If the storage device's capacity exceeds 1 TB, a device image is stored within the specified S3 log bucket. You can then create a RAID of Amazon EBS volumes using software such as Logical Volume Manager and copy the image from S3 to this new volume.

For added protection, you can encrypt the data on your device before you ship it to AWS. For Amazon S3 data, you can use a PIN-code device with hardware encryption or TrueCrypt software to encrypt your data before sending it to AWS. For EBS and Amazon S3 Glacier data, you can use any encryption method you choose, including a PIN-code device. AWS will decrypt your Amazon S3 data before importing, using the PIN code and/or TrueCrypt password you supply in your import manifest. AWS uses your PIN to access a PIN-code device, but does not decrypt software-encrypted data for import to Amazon EBS or Amazon S3 Glacier. The following table summarizes your encryption options for each type of import/export job.

Table 4: Encryption options for import/export jobs

Import to Amazon S3
• Source: Files on a device file system. Encrypt the data using a PIN-code device and/or TrueCrypt before shipping the device.
• Target: Objects in an existing Amazon S3 bucket. AWS decrypts the data before performing the import.
• Result: One object for each file. AWS erases your device after every import job prior to shipping.

Export from Amazon S3
• Source: Objects in one or more Amazon S3 buckets. Provide a PIN code and/or password that AWS will use to encrypt your data.
• Target: Files on your storage device. AWS formats your device and copies your data to an encrypted file container on your device.
• Result: One file for each object. AWS encrypts your data prior to shipping; use a PIN-code device and/or TrueCrypt to decrypt the files.

Import to Amazon S3 Glacier
• Source: Entire device. Encrypt the data using the encryption method of your choice before shipping.
• Target: One archive in an existing Amazon S3 Glacier vault. AWS does not decrypt your device.
• Result: Device image stored as a single archive. AWS erases your device after every import job prior to shipping.

Import to Amazon EBS (device capacity < 1 TB)
• Source: Entire device. Encrypt the data using the encryption method of your choice before shipping.
• Target: One Amazon EBS snapshot. AWS does not decrypt your device.
• Result: Device image is stored as a single snapshot. If the device was encrypted, the image is encrypted. AWS erases your device after every import job prior to shipping.

Import to Amazon EBS (device capacity > 1 TB)
• Source: Entire device. Encrypt the data using the encryption method of your choice before shipping.
• Target: Multiple objects in an existing Amazon S3 bucket. AWS does not decrypt your device.
• Result: Device image chunked into a series of 1 TB snapshots stored as objects in the Amazon S3 bucket specified in the manifest file. If the device was encrypted, the image is encrypted. AWS erases your device after every import job prior to shipping.
After the import is complete, AWS Snowball will erase the contents of your storage device to safeguard the data during return shipment. AWS overwrites all writable blocks on the storage device with zeroes. You will need to repartition and format the device after the wipe. If AWS is unable to erase the data on the device, it will be scheduled for destruction and our support team will contact you using the email address specified in the manifest file you ship with the device.

When shipping a device internationally, the customs option and certain required subfields are required in the manifest file sent to AWS. AWS Snowball uses these values to validate the inbound shipment and prepare the outbound customs paperwork. Two of these options are whether the data on the device is encrypted or not, and the encryption software's classification. When shipping encrypted data to or from the United States, the encryption software must be classified as 5D992 under the United States Export Administration Regulations.

Amazon Elastic File System Security

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files. Amazon EFS file systems are distributed across an unconstrained number of storage servers, enabling file systems to grow elastically to petabyte scale and allowing massively parallel access from Amazon EC2 instances to your data.

Data Access

With Amazon EFS, you can create a file system, mount the file system on an Amazon EC2 instance, and then read and write data to and from your file system. You can mount an Amazon EFS file system on EC2 instances in your VPC through the Network File System versions 4.0 and 4.1 (NFSv4) protocol.

To access your Amazon EFS file system in a VPC, you create one or more mount targets in the VPC. A mount target provides an IP address for an NFSv4 endpoint. You can then mount an Amazon EFS file system to this endpoint using its DNS name, which will resolve to the IP address of the EFS mount target in the same Availability Zone as your EC2 instance. You can create one mount target in each Availability Zone in a region. If there are multiple subnets in an Availability Zone in your VPC, you create a mount target in one of the subnets, and all EC2 instances in that Availability Zone share that mount target. You can also mount an EFS file system on a host in an on-premises datacenter using AWS Direct Connect.

When using Amazon EFS, you specify Amazon EC2 security groups for your EC2 instances and security groups for the EFS mount targets associated with the file system. Security groups act as a firewall, and the rules you add define the traffic flow. You can authorize inbound/outbound access to your EFS file system by adding rules that allow your EC2 instance to connect to your Amazon EFS file system via the mount target using the NFS port.

After mounting the file system via the mount target, you use it like any other POSIX-compliant file system. Files and directories in an EFS file system support standard Unix-style read/write/execute permissions based on the user and group ID asserted by the mounting NFSv4.1 client. For information about NFS-level permissions and related considerations, see Working with Users, Groups, and Permissions at the Network File System (NFS) Level.
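A minimal boto3 sketch of that setup follows, assuming an existing subnet and a security group that already allows inbound NFS (TCP 2049) from your instances; the IDs are placeholders.

import boto3

efs = boto3.client("efs")

# Create the file system (the creation token makes the call idempotent).
fs = efs.create_file_system(CreationToken="app-shared-storage")

# One mount target per Availability Zone, protected by an NFS security group.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",      # placeholder subnet in the target AZ
    SecurityGroups=["sg-0abc1234def567890"],  # allows TCP 2049 from the EC2 instances
)

# On the instance, the file system is then mounted over NFSv4.1, for example:
#   sudo mount -t nfs4 -o nfsvers=4.1 <file-system-dns-name>:/ /mnt/efs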
All Amazon EFS file systems are owned by an AWS account. You can use IAM policies to grant permissions to other users so that they can perform administrative operations on your file systems, including deleting a file system or modifying a mount target's security groups. For more information about EFS permissions, see Overview of Managing Access Permissions to Your Amazon EFS Resources.

Data Durability and Reliability

Amazon EFS is designed to be highly durable and highly available. All data and metadata is stored across multiple Availability Zones, and all service components are designed to be highly available. EFS provides strong consistency by synchronously replicating data across Availability Zones, with read-after-write semantics for most file operations. Amazon EFS incorporates checksums for all metadata and data throughout the service. Using a file system checking process (FSCK), EFS continuously validates a file system's metadata and data integrity.

Data Sanitization

Amazon EFS is designed so that when you delete data from a file system, that data will never be served again. If your procedures require that all data be wiped via a specific method, such as those detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization"), we recommend that you conduct a specialized wipe procedure prior to deleting the file system.

Database Services

Amazon Web Services provides a number of database solutions for developers and businesses, from managed relational and NoSQL database services to in-memory caching as a service and a petabyte-scale data warehouse service.

Amazon DynamoDB Security

Amazon DynamoDB is a managed NoSQL database service that provides fast and predictable performance with seamless scalability. Amazon DynamoDB enables you to offload the administrative burdens of operating and scaling distributed databases to AWS, so you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

You can create a database table that can store and retrieve any amount of data and serve any level of request traffic. DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity you specified and the amount of data stored, while maintaining consistent, fast performance. All data items are stored on Solid State Drives (SSDs) and are automatically replicated across multiple Availability Zones in a region to provide built-in high availability and data durability.

You can set up automatic backups using a special template in AWS Data Pipeline that was created just for copying DynamoDB tables. You can choose full or incremental backups to a table in the same region or a different region. You can use the copy for disaster recovery (DR) in the event that an error in your code damages the original table, or to federate DynamoDB data across regions to support a multi-region application.

To control who can use the DynamoDB resources and API, you set up permissions in AWS IAM. In addition to controlling access at the resource level with IAM, you can also control access at the database level: you can create database-level permissions that allow or deny access to items (rows) and attributes (columns) based on the needs of your application. These database-level permissions are called fine-grained access controls, and you create them using an IAM policy that specifies under what circumstances a user or application can access a DynamoDB table. The IAM policy can restrict access to individual items in a table, access to the attributes in those items, or both at the same time.
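The sketch below shows what such a fine-grained policy can look like when attached with boto3; the table ARN, user name, and attribute names are hypothetical, and the dynamodb:LeadingKeys / dynamodb:Attributes condition keys are used to scope rows and columns.

import json
import boto3

iam = boto3.client("iam")

# Illustrative policy: the user may read only items whose partition key equals
# their own IAM user name, and only two attributes of those items.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/GameScores",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${aws:username}"],
                "dynamodb:Attributes": ["UserId", "TopScore"],
            },
            "StringEquals": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
        },
    }],
}

iam.put_user_policy(UserName="game-player",
                    PolicyName="dynamodb-row-and-column-access",
                    PolicyDocument=json.dumps(policy))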
Figure 7: Database-level permissions

You can optionally use web identity federation to control access by application users who are authenticated by Login with Amazon, Facebook, or Google. Web identity federation removes the need for creating individual IAM users; instead, users can sign in to an identity provider and then obtain temporary security credentials from AWS Security Token Service (AWS STS). AWS STS returns temporary AWS credentials to the application and allows it to access the specific DynamoDB table.

In addition to requiring database and user permissions, each request to the DynamoDB service must contain a valid HMAC-SHA256 signature, or the request is rejected. The AWS SDKs automatically sign your requests; however, if you want to write your own HTTP POST requests, you must provide the signature in the header of your request to Amazon DynamoDB. To calculate the signature, you must request temporary security credentials from the AWS Security Token Service. Use the temporary security credentials to sign your requests to Amazon DynamoDB. Amazon DynamoDB is accessible via TLS/SSL-encrypted endpoints.

Amazon Relational Database Service (Amazon RDS) Security

Amazon RDS allows you to quickly create a relational database (DB) instance and flexibly scale the associated compute resources and storage capacity to meet application demand. Amazon RDS manages the database instance on your behalf by performing backups, handling failover, and maintaining the database software. Currently, Amazon RDS is available for MySQL, Oracle, Microsoft SQL Server, and PostgreSQL database engines.

Amazon RDS has multiple features that enhance reliability for critical production databases, including DB security groups, permissions, SSL connections, automated backups, DB snapshots, and Multi-AZ deployments. DB instances can also be deployed in an Amazon VPC for additional network isolation.

Access Control

When you first create a DB Instance within Amazon RDS, you will create a master user account, which is used only within the context of Amazon RDS to control access to your DB Instance(s). The master user account is a native database user account that allows you to log on to your DB Instance with all database privileges. You can specify the master user name and password you want associated with each DB Instance when you create the DB Instance. Once you have created your DB Instance, you can connect to the database using the master user credentials. Subsequently, you can create additional user accounts so that you can restrict who can access your DB Instance.

You can control Amazon RDS DB Instance access via DB Security Groups, which are similar to Amazon EC2 Security Groups but not interchangeable. DB Security Groups act like a firewall controlling network access to your DB Instance. Database Security Groups default to a "deny all" access mode, and customers must specifically authorize network ingress. There are two ways of doing this: authorizing a network IP range, or authorizing an existing Amazon EC2 Security Group. DB Security Groups only allow access to the database server port (all others are blocked) and can be updated without restarting the Amazon RDS DB Instance, which allows a customer seamless control of their database access. Using AWS IAM, you can further control access to your RDS DB instances: AWS IAM enables you to control what RDS operations each individual AWS IAM user has permission to call.
Network Isolation

For additional network access control, you can run your DB Instances in an Amazon VPC. Amazon VPC enables you to isolate your DB Instances by specifying the IP range you wish to use, and to connect to your existing IT infrastructure through industry-standard encrypted IPsec VPN. Running Amazon RDS in a VPC enables you to have a DB instance within a private subnet. You can also set up a virtual private gateway that extends your corporate network into your VPC and allows access to the RDS DB instance in that VPC. Refer to the Amazon VPC User Guide for more details.

For Multi-AZ deployments, defining a subnet for all Availability Zones in a region will allow Amazon RDS to create a new standby in another Availability Zone should the need arise. You can create DB Subnet Groups, which are collections of subnets that you may want to designate for your RDS DB Instances in a VPC. Each DB Subnet Group should have at least one subnet for every Availability Zone in a given region. In this case, when you create a DB Instance in a VPC, you select a DB Subnet Group; Amazon RDS then uses that DB Subnet Group and your preferred Availability Zone to select a subnet and an IP address within that subnet. Amazon RDS creates and associates an Elastic Network Interface to your DB Instance with that IP address.

DB Instances deployed within an Amazon VPC can be accessed from the Internet or from Amazon EC2 Instances outside the VPC via VPN or bastion hosts that you can launch in your public subnet. To use a bastion host, you will need to set up a public subnet with an EC2 instance that acts as an SSH bastion. This public subnet must have an Internet gateway and routing rules that allow traffic to be directed via the SSH host, which must then forward requests to the private IP address of your Amazon RDS DB instance.

DB Security Groups can be used to help secure DB Instances within an Amazon VPC. In addition, network traffic entering and exiting each subnet can be allowed or denied via network ACLs. All network traffic entering or exiting your Amazon VPC via your IPsec VPN connection can be inspected by your on-premises security infrastructure, including network firewalls and intrusion detection systems.

Encryption

You can encrypt connections between your application and your DB Instance using SSL. For MySQL and SQL Server, RDS creates an SSL certificate and installs the certificate on the DB instance when the instance is provisioned. For MySQL, you launch the mysql client using the ssl_ca parameter to reference the public key in order to encrypt connections. For SQL Server, download the public key and import the certificate into your Windows operating system. Oracle RDS uses Oracle native network encryption with a DB instance; you simply add the native network encryption option to an option group and associate that option group with the DB instance. Once an encrypted connection is established, data transferred between the DB Instance and your application will be encrypted during transfer. You can also require your DB instance to only accept encrypted connections.

Amazon RDS supports Transparent Data Encryption (TDE) for SQL Server (SQL Server Enterprise Edition) and Oracle (part of the Oracle Advanced Security option available in Oracle Enterprise Edition). The TDE feature automatically encrypts data before it is written to storage and automatically decrypts data when it is read from storage.
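For example, an application using the mysql-connector-python driver (an assumption; any MySQL driver with CA-verification options works the same way) can pin the RDS certificate bundle when it connects. The endpoint, credentials, and bundle path below are placeholders.

import mysql.connector  # assumes the mysql-connector-python package is installed

conn = mysql.connector.connect(
    host="mydb.123456789012.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="app_user",
    password="********",
    database="appdb",
    ssl_ca="rds-combined-ca-bundle.pem",  # public RDS CA bundle downloaded from AWS
    ssl_verify_cert=True,                 # refuse to connect if the certificate does not validate
)

cursor = conn.cursor()
cursor.execute("SHOW STATUS LIKE 'Ssl_cipher'")  # a non-empty value confirms the session is encrypted
print(cursor.fetchone())
conn.close()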
decrypts data when it is read from storage Note: SSL support within Amazon RDS is for encrypting the connection between your application and your DB Instance; it should not be relied on for authenticating the DB Instance itself ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 59 While SSL offers security benefits be aware that SSL encryption is a compute intensive operation and will increase the latency of your database connection To learn how SSL works with SQL Server you can read more in the Amazon Relational Database Service User Guide Automated Backups and DB Snapshots Amazon RDS provi des two different methods for backing up and restoring your DB Instance(s): automated backups and database snapshots (DB Snapshots) Turned on by default the automated backup feature of Amazon RDS enables point in time recovery for your DB Instance Amazo n RDS will back up your database and transaction logs and store both for a user specified retention period This allows you to restore your DB Instance to any second during your retention period up to the last 5 minutes Your automatic backup retention pe riod can be configured to up to 35 days During the backup window storage I/O may be suspended while your data is being backed up This I/O suspension typically lasts a few minutes This I/O suspension is avoided with Multi AZ DB deployments since the ba ckup is taken from the standby DB Snapshots are user initiated backups of your DB Instance These full database backups are stored by Amazon RDS until you explicitly delete them You can copy DB snapshots of any size and move them between any of AWS’s pub lic regions or copy the same snapshot to multiple regions simultaneously You can then create a new DB Instance from a DB Snapshot whenever you desire DB Instance Replication Amazon cloud computing resources are housed in highly available data center fac ilities in different regions of the world and each region contains multiple distinct locations called Availability Zones Each Availability Zone is engineered to be isolated from failures in other Availability Zones and to provide inexpensive low latenc y network connectivity to other Availability Zones in the same region To architect for high availability of your Oracle PostgreSQL or MySQL databases you can run your RDS DB instance in several Availability Zones an option called a Multi AZ deployment When you select this option Amazon automatically provisions and maintains a synchronous standby replica of your DB instance in a different Availability Zone The primary DB instance is synchronously replicated across Availability Zones to the standby re plica In the event of DB instance or Availability Zone failure Amazon RDS will automatically failover to the standby so that database operations can resume quickly without administrative intervention ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 60 For customers who use MySQL and need to scale beyond the capacity constraints of a single DB Instance for read heavy database workloads Amazon RDS provides a Read Replica option Once you create a read replica database updates on the source DB instance are replicated to the read replica using MySQL’s nativ e asynchronous replication You can create multiple read replicas for a given source DB instance and distribute your application’s read traffic among them Read replicas can be created with Multi AZ deployments to gain read scaling benefits in addition to the enhanced database write availability and 
Automatic Software Patching

Amazon RDS will make sure that the relational database software powering your deployment stays up to date with the latest patches. When necessary, patches are applied during a maintenance window that you can control. You can think of the Amazon RDS maintenance window as an opportunity to control when DB Instance modifications (such as scaling the DB Instance class) and software patching occur, in the event either are requested or required. If a maintenance event is scheduled for a given week, it will be initiated and completed at some point during the 30-minute maintenance window you identify.

The only maintenance events that require Amazon RDS to take your DB Instance offline are scale-compute operations (which generally take only a few minutes from start to finish) or required software patching. Required patching is automatically scheduled only for patches that are security and durability related. Such patching occurs infrequently (typically once every few months) and should seldom require more than a fraction of your maintenance window.

If you do not specify a preferred weekly maintenance window when creating your DB Instance, a 30-minute default value is assigned. If you wish to modify when maintenance is performed on your behalf, you can do so by modifying your DB Instance in the AWS Management Console or by using the ModifyDBInstance API. Each of your DB Instances can have different preferred maintenance windows, if you so choose.

Running your DB Instance as a Multi-AZ deployment can further reduce the impact of a maintenance event, as Amazon RDS will conduct maintenance via the following steps: 1) perform maintenance on the standby, 2) promote the standby to primary, and 3) perform maintenance on the old primary, which becomes the new standby.

When an Amazon RDS DB Instance deletion API (DeleteDBInstance) is run, the DB Instance is marked for deletion. Once the instance no longer indicates 'deleting' status, it has been removed. At this point the instance is no longer accessible and, unless a final snapshot copy was asked for, it cannot be restored and will not be listed by any of the tools or APIs.

Event Notification

You can receive notifications of a variety of important events that can occur on your RDS instance, such as whether the instance was shut down, a backup was started, a failover occurred, the security group was changed, or your storage space is low. The Amazon RDS service groups events into categories that you can subscribe to so that you can be notified when an event in that category occurs. You can subscribe to an event category for a DB instance, DB snapshot, DB security group, or DB parameter group. RDS events are published via AWS SNS and sent to you as an email or text message. For more information about RDS notification event categories, refer to the Amazon Relational Database Service User Guide.

Amazon Redshift Security

Amazon Redshift is a petabyte-scale SQL data warehouse service that runs on highly optimized and managed AWS compute and storage resources. The service has been architected to not only scale up or down rapidly, but to significantly improve query speeds, even on extremely large datasets. To increase performance, Redshift uses techniques such as columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. It also has a massively parallel processing (MPP) architecture, parallelizing and distributing SQL operations to take advantage of all available resources.
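The ModifyDBInstance API mentioned above under Automatic Software Patching can be called from any SDK. The hedged boto3 sketch below moves a hypothetical instance's preferred maintenance window to early Sunday morning (times are expressed in UTC).

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Change the 30-minute weekly maintenance window (format ddd:hh24:mi-ddd:hh24:mi, UTC).
    rds.modify_db_instance(
        DBInstanceIdentifier="mydb-primary",              # hypothetical instance
        PreferredMaintenanceWindow="sun:04:00-sun:04:30",
        ApplyImmediately=False,                           # take effect at the next maintenance window
    )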
When you create a Redshift data warehouse, you provision a single-node or multi-node cluster, specifying the type and number of nodes that will make up the cluster. The node type determines the storage size, memory, and CPU of each node. Each multi-node cluster includes a leader node and two or more compute nodes. A leader node manages connections, parses queries, builds execution plans, and manages query execution in the compute nodes. The compute nodes store data, perform computations, and run queries as directed by the leader node. The leader node of each cluster is accessible through ODBC and JDBC endpoints, using standard PostgreSQL drivers. The compute nodes run on a separate, isolated network and are never accessed directly. After you provision a cluster, you can upload your dataset and perform data analysis queries by using common SQL-based tools and business intelligence applications.

Cluster Access

By default, clusters that you create are closed to everyone. Amazon Redshift enables you to configure firewall rules (security groups) to control network access to your data warehouse cluster. You can also run Redshift inside an Amazon VPC to isolate your data warehouse cluster in your own virtual network and connect it to your existing IT infrastructure using industry-standard encrypted IPsec VPN.

The AWS account that creates the cluster has full access to the cluster. Within your AWS account, you can use AWS IAM to create user accounts and manage permissions for those accounts. By using IAM, you can grant different users permission to perform only the cluster operations that are necessary for their work.

Like all databases, you must grant permission in Redshift at the database level in addition to granting access at the resource level. Database users are named user accounts that can connect to a database and are authenticated when they log in to Amazon Redshift. In Redshift, you grant database user permissions on a per-cluster basis instead of on a per-table basis. However, a user can see data only in the table rows that were generated by his own activities; rows generated by other users are not visible to him.

The user who creates a database object is its owner. By default, only a superuser or the owner of an object can query, modify, or grant permissions on the object. For users to use an object, you must grant the necessary permissions to the user or the group that contains the user. And only the owner of an object can modify or delete it.

Data Backups

Amazon Redshift distributes your data across all compute nodes in a cluster. When you run a cluster with at least two compute nodes, data on each node will always be mirrored on disks on another node, reducing the risk of data loss. In addition, all data written to a node in your cluster is continuously backed up to Amazon S3 using snapshots. Redshift stores your snapshots for a user-defined period, which can be from one to thirty-five days. You can also take your own snapshots at any time; these snapshots leverage all existing system snapshots and are retained until you explicitly delete them.

Amazon Redshift continuously monitors the health of the cluster and automatically re-replicates data from failed drives and replaces nodes as necessary. All of this happens without any effort on your part, although you may see a slight performance degradation during the re-replication process.
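To illustrate the manual snapshots described under Data Backups above, the sketch below uses boto3 to take a point-in-time snapshot of a hypothetical cluster; automated snapshots continue independently on the retention schedule you configure.

    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")

    # Take a user-initiated (manual) snapshot; it is retained until you delete it.
    redshift.create_cluster_snapshot(
        SnapshotIdentifier="analytics-manual-2020-07-01",  # hypothetical snapshot name
        ClusterIdentifier="analytics-cluster",             # hypothetical cluster
    )

    # A manual or system snapshot can later seed a new cluster, for example:
    # redshift.restore_from_cluster_snapshot(
    #     ClusterIdentifier="analytics-cluster-restored",
    #     SnapshotIdentifier="analytics-manual-2020-07-01",
    # )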
You can use any system or user snapshot to restore your cluster using the AWS Management Console or the Amazon Redshift APIs. Your cluster is available as soon as the system metadata has been restored, and you can start running queries while user data is spooled down in the background.

Data Encryption

When creating a cluster, you can choose to encrypt it in order to provide additional protection for your data at rest. When you enable encryption in your cluster, Amazon Redshift stores all data in user-created tables in an encrypted format using hardware-accelerated AES-256 block encryption keys. This includes all data written to disk as well as any backups.

Amazon Redshift uses a four-tier, key-based architecture for encryption. These keys consist of data encryption keys, a database key, a cluster key, and a master key:

• Data encryption keys encrypt data blocks in the cluster. Each data block is assigned a randomly generated AES-256 key. These keys are encrypted by using the database key for the cluster.
• The database key encrypts data encryption keys in the cluster. The database key is a randomly generated AES-256 key. It is stored on disk in a separate network from the Amazon Redshift cluster and passed to the cluster across a secure channel.
• The cluster key encrypts the database key for the Amazon Redshift cluster. You can use either AWS or a hardware security module (HSM) to store the cluster key. HSMs provide direct control of key generation and management, and make key management separate and distinct from the application and the database.
• The master key encrypts the cluster key if it is stored in AWS. The master key encrypts the cluster-key-encrypted database key if the cluster key is stored in an HSM.

You can have Redshift rotate the encryption keys for your encrypted clusters at any time. As part of the rotation process, keys are also updated for all of the cluster's automatic and manual snapshots.

Note: Enabling encryption in your cluster will impact performance, even though it is hardware accelerated. Encryption also applies to backups. When restoring from an encrypted snapshot, the new cluster will be encrypted as well.

To encrypt your table load data files when you upload them to Amazon S3, you can use Amazon S3 server-side encryption. When you load the data from Amazon S3, the COPY command will decrypt the data as it loads the table.

Database Audit Logging

Amazon Redshift logs all SQL operations, including connection attempts, queries, and changes to your database. You can access these logs using SQL queries against system tables, or choose to have them downloaded to a secure Amazon S3 bucket. You can then use these audit logs to monitor your cluster for security and troubleshooting purposes.

Automatic Software Patching

Amazon Redshift manages all the work of setting up, operating, and scaling your data warehouse, including provisioning capacity, monitoring the cluster, and applying patches and upgrades to the Amazon Redshift engine. Patches are applied only during specified maintenance windows.

SSL Connections

To protect your data in transit within the AWS cloud, Amazon Redshift uses hardware-accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY, UNLOAD, backup, and restore operations. You can encrypt the connection between your client and the cluster by specifying SSL in the parameter group associated with the cluster. To have your clients also authenticate the Redshift server, you can install the public key (.pem file) for the SSL certificate on your client and use the key to connect to your clusters.
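One way to require and verify SSL from a client is sketched below. It assumes the psycopg2 PostgreSQL driver, a hypothetical cluster endpoint, database, and user, and a locally downloaded Redshift certificate bundle (shown as redshift-ca-bundle.crt); the parameter group setting mentioned above enforces encryption on the server side.

    import psycopg2

    # Sketch: connect to the leader node over SSL and verify the server certificate.
    conn = psycopg2.connect(
        host="analytics-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
        port=5439,
        dbname="analytics",
        user="app_user",
        password="example-password",
        sslmode="verify-full",                 # encrypt and verify the host name against the certificate
        sslrootcert="redshift-ca-bundle.crt",  # CA bundle installed on the client
    )

    with conn.cursor() as cur:
        cur.execute("SELECT current_database()")
        print(cur.fetchone())
    conn.close()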
Amazon Redshift offers the newer, stronger cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL clients to provide Perfect Forward Secrecy between the client and the Redshift cluster. Perfect Forward Secrecy uses session keys that are ephemeral and not stored anywhere, which prevents the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised. You do not need to configure anything in Amazon Redshift to enable ECDHE; if you connect from a SQL client tool that uses ECDHE to encrypt communication between the client and server, Amazon Redshift will use the provided cipher list to make the appropriate connection.

Amazon ElastiCache Security

Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale distributed in-memory cache environments in the cloud. The service improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system instead of relying entirely on slower, disk-based databases. It can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals) or compute-intensive workloads (such as a recommendation engine). Caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations.

The Amazon ElastiCache service automates time-consuming management tasks for in-memory cache environments, such as patch management, failure detection, and recovery. It works in conjunction with other Amazon Web Services (such as Amazon EC2, Amazon CloudWatch, and Amazon SNS) to provide a secure, high-performance, and managed in-memory cache. For example, an application running in Amazon EC2 can securely access an Amazon ElastiCache Cluster in the same region with very low latency.

Using the Amazon ElastiCache service, you create a Cache Cluster, which is a collection of one or more Cache Nodes, each running an instance of the Memcached service. A Cache Node is a fixed-size chunk of secure, network-attached RAM. Each Cache Node runs an instance of the Memcached service and has its own DNS name and port. Multiple types of Cache Nodes are supported, each with varying amounts of associated memory. A Cache Cluster can be set up with a specific number of Cache Nodes and a Cache Parameter Group that controls the properties for each Cache Node. All Cache Nodes within a Cache Cluster are designed to be of the same Node Type and have the same parameter and security group settings.

Amazon ElastiCache allows you to control access to your Cache Clusters using Cache Security Groups. A Cache Security Group acts like a firewall, controlling network access to your Cache Cluster. By default, network access is turned off to your Cache Clusters. If you want your applications to access your Cache Cluster, you must explicitly enable access from hosts in specific EC2 security groups. Once ingress rules are configured, the same rules apply to all Cache Clusters associated with that Cache Security Group.

To allow network access to your Cache Cluster, create a Cache Security Group and use the AuthorizeCacheSecurityGroupIngress API or CLI command to authorize the desired EC2 security group (which in turn specifies the EC2 instances allowed).
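A sketch of that authorization flow with boto3 is shown below; the group names and account ID are hypothetical, and this Cache Security Group model applies to clusters launched outside a VPC.

    import boto3

    elasticache = boto3.client("elasticache", region_name="us-east-1")

    # Create a Cache Security Group and allow access from one EC2 security group.
    elasticache.create_cache_security_group(
        CacheSecurityGroupName="app-cache-sg",      # hypothetical group name
        Description="Access for the web tier",
    )
    elasticache.authorize_cache_security_group_ingress(
        CacheSecurityGroupName="app-cache-sg",
        EC2SecurityGroupName="web-tier-sg",         # hypothetical EC2 security group
        EC2SecurityGroupOwnerId="111122223333",     # hypothetical AWS account ID
    )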
IP range-based access control is currently not enabled for Cache Clusters. All clients to a Cache Cluster must be within the EC2 network and authorized via Cache Security Groups.

ElastiCache for Redis provides backup and restore functionality, where you can create a snapshot of your entire Redis cluster as it exists at a specific point in time. You can schedule automatic, recurring daily snapshots, or you can create a manual snapshot at any time. For automatic snapshots, you specify a retention period; manual snapshots are retained until you delete them. The snapshots are stored in Amazon S3 with high durability and can be used for warm starts, backups, and archiving.

Application Services

Amazon Web Services offers a variety of managed services to use with your applications, including services that provide application streaming, queueing, push notification, email delivery, search, and transcoding.

Amazon CloudSearch Security

Amazon CloudSearch is a managed service in the cloud that makes it easy to set up, manage, and scale a search solution for your website. Amazon CloudSearch enables you to search large collections of data such as web pages, document files, forum posts, or product information. It enables you to quickly add search capabilities to your website without having to become a search expert or worry about hardware provisioning, setup, and maintenance. As your volume of data and traffic fluctuates, Amazon CloudSearch automatically scales to meet your needs.

An Amazon CloudSearch domain encapsulates a collection of data you want to search, the search instances that process your search requests, and a configuration that controls how your data is indexed and searched. You create a separate search domain for each collection of data you want to make searchable. For each domain, you configure indexing options that describe the fields you want to include in your index and how you want to use them, text options that define domain-specific stopwords, stems, and synonyms, rank expressions that you can use to customize how search results are ranked, and access policies that control access to the domain's document and search endpoints. All Amazon CloudSearch configuration requests must be authenticated using standard AWS authentication.

Amazon CloudSearch provides separate endpoints for accessing the configuration, search, and document services:

• The configuration service is accessed through a general endpoint: cloudsearch.us-east-1.amazonaws.com
• The document service endpoint is used to submit documents to the domain for indexing and is accessed through a domain-specific endpoint: http://doc-domainname-domainid.us-east-1.cloudsearch.amazonaws.com/
• The search endpoint is used to submit search requests to the domain and is accessed through a domain-specific endpoint: http://search-domainname-domainid.us-east-1.cloudsearch.amazonaws.com

Like all AWS Services, Amazon CloudSearch requires that every request made to its control API be authenticated, so only authenticated users can access and manage your CloudSearch domain. API requests are signed with an HMAC-SHA1 or HMAC-SHA256 signature calculated from the request and the user's AWS Secret Access Key. Additionally, the Amazon CloudSearch control API is accessible via SSL-encrypted endpoints.
You can control access to Amazon CloudSearch management functions by creating users under your AWS Account using AWS IAM and controlling which CloudSearch operations these users have permission to perform.

Amazon Simple Queue Service (Amazon SQS) Security

Amazon SQS is a highly reliable, scalable message queuing service that enables asynchronous message-based communication between distributed components of an application. The components can be computers or Amazon EC2 instances, or a combination of both. With Amazon SQS, you can send any number of messages to an Amazon SQS queue at any time from any component. The messages can be retrieved from the same component or a different one, right away or at a later time (within 4 days). Messages are highly durable; each message is persistently stored in highly available, highly reliable queues. Multiple processes can read from and write to an Amazon SQS queue at the same time without interfering with each other.

Amazon SQS access is granted based on an AWS Account or a user created with AWS IAM. Once authenticated, the AWS Account has full access to all user operations. An AWS IAM user, however, only has access to the operations and queues for which they have been granted access via policy. By default, access to each individual queue is restricted to the AWS Account that created it. However, you can allow other access to a queue using either an SQS-generated policy or a policy you write.

Amazon SQS is accessible via SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2. Data stored within Amazon SQS is not encrypted by AWS; however, the user can encrypt data before it is uploaded to Amazon SQS, provided that the application utilizing the queue has a means to decrypt the message when retrieved. Encrypting messages before sending them to Amazon SQS helps protect against access to sensitive customer data by unauthorized persons, including AWS.

Amazon Simple Notification Service (Amazon SNS) Security

Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications. Amazon SNS provides a simple web services interface that can be used to create topics that customers want to notify applications (or people) about, subscribe clients to these topics, publish messages, and have these messages delivered over the clients' protocol of choice (i.e., HTTP/HTTPS, email, etc.). Amazon SNS delivers notifications to clients using a "push" mechanism that eliminates the need to periodically check or "poll" for new information and updates. Amazon SNS can be leveraged to build highly reliable, event-driven workflows and messaging applications without the need for complex middleware and application management. The potential uses for Amazon SNS include monitoring applications, workflow systems, time-sensitive information updates, mobile applications, and many others.

Amazon SNS provides access control mechanisms so that topics and messages are secured against unauthorized access. Topic owners can set policies for a topic that restrict who can publish or subscribe to a topic. Additionally, topic owners can encrypt transmission by specifying that the delivery mechanism must be HTTPS.
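As a sketch of those controls, the boto3 example below creates a topic, subscribes an HTTPS endpoint so that deliveries are encrypted in transit, and publishes a message. The topic name and endpoint URL are hypothetical, and the endpoint must confirm the subscription before it receives notifications.

    import boto3

    sns = boto3.client("sns", region_name="us-east-1")

    # Create a topic and require delivery over HTTPS for this subscriber.
    topic_arn = sns.create_topic(Name="deployment-alerts")["TopicArn"]   # hypothetical topic
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="https",                                   # encrypted delivery mechanism
        Endpoint="https://alerts.example.com/sns-handler",  # hypothetical subscriber endpoint
    )
    sns.publish(TopicArn=topic_arn, Subject="Deploy finished", Message="Build 42 is live.")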
Amazon SNS access is granted based on an AWS Account or a user created with AWS IAM. Once authenticated, the AWS Account has full access to all user operations. An AWS IAM user, however, only has access to the operations and topics for which they have been granted access via policy. By default, access to each individual topic is restricted to the AWS Account that created it. However, you can allow other access to SNS using either an SNS-generated policy or a policy you write.

Amazon Simple Workflow Service (Amazon SWF) Security

The Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that coordinate work across distributed components. Using Amazon SWF, you can structure the various processing steps in an application as "tasks" that drive work in distributed applications, and Amazon SWF coordinates these tasks in a reliable and scalable manner. Amazon SWF manages task execution dependencies, scheduling, and concurrency based on a developer's application logic. The service stores tasks, dispatches them to application components, tracks their progress, and keeps their latest state. Amazon SWF provides simple API calls that can be executed from code written in any language and run on your EC2 instances, or on any of your machines located anywhere in the world that can access the Internet. Amazon SWF acts as a coordination hub with which your application hosts interact. You create desired workflows, with their associated tasks and any conditional logic you wish to apply, and store them with Amazon SWF.

Amazon SWF access is granted based on an AWS Account or a user created with AWS IAM. All actors that participate in the execution of a workflow (deciders, activity workers, and workflow administrators) must be IAM users under the AWS Account that owns the Amazon SWF resources. You cannot grant users associated with other AWS Accounts access to your Amazon SWF workflows. An AWS IAM user, however, only has access to the workflows and resources for which they have been granted access via policy.

Amazon Simple Email Service (Amazon SES) Security

Amazon Simple Email Service (SES), built on Amazon's reliable and scalable infrastructure, is a mail service that can both send and receive mail on behalf of your domain. Amazon SES helps you maximize email deliverability and stay informed of the delivery status of your emails. Amazon SES integrates with other AWS services, making it easy to send emails from applications being hosted on services such as Amazon EC2.

Unfortunately, with other email systems it is possible for a spammer to falsify an email header and spoof the originating email address so that it appears as though the email originated from a different source. To mitigate these problems, Amazon SES requires users to verify their email address or domain in order to confirm that they own it and to prevent others from using it. To verify a domain, Amazon SES requires the sender to publish a DNS record that Amazon SES supplies as proof of control over the domain. Amazon SES periodically reviews domain verification status and revokes verification in cases where it is no longer valid.

Amazon SES takes proactive steps to prevent questionable content from being sent, so that ISPs receive consistently high-quality email from our domains and therefore view Amazon SES as a trusted email origin. Below are some of the features that maximize deliverability and
dependability for all of our senders: • Amazon SES uses con tentfiltering technologies to help detect and block messages containing viruses or malware before they can be sent • Amazon SES maintains complaint feedback loops with major ISPs Complaint feedback loops indicate which emails a recipient marked as spam A mazon SES provides you access to these delivery metrics to help guide your sending strategy • Amazon SES uses a variety of techniques to measure the quality of each user’s sending These mechanisms help identify and disable attempts to use Amazon SES for un solicited mail and detect other sending patterns that would harm Amazon SES’s reputation with ISPs mailbox providers and anti spam services • Amazon SES supports authentication mechanisms such as Sender Policy Framework (SPF) and DomainKeys Identified Ma il (DKIM) When you authenticate an email you provide evidence to ISPs that you own the domain Amazon SES makes it easy for you to authenticate your emails If you configure your account to use Easy DKIM Amazon SES will DKIM sign your emails on your beh alf so you can focus on other aspects of your email sending strategy To ensure optimal deliverability we recommend that you authenticate your emails As with other AWS services you use security credentials to verify who you are and whether you have per mission to interact with Amazon SES For information about which credentials to use see Using Credentials with Amazon SES Amazon SES also integrates with AWS IAM so that you can specify which Amazon SES API actions a user can perform If you choose to co mmunicate with Amazon SES through its SMTP interface you are required to encrypt your connection using TLS Amazon SES supports two mechanisms for establishing a TLS encrypted connection: STARTTLS and TLS Wrapper If you choose to communicate with Amazon SES over HTTP then all communication will be protected by TLS through Amazon SES’s HTTPS endpoint When delivering email to its ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 71 final destination Amazon SES encrypts the email content with opportunistic TLS if supported by the receiver Amazon Elastic T ranscoder Service Security The Amazon Elastic Transcoder service simplifies and automates what is usually a complex process of converting media files from one format size or quality to another The Elastic Transcoder service converts standard definition (SD) or high definition (HD) video files as well as audio files It reads input from an Amazon S3 bucket transcodes it and writes the resulting file to another Amazon S3 bucket You can use the same bucket for input and output and the buckets can be in any AWS region The Elastic Transcoder accepts input files in a wide variety of web consumer and professional formats Output file types include the MP3 MP4 OGG TS WebM HLS using MPEG 2 TS and Smooth Streaming using fmp4 container types storing H 264 or VP8 video and AAC MP3 or Vorbis audio You'll start with one or more input files and create transcoding jobs in a type of workflow called a transcoding pipeline for each file When you create the pipeline you'll specify input and output buckets as well as an IAM role Each job must reference a media conversion template called a transcoding preset and will result in the generation of one or more output files A preset tells the Elastic Transcoder what settings to use when processing a particular input file You can specify many settings when you create a preset including the sample rate bit rate resolution (output 
height and width) the number of reference and keyframes a video bit rate some thumbnail creation options etc A best effort is m ade to start jobs in the order in which they’re submitted but this is not a hard guarantee and jobs typically finish out of order since they are worked on in parallel and vary in complexity You can pause and resume any of your pipelines if necessary Elastic Transcoder supports the use of SNS notifications when it starts and finishes each job and when it needs to tell you that it has detected an error or warning condition The SNS notification parameters are associated with each pipeline It can also use the List Jobs by Status function to find all of the jobs with a given status (eg "Completed") or the Read Job function to retrieve detailed information about a particular job Like all other AWS services Elastic Transcoder integrates with AWS Identity and Access Management (IAM) which allows you to control access to the service and to other AWS resources that Elastic Transcoder requires including Amazon S3 buckets ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 72 and Amazon SNS topics By default IAM users have no access to Elastic Transcoder or to the resources that it uses If you want IAM users to be able to work with Elastic Transcoder you must explicitly grant them permissions Amazon Elastic Transcoder requires every request made to its control API be authenticated so only authenticated proce sses or users can create modify or delete their own Amazon Transcoder pipelines and presets Requests are signed with an HMAC SHA256 signature calculated from the request and a key derived from the user’s secret key Additionally the Amazon Elastic Tran scoder API is only accessible via SSL encrypted endpoints Durability is provided by Amazon S3 where media files are redundantly stored on multiple devices across multiple facilities in an Amazon S3 region For added protection against users accidently de leting media files you can use the Versioning feature in Amazon S3 to preserve retrieve and restore every version of every object stored in an Amazon S3 bucket You can further protect versions using Amazon S3 Versioning's MFA Delete feature Once enabl ed for an Amazon S3 bucket each version deletion request must include the six digit code and serial number from your multi factor authentication device Amazon AppStream 20 Security The Amazon AppStream 20 service provides a framework for running stream ing applications particularly applications that require lightweight clients running on mobile devices It enables you to store and run your application on powerful parallel processing GPUs in the cloud and then stream input and output to any client devic e This can be a pre existing application that you modify to work with Amazon AppStream 20 or a new application that you design specifically to work with the service The Amazon AppStream 20 SDK simplifies the development of interactive streaming applications and client applications The SDK provides APIs that connect your customers’ devices directly to your application capture and encode audio and video stream content across the Internet i n near real time decode content on client devices and return user input to the application Because your application's processing occurs in the cloud it can scale to handle extremely large computational loads Amazon AppStream 20 deploys streaming appl ications on Amazon EC2 When you add a streaming application through the AWS Management Console the service creates 
the AMI required to host your application and makes your application available ArchivedAmazon Web Services Amazon Web Se rvices: Overview of Security Processes Page 73 to streaming clients The service scales your application as needed within the capacity limits you have set to meet demand Clients using the Amazon AppStream 20 SDK automatically connect to your streamed application In most cases you’ll want to ensure that the user running the client is authorized to use your a pplication before letting him obtain a session ID We recommend that you use some sort of entitlement service which is a service that authenticates clients and authorizes their connection to your application In this case the entitlement service will also call into the Amazon AppStream 20 REST API to create a new streaming session for the client After the entitlement service creates a new session it returns the session identifier to the authorized client as a single use entitlement URL The client then uses the entitlement URL to connect to the application Your entitlement service can be hosted on an Amazon EC2 instance or on AWS Elastic Beanstalk Amazon AppStream 20 utilizes an AWS CloudForm ation template that automates the process of deploying a GPU EC2 instance that has the AppStream 20 Windows Application and Windows Client SDK libraries installed; is configured for SSH RDC or VPN access; and has an elastic IP address assigned to it By using this template to deploy your standalone streaming server all you need to do is upload your application to the server and run the command to launch it You can then use the Amazon AppStream 20 Service Simulator tool to test your application in stan dalone mode before deploying it into production Amazon AppStream 20 also utilizes the STX Protocol to manage the streaming of your application from AWS to local devices The Amazon AppStream 20 STX Protocol is a proprietary protocol used to stream high quality application video over varying network conditions; it monitors network conditions and automatically adapts the video stream to provide a low latency and high resolution experience to your customers It minimizes latency while syncing audio and vid eo as well as capturing input from your customers to be sent back to the application running in AWS Analytics Services Amazon Web Services provides cloud based analytics services to help you process and analyze any volume of data whether your need is for managed Hadoop clusters real time streaming data petabyte scale data warehousing or orchestration ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 74 Amazon EMR Security Amazon EMR is a managed web service you can use to run Hadoop clusters that process vast amounts of data by distributing the work and data among several servers It utilizes an enhanced version of the Apache Hadoop framework running on the web scale infrastructure of Amazon EC2 and Amazon S3 You simply upload your input data and a data processing application into Amazon S3 Amazon EMR then launches the number of Amazon EC2 instances you specify The service begins the job flow execution while pulling the input data from Amazon S3 into the launched Amazon EC2 instances Once the job flow is finished Amazon EMR transfers the output data to Amazon S3 where you can then retrieve it or use it as input in another job flow When launching job flows on your behalf Amazon EMR sets up two Amazon EC2 security groups: one for the master nodes and another for the slaves The master security group has a port 
open for communication with the service. It also has the SSH port open to allow you to SSH into the instances, using the key specified at startup. The slaves start in a separate security group, which only allows interaction with the master instance. By default, both security groups are set up to not allow access from external sources, including Amazon EC2 instances belonging to other customers. Since these are security groups within your account, you can reconfigure them using the standard EC2 tools or dashboard. To protect customer input and output datasets, Amazon EMR transfers data to and from Amazon S3 using SSL.

Amazon EMR provides several ways to control access to the resources of your cluster. You can use AWS IAM to create user accounts and roles and configure permissions that control which AWS features those users and roles can access. When you launch a cluster, you can associate an Amazon EC2 key pair with the cluster, which you can then use when you connect to the cluster using SSH. You can also set permissions that allow users other than the default Hadoop user to submit jobs to your cluster.

By default, if an IAM user launches a cluster, that cluster is hidden from other IAM users on the AWS account. This filtering occurs on all Amazon EMR interfaces (the console, CLI, API, and SDKs) and helps prevent IAM users from accessing and inadvertently changing clusters created by other IAM users. It is useful for clusters that are intended to be viewed by only a single IAM user and the main AWS account. You also have the option to make a cluster visible and accessible to all IAM users under a single AWS account.

For an additional layer of protection, you can launch the EC2 instances of your EMR cluster into an Amazon VPC, which is like launching it into a private subnet. This allows you to control access to the entire subnetwork. You can also launch the cluster into a VPC and enable the cluster to access resources on your internal network using a VPN connection. You can encrypt the input data before you upload it to Amazon S3 using any common data encryption tool. If you do encrypt the data before it is uploaded, you then need to add a decryption step to the beginning of your job flow when Amazon Elastic MapReduce fetches the data from Amazon S3.

Amazon Kinesis Security

Amazon Kinesis is a managed service designed to handle real-time streaming of big data. It can accept any amount of data, from any number of sources, scaling up and down as needed. You can use Kinesis in situations that call for large-scale, real-time data ingestion and processing, such as server logs, social media or market data feeds, and web clickstream data.

Applications read and write data records to Amazon Kinesis in streams. You can create any number of Kinesis streams to capture, store, and transport data. Amazon Kinesis automatically manages the infrastructure, storage, networking, and configuration needed to collect and process your data at the level of throughput your streaming applications need. You don't have to worry about provisioning, deployment, or ongoing maintenance of hardware, software, or other services to enable real-time capture and storage of large-scale data. Amazon Kinesis also synchronously replicates data across three facilities in an AWS Region, providing high availability and data durability.

In Amazon Kinesis, data records contain a sequence number, a partition key, and a data blob, which is an uninterpreted, immutable sequence of bytes. The Amazon Kinesis service does not inspect, interpret, or change the data in the blob in any way.
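The sketch below writes one record to a hypothetical stream with boto3. The service stores the blob bytes exactly as supplied, and the partition key determines which shard receives the record.

    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Put a single record; the data blob is opaque to Kinesis.
    event = {"user_id": "u-123", "action": "click", "page": "/pricing"}
    kinesis.put_record(
        StreamName="clickstream",                # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),  # uninterpreted sequence of bytes
        PartitionKey=event["user_id"],           # groups a user's events onto the same shard
    )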
Data records are accessible for only 24 hours from the time they are added to an Amazon Kinesis stream, and then they are automatically discarded.

Your application is a consumer of an Amazon Kinesis stream, which typically runs on a fleet of Amazon EC2 instances. A Kinesis application uses the Amazon Kinesis Client Library to read from the Amazon Kinesis stream. The Kinesis Client Library takes care of a variety of details for you, including failover, recovery, and load balancing, allowing your application to focus on processing the data as it becomes available. After processing the record, your consumer code can pass it along to another Kinesis stream; write it to an Amazon S3 bucket, a Redshift data warehouse, or a DynamoDB table; or simply discard it. A connector library is available to help you integrate Kinesis with other AWS services (such as DynamoDB, Redshift, and Amazon S3) as well as third-party products like Apache Storm.

You can control logical access to Kinesis resources and management functions by creating users under your AWS Account using AWS IAM and controlling which Kinesis operations these users have permission to perform. To facilitate running your producer or consumer applications on an Amazon EC2 instance, you can configure that instance with an IAM role. That way, AWS credentials that reflect the permissions associated with the IAM role are made available to applications on the instance, which means you don't have to use your long-term AWS security credentials. Roles have the added benefit of providing temporary credentials that expire within a short timeframe, which adds an additional measure of protection. See the AWS Identity and Access Management User Guide for more information about IAM roles.

The Amazon Kinesis API is only accessible via an SSL-encrypted endpoint (kinesis.us-east-1.amazonaws.com) to help ensure secure transmission of your data to AWS. You must connect to that endpoint to access Kinesis, but you can then use the API to direct AWS Kinesis to create a stream in any AWS Region.

AWS Data Pipeline Security

The AWS Data Pipeline service helps you process and move data between different data sources at specified intervals using data-driven workflows and built-in dependency checking. When you create a pipeline, you define data sources, preconditions, destinations, processing steps, and an operational schedule. Once you define and activate a pipeline, it will run automatically according to the schedule you specified.

With AWS Data Pipeline, you don't have to worry about checking resource availability, managing inter-task dependencies, retrying transient failures/timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline takes care of launching the AWS services and resources your pipeline needs to process your data (e.g., Amazon EC2 or EMR) and transferring the results to storage (e.g., Amazon S3, RDS, DynamoDB, or EMR).

When you use the console, AWS Data Pipeline creates the necessary IAM roles and policies, including a trusted entities list, for you. IAM roles determine what your pipeline can access and the actions it can perform. Additionally, when your pipeline creates a resource, such as an EC2 instance, IAM roles determine the EC2 instance's permitted resources and actions. When you create a pipeline, you specify one IAM role that governs your pipeline and another IAM role to govern your pipeline's resources (refe
rred to as a "resource role") which can be the same role for both As part of the security best ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 77 practice of least privilege we recommend that you consider the minimum permissions necessary for your pipeline to perform work and define the IAM roles accord ingly Like most AWS services AWS Data Pipeline also provides the option of secure (HTTPS) endpoints for access via SSL Deployment and Management Services Amazon Web Services provides a variety of tools to help with the deployment and management of your applications This includes services that allow you to create individual user accounts with credentials for access to AWS services It also includes services for creating and updating stacks of AWS resources deploying applications on those resources and monitoring the health of those AWS resources Other tools help you manage cryptographic keys using hardware security modules (HSMs) and log AWS API activity for security and compliance purposes AWS Identity and Access Management (IAM) IAM allows you to create multiple users and manage the permissions for each of these users within your AWS Account A user is an identity (within an AWS Account) with unique security credentials that can be used to access AWS Service s IAM eliminates the need to share passwords or keys and makes it easy to enable or disable a user’s access as appropriate IAM enables you to implement security best practices such as least privilege by granting unique credentials to every user within your AWS Account and only granting permission to access the AWS services and resources required for the users to perform their jobs IAM is secure by default; new users have no access to AWS until permissions are explicitly granted IAM is also integrated with the AWS Marketplace so that you can control who in your organization can subscribe to the software and services offered in the Marketplace Since subscribing to certain software in the Marketplace launches an EC2 instance to run the software this i s an important access control feature Using IAM to control access to the AWS Marketplace also enables AWS Account owners to have fine grained control over usage and software costs IAM enables you to minimize the use of your AWS Account credentials Once you create IAM user accounts all interactions with AWS Services and resources should occur with IAM user security credentials ArchivedAmazon Web Services Amazon Web Serv ices: Overview of Security Processes Page 78 Roles An IAM role uses temporary security credentials to allow you to delegate access to users or services that normally don't have access to your AWS resources A role is a set of permissions to access specific AWS resources but these permissions are not tied to a specific IAM user or group An authorized entity (eg mobile user EC2 instance) assumes a role and receives tempo rary security credentials for authenticating to the resources defined in the role Temporary security credentials provide enhanced security due to their short life span (the default expiration is 12 hours) and the fact that they cannot be reused after the y expire This can be particularly useful in providing limited controlled access in certain situations: • Federated (non AWS) User Access Federated users are users (or applications) who do not have AWS Accounts With roles you can give them access to your AWS resources for a limited amount of time This is useful if you have non AWS users that you can authenticate with an external 
service such as Microsoft Active Directory, LDAP, or Kerberos. The temporary AWS credentials used with the roles provide identity federation between AWS and your non-AWS users in your corporate identity and authorization system. If your organization supports SAML 2.0 (Security Assertion Markup Language 2.0), you can create trust between your organization as an identity provider (IdP) and other organizations as service providers. In AWS, you can configure AWS as the service provider and use SAML to provide your users with federated single sign-on (SSO) to the AWS Management Console or to get federated access to call AWS APIs. Roles are also useful if you create a mobile or web-based application that accesses AWS resources. AWS resources require security credentials for programmatic requests; however, you shouldn't embed long-term security credentials in your application because they are accessible to the application's users and can be difficult to rotate. Instead, you can let users sign in to your application using Login with Amazon, Facebook, or Google, and then use their authentication information to assume a role and get temporary security credentials.

• Cross-Account Access. For organizations who use multiple AWS Accounts to manage their resources, you can set up roles to provide users who have permissions in one account access to resources under another account. For organizations who have personnel who only rarely need access to resources under another account, using roles helps ensure that credentials are provided temporarily, only as needed.

• Applications Running on EC2 Instances that Need to Access AWS Resources. If an application runs on an Amazon EC2 instance and needs to make requests for AWS resources such as Amazon S3 buckets or a DynamoDB table, it must have security credentials. Using roles instead of creating individual IAM accounts for each application on each instance can save significant time for customers who manage a large number of instances or an elastically scaling fleet using AWS Auto Scaling.

The temporary credentials include a security token, an Access Key ID, and a Secret Access Key. To give a user access to certain resources, you distribute the temporary security credentials to the user you are granting temporary access to. When the user makes calls to your resources, the user passes in the token and Access Key ID, and signs the request with the Secret Access Key. The token will not work with different access keys. How the user passes in the token depends on the API and version of the AWS product the user is making calls to. For more information about temporary security credentials, see the AWS Security Token Service API Reference. A minimal example of requesting temporary credentials with the AssumeRole API appears below.

The use of temporary credentials means additional protection for you because you don't have to manage or distribute long-term credentials to temporary users. In addition, the temporary credentials get automatically loaded to the target instance, so you don't have to embed them somewhere unsafe like your code. Temporary credentials are automatically rotated or changed multiple times a day without any action on your part, and are stored securely by default. For more information about using IAM roles to auto-provision keys on EC2 instances, see the AWS Identity and Access Management Documentation.
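The sketch below requests temporary credentials for a hypothetical role with the AssumeRole API via boto3 and then uses them like any other credentials until they expire; the role ARN, session name, and account ID are placeholders.

    import boto3

    sts = boto3.client("sts")

    # Request short-lived credentials for a role (role ARN is hypothetical).
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/ReadOnlyAnalyst",
        RoleSessionName="analyst-session",
    )
    creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

    # Use the temporary credentials exactly like long-term keys, until they expire.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])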
Amazon CloudWatch Security

Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources, starting with Amazon EC2. It provides customers with visibility into resource utilization, operational performance, and overall demand patterns, including metrics such as CPU utilization, disk reads and writes, and network traffic. You can set up CloudWatch alarms to notify you if certain thresholds are crossed, or to take other automated actions such as adding or removing EC2 instances if Auto Scaling is enabled.

CloudWatch captures and summarizes utilization metrics natively for AWS resources, but you can also have other logs sent to CloudWatch to monitor. You can route your guest OS, application, and custom log files for the software installed on your EC2 instances to CloudWatch, where they will be stored in durable fashion for as long as you'd like. You can configure CloudWatch to monitor the incoming log entries for any desired symbols or messages and to surface the results as CloudWatch metrics. You could, for example, monitor your web server's log files for 404 errors to detect bad inbound links, or for invalid user messages to detect unauthorized login attempts to your guest OS.

Like all AWS Services, Amazon CloudWatch requires that every request made to its control API be authenticated, so only authenticated users can access and manage CloudWatch. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. Additionally, the Amazon CloudWatch control API is only accessible via SSL-encrypted endpoints. You can further control access to Amazon CloudWatch by creating users under your AWS Account using AWS IAM and controlling what CloudWatch operations these users have permission to call.

AWS CloudHSM Security

The AWS CloudHSM service provides customers with dedicated access to a hardware security module (HSM) appliance designed to provide secure cryptographic key storage and operations within an intrusion-resistant, tamper-evident device. You can generate, store, and manage the cryptographic keys used for data encryption so that they are accessible only by you. AWS CloudHSM appliances are designed to securely store and process cryptographic key material for a wide variety of uses, such as database encryption, Digital Rights Management (DRM), Public Key Infrastructure (PKI), authentication and authorization, document signing, and transaction processing. They support some of the strongest cryptographic algorithms available, including AES, RSA, ECC, and many others.

The AWS CloudHSM service is designed to be used with Amazon EC2 and VPC, providing the appliance with its own private IP within a private subnet. You can connect to CloudHSM appliances from your EC2 servers through SSL/TLS, which uses two-way digital certificate authentication and 256-bit SSL encryption to provide a secure communication channel. Selecting CloudHSM service in the same region as your EC2 instance decreases network latency, which can improve your application performance. You can configure a client on your EC2 instance that allows your applications to use the APIs provided by the HSM, including PKCS#11, MS CAPI, and Java JCA/JCE (Java Cryptography Architecture/Java Cryptography Extensions).

Before you begin using an HSM, you must set up at least one partition on the appliance. A cryptographic partition is a logical and physical security boundary that restricts access to your keys, so only you control your keys and the operations performed by the HSM. AWS has administrative credentials to the appliance, but these
credentials can only be used to manage the appliance not the HSM partitions on the appliance AWS uses these credentials to monitor and maintain the health and availability of the appliance AWS cannot extract your keys nor can AWS cause th e appliance to perform any cryptographic operation using your keys The HSM appliance has both physical and logical tamper detection and response mechanisms that erase the cryptographic key material and generate event logs if tampering is detected The HSM is designed to detect tampering if the physical barrier of the HSM appliance is breached In addition after three unsuccessful attempts to access an HSM partition with HSM Admin credentials the HSM appliance erases its HSM partitions When your CloudHSM subscription ends and you have confirmed that the contents of the HSM are no longer needed you must delete each partition and its contents as well as any logs As part of the decommissioning process AWS zeroizes the appliance permanently erasing all ke y material AWS CloudTrail Security AWS CloudTrail provides a log of user and system actions affecting AWS resources within your account For each event recorded you can see what service was accessed what action was performed any parameters for the acti on and who made the request For mutating actions you can see the result of the action Not only can you see which one of your users or services performed an action on an AWS service but you can see whether it was as the AWS root account user or an IAM user or whether it was with temporary security credentials for a role or federated user ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 82 CloudTrail captures information about API calls to an AWS resource whether that call was made from the AWS Management Console CLI or an SDK If the API request returned an error CloudTrail provides the description of the error including messages for authorization failures It even captures AWS Management Console sign in events creating a log record every time an AWS account owner a federated user or an IAM user simply signs into the console Once you have enabled CloudTrail event logs are delivered about every 5 minutes to the Amazon S3 bucket of your choice The log files are organized by AWS Account ID region service name date and time You can configure CloudTrail so that it aggregates log files from multiple regions and/or accounts into a single Amazon S3 bucket By default a single trail will record and deliver events in all current and future regions In addition to S3 you can send events to CloudWatch Logs for custom metrics and alarming or you can upload the logs to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns For rapid response you can create CloudWatch Events rules to take immediate action to specific events By default log files are stored indefinitely The log files are automatically encrypted using Amazon S3's Server Side Encryption and will remain in the bucket until you choose to delete or archive them For even more security you can use KMS to encrypt the log files using a key that you own You can use Amazon S3 lifecycle configuration rules to automatically delete old log files or archive them to Amazon S3 Glacier for additional longevity at significant savings By enabling the optional log file validation you can validate that logs have not been added deleted or tampered with Like every other AWS service you can limit access to CloudTrail to only certain users You can 
use IAM to control which AWS users can create configure or delete AWS CloudTrail trails as well as which users can start and stop logging You can control access to the log files by applying I AM or Amazon S3 bucket policies You can also add an additional layer of security by enabling MFA Delete on your Amazon S3 bucket Mobile Services AWS mobile services make it easier for you to build ship run monitor optimize and scale cloud powered applications for mobile devices These services also help you authenticate users to your mobile application synchronize data and collect and analyze application usage ArchivedAmazon Web Services Amazon Web Servic es: Overview of Security Processes Page 83 Amazon Cognito Amazon Cognito provides identity and sync services for mobile and web based applications It simplifies the task of authent icating users and storing managing and syncing their data across multiple devices platforms and applications It provides temporary limited privilege credentials for both authenticated and unauthenticated users without having to manage any backend inf rastructure Amazon Cognito works with well known identity providers like Google Facebook and Amazon to authenticate end users of your mobile and web applications You can take advantage of the identification and authorization features provided by these services instead of having to build and maintain your own Your application authenticates with one of these identity providers using the provider’s SDK Once the end user is authenticated with the provider an OAuth or OpenID Connect token returned from th e provider is passed by your application to Cognito which returns a new Amazon Cognito ID for the user and a set of temporary limited privilege AWS credentials To begin using Amazon Cognito you create an identity pool through the Amazon Cognito console The identity pool is a store of user identity information that is specific to your AWS account During the creation of the identity pool you will be asked to create a new IAM role or pic k an existing one for your end users An IAM role is a set of permissions to access specific AWS resources but these permissions are not tied to a specific IAM user or group An authorized entity (eg mobile user EC2 instance) assumes a role and receiv es temporary security credentials for authenticating to the AWS resources defined in the role Temporary security credentials provide enhanced security due to their short life span (the default expiration is 12 hours) and the fact that they cannot be reuse d after they expire The role you select has an impact on which AWS services your end users will be able to access with the temporary credentials By default Amazon Cognito creates a new role with limited permissions – end users only have access to the Amazon Cognito Sync service and Amazon Mobile Analytics If your application needs access to other AWS resources such as Amazon S3 or DynamoDB you can modify your roles directly from the IAM management console With Amazon Cognito there’s no need to create individual AWS accounts or even IAM accounts for every one of your web/mobile app’s end users who will need to access your AWS resources In conjunction with IAM roles mobile users can securely access AWS resources and application features and even save data to the AWS cloud without having to create an account or log in However if they choose to do this later Amazon Cognito merge s data and identification information Because Amazon Cognito stores data locally as well as in the service your ArchivedAmazon Web 
Because Amazon Cognito stores data locally as well as in the service, your end users can continue to interact with their data even when they are offline. Their offline data may be stale, but anything they put into the dataset, they can immediately retrieve whether they are online or not. The client SDK manages a local SQLite store so that the application can work even when it is not connected. The SQLite store functions as a cache and is the target of all read and write operations. Cognito's sync facility compares the local version of the data to the cloud version, and pushes up or pulls down deltas as needed. Note that in order to sync data across devices, your identity pool must support authenticated identities. Unauthenticated identities are tied to the device, so unless an end user authenticates, no data can be synced across multiple devices. With Amazon Cognito, your application communicates directly with a supported public identity provider (Amazon, Facebook, or Google) to authenticate users. Amazon Cognito does not receive or store user credentials, only the OAuth or OpenID Connect token received from the identity provider. Once Amazon Cognito receives the token, it returns a new Amazon Cognito ID for the user and a set of temporary, limited-privilege AWS credentials. Each Amazon Cognito identity has access only to its own data in the sync store, and this data is encrypted when stored. In addition, all identity data is transmitted over HTTPS. The unique Amazon Cognito identifier on the device is stored in the appropriate secure location; on iOS, for example, the Amazon Cognito identifier is stored in the iOS keychain. User data is cached in a local SQLite database within the application's sandbox; if you require additional security, you can encrypt this identity data in the local cache by implementing encryption in your application.
Amazon Mobile Analytics
Amazon Mobile Analytics is a service for collecting, visualizing, and understanding mobile application usage data. It enables you to track customer behaviors, aggregate metrics, and identify meaningful patterns in your mobile applications. Amazon Mobile Analytics automatically calculates and updates usage metrics as the data is received from client devices running your app and displays the data in the console. You can integrate Amazon Mobile Analytics with your application without requiring users of your app to be authenticated with an identity provider (like Google, Facebook, or Amazon). For these unauthenticated users, Mobile Analytics works with Amazon Cognito to provide temporary, limited-privilege credentials. To do this, you first create an identity pool in Amazon Cognito. The identity pool will use IAM roles, which is a set of permissions not tied to a specific IAM user or group but which allows an entity to access specific AWS resources. The entity assumes a role and receives temporary security credentials for authenticating to the AWS resources defined in the role. By default, Amazon Cognito creates a new role with limited permissions; end users only have access to the Amazon Cognito Sync service and Amazon Mobile Analytics. If your application needs access to other AWS resources, such as Amazon S3 or DynamoDB, you can modify your roles directly from the IAM management console. You can integrate the AWS Mobile SDK for Android or iOS into your application, or use the Amazon Mobile Analytics REST API to send events from any connected device or service and visualize data in the reports. The
Amazon Mobile Analytics API is only accessible via an SSL-encrypted endpoint (https://mobileanalytics.us-east-1.amazonaws.com).
Applications
AWS applications are managed services that enable you to provide your users with secure, centralized storage and work areas in the cloud.
Amazon WorkSpaces
Amazon WorkSpaces is a managed desktop service that allows you to quickly provision cloud-based desktops for your users. Simply choose a Windows 7 bundle that best meets the needs of your users and the number of WorkSpaces that you would like to launch. Once the WorkSpaces are ready, users receive an email informing them where they can download the relevant client and log into their WorkSpace. They can then access their cloud-based desktops from a variety of endpoint devices, including PCs, laptops, and mobile devices. However, your organization's data is never sent to or stored on the end user device, because Amazon WorkSpaces uses PC-over-IP (PCoIP), which provides an interactive video stream without transmitting actual data. The PCoIP protocol compresses, encrypts, and encodes the users' desktop computing experience and transmits 'pixels only' across any standard IP network to end user devices. In order to access their WorkSpace, users must sign in using a set of unique credentials or their regular Active Directory credentials. When you integrate Amazon WorkSpaces with your corporate Active Directory, each WorkSpace joins your Active Directory domain and can be managed just like any other desktop in your organization. This means that you can use Active Directory Group Policies to manage your users' WorkSpaces to specify configuration options that control the desktop. If you choose not to use Active Directory or another type of on-premises directory to manage your user WorkSpaces, you can create a private cloud directory within Amazon WorkSpaces that you can use for administration. To provide an additional layer of security, you can also require the use of multi-factor authentication upon sign-in, in the form of a hardware or software token. Amazon WorkSpaces supports MFA using an on-premises Remote Authentication Dial-In User Service (RADIUS) server or any security provider that supports RADIUS authentication. It currently supports the PAP, CHAP, MS-CHAP1, and MS-CHAP2 protocols, along with RADIUS proxies. Each WorkSpace resides on its own EC2 instance within a VPC. You can create WorkSpaces in a VPC you already own, or have the WorkSpaces service create one for you automatically using the WorkSpaces Quick Start option. When you use the Quick Start option, WorkSpaces not only creates the VPC, but it performs several other provisioning and configuration tasks for you, such as creating an Internet Gateway for the VPC, setting up a directory within the VPC that is used to store user and WorkSpace information, creating a directory administrator account, creating the specified user accounts and adding them to the directory, and creating the WorkSpace instances. Or the VPC can be connected to an on-premises network using a secure VPN connection to allow access to an existing on-premises Active Directory and other intranet resources. You can add a security group that you create in your Amazon VPC to all the WorkSpaces that belong to your Directory. This allows you to control network access from Amazon WorkSpaces in your VPC to other resources in your Amazon VPC and on-premises network.
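Provisioning WorkSpaces can also be scripted. Below is a minimal sketch using the AWS SDK for Python (boto3); the directory ID, bundle ID, user name, and KMS key alias are hypothetical placeholders.

```python
import boto3

workspaces = boto3.client("workspaces")

# Hypothetical directory, bundle, user, and key names for illustration.
response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-0123456789",
            "UserName": "jdoe",
            "BundleId": "wsb-0123456789",
            # Optionally encrypt the root and user volumes with a KMS key you own.
            "VolumeEncryptionKey": "alias/example-workspaces-key",
            "RootVolumeEncryptionEnabled": True,
            "UserVolumeEncryptionEnabled": True,
        }
    ]
)

# Requests that could not be fulfilled are reported back rather than raising an error.
for failed in response.get("FailedRequests", []):
    print(failed["ErrorCode"], failed["ErrorMessage"])
```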
Persistent storage for WorkSpaces is provided by Amazon EBS and is automatically backed up twice a day to Amazon S3. If WorkSpaces Sync is enabled on a WorkSpace, the folder a user chooses to sync will be continuously backed up and stored in Amazon S3. You can also use WorkSpaces Sync on a Mac or PC to sync documents to or from your WorkSpace, so that you can always have access to your data regardless of the desktop computer you are using. Because it's a managed service, AWS takes care of several security and maintenance tasks like daily backups and patching. Updates are delivered automatically to your WorkSpaces during a weekly maintenance window. You can control how patching is configured for a user's WorkSpace. By default, Windows Update is turned on, but you have the ability to customize these settings, or use an alternative patch management approach if you desire. For the underlying OS, Windows Update is enabled by default on WorkSpaces and configured to install updates on a weekly basis. You can use an alternative patching approach or configure Windows Update to perform updates at a time of your choosing. You can use IAM to control who on your team can perform administrative functions like creating or deleting WorkSpaces or setting up user directories. You can also set up a WorkSpace for directory administration, install your favorite Active Directory administration tools, and create organizational units and Group Policies in order to more easily apply Active Directory changes for all your WorkSpaces users.
Amazon WorkDocs
Amazon WorkDocs is a managed enterprise storage and sharing service with feedback capabilities for user collaboration. Users can store any type of file in a WorkDocs folder and allow others to view and download them. Commenting and annotation capabilities work on certain file types, such as MS Word, without requiring the application that was used to originally create the file. WorkDocs notifies contributors about review activities and deadlines via email, and performs versioning of files that you have synced using the WorkDocs Sync application. User information is stored in an Active Directory-compatible network directory. You can either create a new directory in the cloud, or connect Amazon WorkDocs to your on-premises directory. When you create a cloud directory using WorkDocs' quick start setup, it also creates a directory administrator account with the administrator email as the username. An email is sent to your administrator with instructions to complete registration. The administrator then uses this account to manage your directory. When you create a cloud directory using WorkDocs' quick start setup, it also creates and configures a VPC for use with the directory. If you need more control over the directory configuration, you can choose the standard setup, which allows you to specify your own directory domain name, as well as one of your existing VPCs to use with the directory. If you want to use one of your existing VPCs, the VPC must have an Internet gateway and at least two subnets. Each of the subnets must be in a different Availability Zone. Using the Amazon WorkDocs Management Console, administrators can view audit logs to track file and user activity by time, IP address, and device, and choose whether to allow users to share files with others outside their organization. Users can then control who can access individual files and disable downloads of files they share.
All data in transit is encrypted using industry-standard SSL. The WorkDocs web and mobile applications and desktop sync clients transmit files directly to Amazon WorkDocs using SSL. WorkDocs users can also utilize Multi-Factor Authentication, or MFA, if their organization has deployed a RADIUS server. MFA uses the following factors: username, password, and methods supported by the RADIUS server. The protocols supported are PAP, CHAP, MS-CHAPv1, and MS-CHAPv2. You choose the AWS Region where each WorkDocs site's files are stored. Amazon WorkDocs is currently available in the US East (Virginia), US West (Oregon), and EU (Ireland) AWS Regions. All files, comments, and annotations stored in WorkDocs are automatically encrypted with AES-256 encryption.
Document Revisions
March 2020: Updated compliance certifications, hypervisor, AWS Snowball
February 2019: Added information about deleting objects in Amazon S3 Glacier
December 2018: Edit made to the Amazon Redshift Security topic
May 2017: Added section on AWS Config Security Checks
April 2017: Added section on Amazon Elastic File System
March 2017: Migrated into new format
January 2017: Updated regions
General
Optimizing_Electronic_Design_Automation_EDA_Workflows_on_AWS
ArchivedOptimizing Electronic Design Automation (EDA) Workflows on AWS September 2018 This version has been archived For the most recent version of this paper see https://docsawsamazoncom/whitepapers/latest/semiconductordesign onaws/semiconductordesignonawshtmlArchived © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its a ffiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Abstract vi Introduction 1 EDA Overview 1 Benefits of the AWS Cloud 2 Improved Productivity 2 High Availability and Durability 3 Matching Compute Resources to Requirements 3 Accelerated Upgrade Cycle 4 Paths for Migrating EDA Workflows to AWS 5 Data Access and Transfer 5 Consider what Data to Move to Amazon S3 5 Dependencies 6 Suggested EDA Tools for Initial Proof of Concept (POC) 7 Cloud Optimized Traditional Architecture 7 Buildi ng an EDA Architecture on AWS 8 Hypervisors: Nitro and Xen 9 AMI and Operating System 9 Comp ute 11 Network 15 Storage 15 Licensing 23 Remote Desktops 25 User Authent ication 27 Orchestration 27 Optimizing EDA Tools on AWS 29 Amazon EC2 Instance Types 29 Archived Operating System Optimization 30 Networking 36 Storage 36 Kernel Virtual Memory 37 Security and Governance in the AWS Cloud 37 Isolated Environments for Data Protection and Sovereig nty 38 User Authentication 38 Network 38 Data Storage and Transfer 40 Governance and Monitoring 42 Contributors 44 Document Revisio ns 44 Appendix A – Optimizing Storage 45 NFS Storage 45 Appendix B – Reference Architecture 47 Appendix C – Updating the Linux Kernel Command Line 49 Update a system with /etc/default/grub file 49 Update a system with /boot /grub/grubconf file 50 Verify Kernel Line 50 Archived Abstract Semiconductor and electronics companies using e lectronic design automation (EDA ) can significantly accelerate the ir product development lifecycle and time to market by taking advantage of the near infinite compute storage and resources available on AWS This white paper present s an overview of the EDA workflow recommendations for moving EDA tools to AWS and the specific AWS architectural components to optimize EDA work loads on AWS ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 1 Introduction The workflows applications and methods used for the design and verification of semiconductors integrated circuits (ICs) and printed circuit boards (PCBs) have been largely unchanged since the invention of computer aided engineering (CAE) and electronic design automation (EDA) software However as electr onics systems and integrated circuits have become more complex with smaller geometries the comput ing power and infr astructure requirements to design test validate and build these systems have grown significantly CAE EDA and emerging workloads such as computational 
lithography and metrology have driven the need for massive scale computing and data management in next generation electronic products In the semiconductor and electronics sector a large portion of the overall design time is spent verif ying components for example in the characterization of intellectual property (IP) cores and for full chip functional and timing verifications EDA support organizations —the specialized IT teams that provid e infrastru cture support for semiconductor companies —must invest in increasingly large server farms and high performance storage systems to enable high er quality and fast er turnaround of semiconductor test and validat ion The introduction of new and upgraded IC fabri cation technologies may require large amounts of compute and storage for relatively short times to enable rapid completion of hardware regression testing or to recharacterize design IP Semiconductor companies today use Amazon Web Services ( AWS ) to take advantage of a more rapid flexible deployment of CAE and EDA infrastructure from the complete IC design workflow from register transfer level (RTL) design to the delivery of GDSII files to a foundry for chip fabrication AWS compute storage and higher level services are available on a dynamic asneeded basis with out the significant up front capital expenditure that is typically required for performance critical EDA workloads EDA Overview EDA workloads comprise workflow s and a supporting set of software tools that enable the efficient design of microelectronics and in particular semiconductor integrated circuits (ICs) Semiconductor design and verification relies on a set of commercial or open source tools collectively referred to as EDA softw are which expedites and reduces time to silicon tape out and fabrication EDA is a highly iterative engineering process that can take from months and in some cases years to produce a single integrated circuit ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 2 The increasing complexity of integrated circuits has resulted in a n increased use of preconfigured or semi customized hardware components collectively known as intellectual property (IP) cores These cores (provided by IP developers as generic gate level netlists ) are either designed inhouse by a semiconductor company or purchased from a third party IP vender IP cores themselves requires EDA workflows for design and verification and to characteriz e performance for specific IC fabrication technologies The se IP cores are used in co mbination with ICspecific custo mdesigned components to create a complete IC that often includes a complex system onchip (SoC) making use of one of more embedded CPUs standard peripherals I/O and custom analog and/or digital components The complet e IC itself with all its IP cores and custom components then requires large amounts of EDA processing for full chip verification —including modeling (that is simulat ing) all of the components within the chip This modeling which includes HDL source level validation physical synthesis and initial verification (for example using techniques such as formal verification) is known as the front end design The physical implementation which includes floor planning place and route timing analysis design rulecheck (DRC) and final verification is known as the back end design When the back end design is complete a file is produced in GDSII format The production of this file is known for historical reasons as tapeout Wh en completed the file is sent to a fabrication 
facility (a foundry ) which may or may not be operated by the semiconductor company where a silicon wafer is man ufactured This wafer containing perhaps thousands of individual ICs is then inspected cut into dies that are themselves tested packaged into chips that are tested again and assembled onto a board or other system through highly automated manufacturing processes All of these steps in the semiconductor and electronics supply chain can benefit from the scalability of cloud Benefits of the AWS Cloud Before discussing the specific s of moving EDA workloads to AWS it is worth noting the benefits of cloud computing on the AWS Cloud Improved Productivity Organizations that move to the cloud can see a dramatic improvement in development productivity and time to market Your organization can achieve this by scaling out your compute needs to meet the demands of the job s waiting to be processed AWS uses per ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 3 second billing for our compute resources allowing you to optimize cost by only paying for w hat you use down to the second By scaling horizontally you can run more compute servers (that is Amazon Elastic Compute Cloud [Amazon EC2 ] instances) for a shorter period of time and pay the same amount as if you were running fewer servers for a longer period of time For example because the number of compute hours consumed are the same you could complete a 48 hour design regression in just two hours by dynamically growing your cluster by 24X or more in order to run many thousands of pending jobs in parallel These extreme levels of parallelism are commonplace on AWS across a wide variety of industries and performance critical use cases High Availability and Durability Amazon EC2 is hosted in multiple locations worldwide These locations comprise regions and Availability Zones (AZs) Each AWS R egion is a separate geographic area around the wo rld such as Oregon Virginia Ireland and Singapore Each AWS Region where Amazon EC2 runs is designed to be completely isolated from the other regions This design achieves the greatest possible fault tolerance and stability Resources are not replicate d across regions unless you specifically configure your services to do so Within e ach geographic region AWS has multiple isolated locations known as Availability Zones Amazon EC2 provides you the ability to place resources such as EC2 instances and d ata in multiple locations using these Availability Zones Each Availability Zone is isolated but the Availability Zones in a region are connected through low latency links By taking advantage of both multiple regions and multiple Availability Zones you can protect against failures and ensure you have enough capacity to run even your most compute intensive workflows Additionally this large global footprint enables you to position computing resources near your IC design engineers in situations where low latency performance is important For more information refer to AWS Global Infrastructure Matching Compute Resources to Requirements AWS offers many different configurations of hardware called instance families in order to enable customers to match their compute needs with those of their jobs Because of this and the on demand nature of the clo ud you can get the exact systems you need for the exact job you need to perform for only the time you need it ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 4 Amazon EC2 instances come in many different sizes and configurations These configurations 
are built to support jobs that require both large and small memory footprints high core counts of the latest generation processors and storage requirements from high IOPS to high throughput By right sizing the instance to the unit of work it is best suited for you can achieve high er EDA performance at lo wer overall cost You no longer need to purchase EDA cluster hardware that is entirely configured to meet the demands of just a few of your most demanding jobs Instead you can choose servers launch entire clusters of servers and scale these clusters up and down uniquely optimiz ing each cluster for specific applications and for specific stages of chip development For example consider a situation where you ’re performing gate level simulations for a period of jus t a few weeks such as during the development of a critical IP core In this example y ou might need to have a cluster of 100 machines (representing over 2 000 CPU cores) with a specific memory tocore ratio and a specific storage configuration With AWS you can deploy and run th is cluster dedicated only for this task for only as long as the simulations require and then terminate the cluster when that stage of your project is complete Now consider another situation in which you have multiple semicondu ctor design teams working in different geographic regions each using their own locally installed EDA IT infrastructure This geographic diversity of engineering teams has productivity benefits for modern chip design but it can create challenges in managi ng large scale EDA infrastructure (for example to efficiently utilize globally licensed EDA software ) By using AWS to augment or replace these geographically separated IT resources you can pool all of your global EDA licenses in a smaller number of locations using scalable on demand clusters on AWS As a result you can more rapidly complete critical batch workloads such as static timing analysis DRC and physical verification Accelerated Upgrade Cycle Another important reason to move EDA workloads to the cloud is to gain access to the latest processor storage and network technologies In a typical on premise s EDA installation you must select configure procure and deploy servers and storage d evices with the assumption that they remain in service for multiple years Depending on the selected processor generation and time ofpurchase this means that performance critical production EDA workloads might be running on hardware devices that are already multiple years and multiple processor generations out of date When using AWS you have the opportunity to select and deploy the latest processor generations ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 5 within minutes and configure your EDA clusters to meet the unique needs of each application in your EDA workflow Paths for Migrating EDA Workflows to AWS When you begin the migration of EDA workflows to AWS you will find there are many parallels with managing traditional EDA deployments across multiple data centers Larger organizations in the semiconductor industry typically have multiple data centers that are geographically segregated because of the distributed nature of their design teams These organizations typically choose specific workloads to run in specific locations or replicate and synchronize data to allow for multiple sites to take the load of large scale global EDA workflows If your organization uses this approach you need to consider that the specifics around topics such as data replication caching and license server 
managem ent depend on many internal and organizational factors Most of the same approaches and design decisions related to multiple data centers also apply to the cloud With AWS you can build one or more virtual data centers that mirror existing EDA data center designs The foundational technologies that enable things like compute resources storage servers and user workstations are available with just a few keystrokes However the real power of using the AWS Cloud for EDA workloads comes from the dynamic capa bilities and enormous scale provided by AWS Data Access and Transfer When you first consider running workloads in the cloud you might envision a bursting scenario where cloud resources are set up as an augmentation to your existing on premises compute cl uster Although you can use this model successfully data movement presents a significant challenge when building an architecture to support bursting in a seamless way Your organization might see the most benefit if you consider bursting on a project byproject basis and choose to run entire workflows on AWS thereby freeing up existing on premises resources to handle other tasks By approaching cloud resources this way you can use much simpler data transfer mechanisms because you are not trying to sync d ata between AWS and your data centers Consider what Data to Move to Amazon S3 Prior to moving your EDA tools to AWS consider the process es and methods that will be in place as you move from initial experiments to full production For example ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 6 consider what data will be needed for an initial performance test or for a first workflow proof of concept (POC) Data is gravi ty and moving only the limited amount of data needed to run your EDA tools to an Amazon Simple Storage Service (Am azon S3) bucket allows for flexibly and agility when building and iterating your architecture on AWS There are several benefits to storing data in Amazon S3; for an EDA POC using Amazon S3 allow s you to iterate quickly as the S3 transfer speed to an EC2 instance is up to 25 Gbps With your data stored in an S3 bucket you can more quickly experiment with different EC2 instance types and also experiment with different working storage options such as creating and tuning temporary shared file systems Deciding what data to transfer is dependent on the tools or designs you are planning to use for the POC We encourage customers to start with a relatively small amount of POC data ; for example only the data required to run a single simulation job Doing so allows you to q uickly gain experience with AWS and build an understanding of how to build production ready architecture on AWS while in the process of running an initial EDA POC workload Dependencies Semiconductor design environments often have c omplex dependencies that can hinder the process of moving workflows to AWS We can work with you to build an initial proof of concept or even a complex architecture However it is the designer ’s or tool engineer’s responsibility to unwind any legacy on premises data dependencies The initial POC process require s effort to determine which dependencies such as shared libraries need to be moved along with project data There are tools available that help you bui ld a list of dependencies and some of these tools yield a file manifest that expedite s the process of moving data to AWS For example one tool is Ellexus Container Checker which can be found on the AWS Marketplace Dependencies can include authentication 
methods ( for example NIS) shared file systems cross organization collaboration and globally distributed designs (Identifying and managing such dependencies is not unique to cloud migration; semiconductor design teams face similar challenges in any distributed EDA environment) Another approach may be to launch a net new semiconductor project on AWS which should significantly reduce the number of legacy dependencies ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 7 Suggested EDA T ools for Initial Proof of Concept (POC) An HDL compile and s imulation workflow may be the fastest approach to launching an EDA POC on AWS or creating a production EDA environment HDL files are typically not large and the ability to use an on premises license server (via VPN) reduces the additional effort of moving your licensing environment to AWS HDL compile and simulation workflows are representative of other EDA workloads including their need for shared file systems and some form of job scheduling Cloud Optimized Traditional Architecture On AWS compute and storage resources are available on demand allowing you to launch on what you need and when you need it This enables a different approach to architecting your semiconductor design environment Rather than having one large cluster where multiple projects are running you can use AWS to launc h multiple clusters Because you can configure compute resources to increase or decrease on demand you can build clusters that are specific to different parts of the workflow or even specific projects This allows for many benefits including project based cost allocation right size compute and storage and environment isolation Figure 1: Workload specific EDA clusters on AWS ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 8 As seen in Figure 1 moving to AWS allows you to launch a separate set of resources for each of you r EDA work load s (for example a cluster) This multi cluster approach can also be u sed for global and cross organization al collaboration The multi cluster approach can be used for example to dedicate and manage specific cloud resources for specific projects encouraging organizations to use only the resources required for their project Job Scheduler Integration The EDA workflow that you build on AWS can be a similar environment to the one you have in your on premises data center Many if not all of the same EDA tools and applications running in your data center as well as orchestration software can also be run on AWS Job schedulers such as IBM Platform LSF Adaptive PBS Pro and Univa Grid Engine (or their open source alternatives) are typically used in the EDA industry to manage compute resources optimize license usage and coordinate and prioritize jobs When you migrate to AWS you may choose to use these existing schedulers essentially unchanged to minimize the impact on your end user workflows and processes Most of these job schedulers already have s ome form of native integration with AWS allowing you to use the master node to automatically launch cloud resources when there are jobs pending in the queue You should refer to the documentation of your specific job management tool for the steps to autom ate resource allocation and management on AWS Building an EDA Architecture on AWS Building out your production ready EDA workflow on AWS requires an end toend examination of you r current environment This examination begin s with the operating system you are using for running your EDA tools as well as your job scheduling and 
user management environments AWS allows for a mix of architectures when moving semiconduct or design workloads and you can leverage s ome combination of the following two approaches : • Build an architecture similar to a traditional cluster using traditional job scheduling software but ensuring that a cloud native approach is used • Use more cloud native methods such as AWS Batch which uses containerization o f your applications Where needed we will make the distinction when using AWS Batch can be advantageous for example when running massively parallel parameter sweeps ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 9 Hypervisors: Nitro and Xen Amazon EC2 instances use a hypervisor to divide resources on the server so that each customer has separate CPU memory and storage resources for just that customer’s instance We do not use the hypervisor to share resources between instances except for the T* family On previous generation instance types for ex ample the C4 and R4 families EC2 instances are virtualized using the Xen hypervisor In current generation instances for example C5 R5 and Z1d we are using a specialized piece of hardware and a highly custom ized hypervisor based on KVM This new hyper visor system is called Nitro At the time of this writing these are the Nitro based instances: Z1d C5 C5d M5 M5d R5 R5d Launching Nitro based instances require s that specific drivers for networking and storage be installed and enabled before the in stance can be launched We provide the details for this configuration in the next section AMI and Operating System AWS has built in support for numerous operati ng systems (OSs) For EDA users CentOS Red Hat Enterprise Linux and Amazon Linux 2 are used more than other operating systems The operating system and the customizations that you have made in your on premises environment are the baseline for buildi ng out your EDA architecture on AWS Before you can launch an EC2 ins tance you must decide wh ich Amazon Machine Image (AMI) to use An AMI contains the OS any required OS and driver customizations and may also include the application software For EDA o ne approach is to launch an instance from an existing AMI customize the instance after launch and then save this updated configuration as a custom AMI Instances launched from this new custom AMI include the customizations that you made when you created the AMI ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 10 Figure 2: Use Amazon provided AMI to build a Customized AMI Figure 2 illustrate s the process of launching an instance with an AMI You can select the AMI from the AWS Console or from the A WS Marketplace and then customize that instance with your EDA tools and environment After that you can use the customized instance to create a new customized AMI that you can then use to launch your entire EDA environment on AWS Note also that the cus tomized AMI that you create using this process can be further customized For example you can customize the AMI to add additional application software load additional libraries or apply patches each time the customized AMI is launched onto an EC2 insta nce As of this writing we recommend these OS levels for EDA tools (more detail on OS versions is provided in following sections) : • Amazon Linux and Amazon Linux 2 ( verify certification with EDA tool vendor s) • CentOS 74 or 75 • Red Hat Enterprise Linux 74 or 75 These OS levels have the necessary drivers already included to support the current instance ty pes which include Ni tro 
based instances If you are not using one of these levels you must perform extra steps to take advantage of the features of our current instances Specifi cally you must build and enable enhanced networkin g which relies on ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 11 the elastic network adaptor (ENA) drivers See Network and Optimizing EDA Tools on AWS for m ore detail ed information on ENA drivers and AMI drivers If you use an instance with Nitro (Z1d C 5 C5d M5 M5d R 5 R5d ) you must use an AMI that has the AWS ENA driver built and en abled and the NVMe drivers installed At this time a Nitro based instance does not launch unless you have these drivers These OS levels include the required drivers : • CentOS 74 or later • Red Hat Enterprise Linux 74 or later • Amazon Linux or Amazon L inux 2 (current versions) To verify that you can launch your AMI on a Nitro based instance first launch the AMI on a Xen based instance type and then run the c5_m5_checks_scriptsh script found on the awslabs GitHub repo at awslabs/aws support tools/EC2/C5M5InstanceChecks/ The script analyze s your AMI and determine s if it can run on a Nitro based instance If it cannot it display s recommended changes You can also import your own on premises image to use for your AMI This process includes extra steps but may result in time savings Before importing an on premises OS image you first require a VM image for y our OS AWS supports certain VM formats (for example Linux VMs that use VMware ESX ) that must be uploaded to an S3 bucke t and subsequently converted into an AMI Detailed information and instructions can be found at https://awsamazoncom/ec2/vm import/ The same operati ng system requirements mentioned above are also applicable to import ed images (that is you shou ld use CentOS/RHEL 74 or 75 Amazon Linux or Amazon Linux 2) Compute Although AWS has many different types and sizes of instances the instance types in the compute optimized and memory optimized categories are typically best suited for EDA workloads When running EDA software on AWS you should choose instances that feature the lat est generations of Intel Xeon processors using a few different configurations to meet the needs of each application in your overall workflow ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 12 The compute optimized instance family features instances that have the highest clock frequencies available on AWS and typically enough memory to run some memory intensive workloads Typical EDA use cases for compute optimized instance types: • Simulations • Synthesis • Formal verification • Regression tests Z1d for EDA Tools AWS has recently announced a powerful new insta nce type that is well optimized for EDA applications The faster clock speed on the Z1d instance with up to 4 GHz sustained Turbo performance allows for EDA license optimization while achieving faster time to results The Z1d uses an AWS specific Intel Xeon Platinum 8000 series (Skylake) processor and is the fastest AWS instance type The following list summarizes the features of the Z1d instance: • Sustained all core frequency of up to 40 GHz • Six different instance sizes with u p to 24 cores (48 threads) per instance • Total memory of 384 GiB • Memory to core ratio of 16 GiB RAM per core • Includes local Instance Store NVMe storage (as much as 18 TiB) • Optimized for EDA and other high performance worklo ads Additional Compute Optimized Instances C5 C5d C4 In addition to the Z 1d t he C5 instance feature s up to 36 cores (72 
threads) and up to 144 GiB of RAM The processor used in the C5 is the same as the Z1d the Intel Xeon Platinum 8000 series (Skylake) but also includes a base clock speed of 30 GHz and the ability to turbo boost up to 35 GHz The C5d instance is the same configuration as the C5 but offers as much as 18 TiB of local NVMe SSD storage ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 13 Previous generation C4 ins tances are also commonly used by EDA customers and still remain a suitable option for certain workloads such as those that are not memory intensive Memory Optimized Instances Z1d R5 R5d R4 X1 X1e The Z1d instance is not only compute optimized but m emory optimized as well including 384 GiB of total memory The Z1d has the highest clock frequency of any instance and with the except ion of our X1 and X1e instances is equal to the most memory per core (16 GiB/core) If you require larger amounts of memory than what is available on the Z1d consider another memory optimized instance such as the R5 R5d R4 X1 or X1e Typical EDA use cases for memory optimized instance types: • Place and route • Static timing analysis • Physical verification • Batch mode RTL simulation (multithread optimized tools ) The R5 and R5d have the same processor as the Z1d and C5 the Intel Xeon Platinum 8000 series (Skylake) With the largest R5 and R5d instance types having up to 768 GiB memory E DA workloads that could previously only run on the X1 or X1e can now run on the R5 and R5d instances These recently released instances are serving as a drop in replacement for the R4 instance for both place and route as well as batch mode RTL simulatio n The R416xlarge instance is viable option with a high core count (32) and 15 GiB/core ratio For this reason w e see a large number of customers using the R416xlarge instance type The X1 and X1e instance types can also be used for memory intensive wo rkloads ; however testing of EDA tools by Amazon internal silicon teams has indicate d that most EDA tools will run well on the Z1d R4 R5 or R5d instances The need for the amount of memory provided on the X1 (1952 GiB) and X1d (3904 GiB) has been relatively infrequent for semiconductor design Hyper Threading Amazon EC2 instances support Intel Hyper Threading Technology (HT Technology) which enables multiple threads to run concurrently on a single Intel Xeon CPU core ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 14 Each thread is repr esented as a virtual CPU (vCPU) on the instance An instance has a default number of CPU cores which varies according to instance type Each vCPU is a hyperthread of an Intel Xeon CPU core except for T2 instances You can specify the following CPU option s to optimize your instance for semiconductor design workloads: • Number of CPU cores : You can customize the number of CPU cores for the instance This customization may optimize the licensing costs of your software with an instance that has sufficient amoun ts of RAM for memory intensive workloads but fewer CPU cores • Threads per core : You can disable Intel Hyper Threading Technology by specifying a single thread per CPU core This scenario applies to certain workloads such as high performance computing (HPC) workloads You can specify these CPU options during instance launch (curren tly on support through the AWS Command Line Interface [ AWS CLI] an AWS software development kit [ SDK ] or the Am azon EC2 API only) There is no additional or reduced charge for specifying CPU options You are charged the same amount as 
instances that are launched with default CPU options Refer to Optimizing CPU Options in the Amazon Elastic Compute Cloud User Guide for Linux Instances for m ore details and rules for specifying CPU options Divide the vCPU number by 2 to find the number of physical cores on the instance You can disable HT Technology if you determine that it has a negative impact on your application ’s performance See Optimizing EDA Tools on AWS for details on disabling Hyper Threading Table 1 lists the instance types that are typically used for EDA tools Table 1: Instance specifications suitable for EDA workloads Instance Name *Max Core Count CPU Clock Frequency Max Total RAM in GiB Memory to core ratio GiB / core Local NVMe Z1d 24 40 GHz 384 16 Yes R5 / R5d 48 Up to 31 GHz 768 16 Yes on R5d R4 32 23 GHz 488 1525 M5 / M5d 48 Up to 31 GHz 384 8 Yes on M5d C5 / C5d 36 Up to 35 GHz 144 4 Yes on C5d ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 15 Instance Name *Max Core Count CPU Clock Frequency Max Total RAM in GiB Memory to core ratio GiB / core Local NVMe X1 64 23 GHz 1952 305 Yes X1e 64 23 GHz 3904 61 Yes C4 18 29 GHz boost to 35 60 333 *NOTE: AWS uses vCPU (which is an Intel Hyper Thread) to denote processors for this table we are using cores Network Amazon e nhanced networking technology enables instances to communicate at up to 25 Gbps for current generation instances and up to 10 Gbps for previous generation instances In addition enhanced networkin g reduces latency and network jitter Enhanced networking is enabled by default on these operating system levels : ▪ Amazon Linux ▪ Amazon Linux 2 ▪ CentOS 74 and 75 ▪ Red Hat Enterprise Linux 74 and 75 If you have an older version of Cent OS or R HEL you can enable enhanced networking by installing the network module and updat ing the enhanced network adapter ( ENA ) support attribute for the instance For more information about enhanced networking including build and install instructions refer to the Enhanced Networking on Linux page in the Amazon Elastic Compute Cloud User Guide for Linux Instances Storage For EDA workloads running at scale on any infrastructure storage can quickly become the bottleneck for pushing jobs through the queue Traditional centralized filers serving network file systems ( NFS ) are commonly purchased from hardware vendors at significant costs in support of high EDA throughout However these centralized filers can quickly become a bottleneck for EDA resulting in increased job run times and correspondingly higher EDA license cost s Planned or unexpected increases in EDA data and the need to access that data across a fast growing EDA cluster means that the filers eventually run out of storage space or become bandwidth constrained by either the network or storage tier ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 16 EDA a pplica tions can take advantage of the wide array of storage options available on the AWS resulting in reduced run times for large batch workloads Achieving these benefits may require some amount of EDA workflow rearchitecting but the benefits of making these optimizations can be numerous Types of Storage on AWS Before discussing the differ ent options for deploying EDA storage it is important to understand the different types of storage services available on AWS Amazon EBS Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS cloud EBS volumes are attached to instances over a high bandwidth network 
fabric and appear as local block storage that can be formatted with a file system on the instance itself Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure offering high availability and durability Amazon EBS volumes offer the consistent a nd low latency performance required to run semiconductor workloads When selecting your instance type you should select an instance that is Amazon EBS optimized by default An Amazon EBS optimiz ed instance provides dedicated throughput to Amazon EBS whic h is isolated from any other network traffic and an optimized configuration stack to provide optimal Amazon EBS I/O performance If you choose an instance that is not Amazon EBS optimized you can enable Amazon EBS optimization by using ebsoptimized with the modifyinstanceattribute parameter in the AWS CLI but additional charges may apply (cost is include d with instances where Amazon EBS is optimiz ed by default) Amazon EBS is the storage that backs all modern Amazon EC2 instances (with a few exceptions) and is the foundat ion for creating high speed file systems on AWS With Amazon EBS it is possible to achieve up to 80000 IOPS and 1750 MB/s from a single Amazon EC2 instance It is important to choose the correct EBS volume types when building your EDA architecture on AWS Table 2 shows the EBS volumes types that you should consider ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 17 Table 2: EBS Volume Types io1 gp2* st1 sc1 Volume Type Provisioned IOPS SSD General Purpose SSD Throughput Optimized HDD Cold HDD Volume Size 4 GB 16 TB 1 GB 16 TB 500 GB 16 TB 500 GB 16 TB Max IOPS**/Volume 32000 10000 500 250 Max Throughput/Volume 500 MB/s 160 MB/s 500 MB/s 250 MB/s *Default volume type **io1/gp2 based on 16K I/O size st1/sc1 based on 1 MB I/O size When choosing your EBS volume types consider the performance characteristics of each EBS volume This is particularly important when building a NFS server or another file system solutions Achieving the maximum capable performance of an EBS volume depend s on the size of the volume Additionally the gp2 st1 and sc1 volume types use a burst credit system and this should be taken in to consideration as well Each AWS EC2 instance type has a throughput and IOPS limit For example the Z1d12xlarge has EBS limits of 175 GB/s and 80000 IOPS (For a c hart that shows the Amazon EBS throughput expected for each instance type refer to Instance Types that Support EBS Optimization in the Amazo n Elastic Compute Cloud User Guide for Linux Instances ) To achieve these speeds you must stripe multiple EBS volumes together as each volume has its own throughput and IPOS limit Refer to Amazon EBS Volume Types in the Amazon Elastic Compute Cloud User Guide for Linux Instances for detailed information about throughput IOPS and burst credits Enhancing Scalability with Dynamic EBS Volumes Semiconductor design has a long history of over provisioning hardware to meet the demands of backend workloads that may not be run for months or years after the customer specifications are received On AWS you provision only the resources you need when you need them For the typic al on premises EDA cluster IT teams are accustomed to purchasing large arrays of network attached storage even though their initial needs are relatively small ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 18 A key feature of EBS storage is elastic volumes ( available on all current generation EBS volu mes attached to current 
generation EC2 instances ) This feature allows you to provision a volume that meets your application requirements today and as your requirements change allows you to increase the volume size adjust performance or change the volu me type while the volume is in use You can continue to use your application while the change takes effect An on premises installation normally require s manual intervention to adjust storage configurations Leveraging EBS elastic volumes and other AWS ser vices you can automate the process of resi zing your EBS volumes Figure 3 shows the automated process of increasing the volume size using Amazon CloudWatch (metrics and monitoring service and AWS Lambda (an event driven serverless compute service ) The volume increase event is trigger ed (eg usage threshold) using a CloudWatch alarm and a Lamba function T he resulting increase is automatically detected by the operat ing system and a subsequent file system grow operation resize s the file system Figure 3: Lifecycle for automatically resizing an EBS volume Instance Storage For use cases where the performance of Amazon EBS is not sufficient on a single instance Amazon EC2 instances with Instance Store are available Instance Store is block level storage that is physically attached to the instance As the storage is directly attached to the instance it can provide signi ficantly higher throughput and IOPS than is ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 19 available through network based storage similar to Amazon EBS However because the storage is locally attached to the instance data on the Instance Store does not persist when you stop or terminate the instance Additionally hardware failures on the instance would likely result in data loss For these reasons i nstance Store is recommended for temporary scratch space or for data replicated off of the instance ( for example Amaz on S3) You can increase durability by choosing an instance with multiple NVMe devices and create a RAID set with one or more parity devices The I3 instance family and the recently announced Z1d C5d M5d and R5d instances are wellsuited for EDA workloa ds requiring a significant amount of fast local storage such as scratch data These instances use NVMe based storage devices and are designed for the highest possible IOPS The Z1d and C5d instances each have up to 18 TiB of local instance store and the R5d and M5d instances each have up to 36 TiB of local instance store The i316xlarge can deliver 33 million random IOPS at 4 KB block size and up to 16 GB/s of sequential disk throughput This performance m akes the i316xlarge well suited for serving file systems for scratch or temporary data over NFS Table 3 shows the instance types typically found in the semiconductor space that have instance store Tab le 3: Instances typically found in the EDA space with Instance Store Instance Name Max Raw Size TiB Number and size of NVMe SSD (GiB) I3 152 TiB 8 x 1 920 Z1d 18 TiB 2 x 900 R5d 36 TiB 4 x 900 M5d 36 TiB 4 x 900 C5d 18 TiB 2 x 900 X1 3840 TiB 2 x 920 X1e 3840 TiB 2 x 1920 The data on NVMe instance storage is encrypted using an XTS AES 256 block cipher implemented in a hardware module on the instance The encryption keys are generated using the hardware module and are unique to each NVMe instance storage device All ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 20 encryption keys are dest royed when the instance is stopped or terminated and cannot be recovered You cannot disable this encryption and you cannot 
provide your own encryption key1 NVMe on EC2 Instances Amazon EC2 instances based on the Nitro hypervisor feature local NVMe SSD st orage and also expose Amazon Elastic Block Store (Amazon EBS ) volumes as NVMe block devices This is why certain operating system levels are required for Nitro based instances In other words only an AMI that has the required NVMe drives installed allows you to launch a Nitro based instance See AMI and Operating System for instructions on verify ing that your AMI will run on a Nitro based instance If you use EBS volumes on Nitro based instances configure two kernel settings to ensure optimal performance Refer to the NVMe EBS Volumes page of the Amazon Elastic Compute Cloud User Guide for Linux Instances for more information Amazon Elastic File System ( Amazon EFS) You can opt for building your own NFS file server on AWS (discussed in the “Traditional NFS File System” section) or you can launch a shared NFS file system using Amazon Elastic File System ( Amazon EFS) Amazon EFS provides simple scalable NFS based file s torage for use with Amazon EC2 instances in the AWS Cloud A fully managed petabyte scale file system Amazon EFS provides a simple interface that enables you to create and configure file systems quickly and easily With Amazon EFS storage capacity is elastic increasing and decreasing automatically as you add and remove files so your applications have the storage they need when they need it Amazon EFS is designed for high availability and durability and can deli ver high throughput when deployed at scale The data stored on an EFS file system is redundantly stored across multiple Availability Zones In addition a n EFS file system can be accessed concurrently from all Availability Zones in the region where it is l ocated However because all Availability Zones must acknowledge file system actions ( that is create read update or delete) latency can be higher than traditional shared file systems that do not span multiple Availability Zones Because of this it is important to test your workload s at scale to ensure EF S meets your performance requirements Amazon S3 Amazon Simple Storage Service (Amazon S3) is object storage with a simple web service interface to store and retrieve any amount of data from anywhere o n the web It is designed to deliver 99999999999% durability and scale to handle millions of ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 21 concurrent requests and grow past trillions of objects worldwide Amazon S3 offerings include following range of storage classes • Amazon S3 Standard for general purpose storage of frequently accessed data • Amazon S3 Standard – IA (for i nfrequent access ) for long lived but less frequently accessed data • Amazon Glacier for long term data archiv al Amazon S3 also offers configurable lifecycle policie s for managing your objects so that they are stored cost effectively throughout their lifecycle Amazon S3 is accessed via HTTP REST requests typically through the AWS software development kits (SDKs) or the AWS Command Line Interface (AWS CLI) You can us e the AWS CLI to copy data to and from Amazon S3 in the same way that you copy data to other remote file system s using ls cp rm and sync command line operations For EDA workflows we recommend that you consider Amazon S3 for your primary data storage solution to manag e data uploads and downloads and to provide high data durability For example y ou can quickly and efficiently cop y data from Amazon S3 to Amazon EC2 instances and Amazon EBS 
However, we recommend that you do not use Amazon S3 to directly access (read/write) individual files during the runtime of a performance-critical application. The best architectures for high-performance, data-intensive computing available on AWS consist of Amazon S3, Amazon EC2, Amazon EBS, and Amazon EFS, combined to balance performance, durability, scalability, and cost for each specific application.

Traditional NFS File Systems

For EDA workflow migration, the first and most popular option for migrating storage to AWS is to build systems similar to your on-premises environment. This option enables you to migrate applications quickly without having to rearchitect your applications or workflow. With AWS, it's simple to create a storage server by launching an Amazon EC2 instance with adequate bandwidth and Amazon EBS throughput, attaching the appropriate EBS volumes, and sharing the file system to your compute nodes using NFS.

When building storage systems for the immense scale that EDA can require for large-scale regression and verification tests, there are a number of approaches you can take to ensure your storage systems are able to handle the throughput. The largest Amazon EC2 instances support 25 Gbps of network bandwidth and up to 80,000 IOPS and 1,750 MB/s to Amazon EBS. If the data is temporary or scratch data, you can use an instance with NVMe volumes to optimize the backend storage. For example, you can use the i3.16xlarge with 8 NVMe volumes, which is capable of up to 16 GB/s and 3 million IOPS for local access. The 25 Gbps network connection to the i3.16xlarge then becomes the bottleneck, not the backend storage. This setup results in an NFS server that is capable of roughly 2.5 GB/s.

For EDA workloads that require more performance in aggregate than can be provided by a single instance, you can build multiple NFS servers that are delegated to specific mount points. Typically, this means that you build servers for shared scratch, tools directories, and individual projects. By building servers in this way, you can right-size the server and the storage allocated to it according to the demands of a specific workload. When projects are finished, you can archive the data to a low-cost, long-term storage solution like Amazon Glacier. Then you can delete the storage server, thereby saving additional cost.

When building the storage servers, you have many options. Linux software RAID (mdadm) is often a popular choice for its ubiquity and stability. However, in recent years ZFS on Linux has grown in popularity, and customers in the EDA space use it for the data protection and expansion features that it provides. If you use ZFS, it's relatively simple to build a solution that pools a group of EBS volumes together to ensure higher performance of the volume, set up automatic hourly snapshots to provide for point-in-time rollbacks, and replicate data to backup servers in other Availability Zones to provide for fault tolerance.

Although out of the scope of this document, if you want more automated and managed solutions, consider AWS partner storage solutions. Examples of partners that provide solutions for running high-performance storage on AWS include SoftNAS, WekaIO, and NetApp.
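As a rough sketch of the ZFS approach described above, the following commands stripe several EBS volumes into a single pool, enable compression, and take a snapshot. The device names and pool layout are assumptions; a production deployment would typically add redundancy (for example, raidz) and schedule snapshots and replication automatically.

# Pool four EBS volumes into a single striped ZFS pool
$ sudo zpool create tools-pool /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# Enable lightweight compression and create a file system to export
$ sudo zfs set compression=lz4 tools-pool
$ sudo zfs create tools-pool/tools

# Take a point-in-time snapshot (normally driven by cron or a snapshot tool)
$ sudo zfs snapshot tools-pool/tools@$(date +%Y%m%d-%H%M)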
Cloud-Native Storage Approaches

Because of its low cost and strong scaling behaviors, Amazon S3 is well suited for EDA workflows because you can adapt the workflows to reduce or eliminate the need for traditional shared storage systems. Cloud-optimized EDA workflows use a combination of Amazon EBS storage and Amazon S3 to achieve extreme scalability at very low cost, without being bottlenecked by traditional storage systems.

To take advantage of a solution like this, your EDA organization and your supporting IT teams might need to untangle many years of legacy tools, file system sprawl, and large numbers of symbolic links in order to understand what data you need for specific projects (or a job deck) and to prepackage the data along with the job that requires it. The typical first step in this approach is to separate the static data (for example, application binaries, compilers, and so on) from dynamically changing data and IP in order to build a front-end workflow that doesn't require any shared file systems. This is an important step for optimized cloud migration and also provides the benefit of increasing the scalability and reliability of legacy on-premises EDA workflows.

By using this less NFS-centric approach to managing EDA storage, operating system images can be regularly updated with static assets so that they're available when the instance is launched. Then, when a job is dispatched to the instance, it can be configured to first download the dynamic data from Amazon S3 to local or Amazon EBS storage before launching the application. When complete, results are uploaded back to Amazon S3 to be aggregated and processed when all jobs are finished. This method of decoupling compute from storage can provide substantial performance and reliability benefits, in particular for front-end RTL batch regressions.

Licensing

Application licensing is required for most EDA workloads, both on premises and on AWS. From a technical standpoint, managing and accessing licenses is unchanged when migrating to AWS.

License Server Access

On AWS, each Amazon EC2 instance launched is provided with a unique hostname and hardware (MAC) address using Amazon elastic network interfaces that cannot be cloned or spoofed. Therefore, traditional license server technologies (such as Flexera) work natively on AWS without any modification. The inability to clone license servers, which AWS prevents by not allowing the duplication of MAC addresses, also provides EDA software vendors with increased confidence that EDA software can be deployed and used in a secure manner.

Because of the connectivity options available, which include the use of VPNs and AWS Direct Connect, you can run your license servers on AWS using an Amazon EC2 instance or within your own data centers. By allowing connectivity through a VPN or AWS Direct Connect between cloud resources and on-premises license servers, AWS enables users to seamlessly run workloads in any location without having to split licenses and dedicate them to specific groups of compute resources.

Figure 4: License server deployment scenarios

Licensed applications are sometimes sensitive to network latency and jitter between the execution host and the license server. Although internet-based VPN is often a good choice for connecting to AWS from your corporate data center, network latency over the internet can vary, affecting performance and reliability of some licensed applications. Alternatively, a private dedicated connection from your on-premises network to the nearest AWS Region using AWS Direct Connect can provide a reliable network connection with
consistent latency Improving License Server Reliability License servers are critical components in almost any EDA computing infrastructure A loss of license services can bring engineering work to a halt across the enterprise Hosting licenses in the AWS Cloud can provide improved reliability of license services with the use of a floating elastic network interface (ENI) These ENIs have a fixed immutable MAC address that can be associated with software license keys The implementation of this high availability solution begins with the creation of an ENI that is attached to a license server instance Your license keys are associated with this network interface If a failure is detected on this instance you or your custom automation can detach the ENI and attach it to a standby license server Because the ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 25 ENI maintains its IP and MAC address es network traffic begins flowing to the standby instance as soon as you attach the network interface to the replacement instance This unique capability enables license administrators to provide a level of reliability that can be difficult to achieve using on premises servers in a traditional datacenter This is another exampl e of the benefits of the elastic and programmable nature of the cloud Working with EDA Vendors AWS works closely with thousands of independent software vendors (ISVs) that deliver solutions to customers on AWS using methods that may include software as a service (SaaS ) platform as a service ( PaaS ) customer self managed and bring your own license (BYOL ) models In the semiconductor sector AWS works closely with major vendors of EDA software to help optimize performance scalability cost and applicatio n security AWS can assist ISVs and your organization with deployment best practices as described in this whitepaper EDA vendors that are members of the AWS Partner Network (APN) have access to a variety of tools training and support that are provided directly to the EDA vendor which benefits EDA end customers These Partner Programs are designed to s upport the unique technical and business requirements of APN members by providing them with increased support from AWS including access to AWS partner team members who specialize in design and engineering applications In addition AWS has a growing number of Consulting P artners who can assist EDA vendors and their customers with EDA cloud migration Remote Desktops While the majority of EDA workloads are executed as batch jobs (see Orchestration ) EDA users may at times require direct console access to compute servers or use applications that are graphical in nature For example it might be necessary to view waveforms or step through a simulation to identify and reso lve RTL regression errors o r it might be necessary to view a 2D or 3D graphical representation of results generated during signal integrity analysis Some applications such as printed circuit layout software are inherently interactive and require a high quality low latency user experience There are multiple ways to deploy remote desktops for such applications on AWS You have the option of using open source software such as V irtual Network Computing (VNC) or commercial remote desktop solutions available from AWS partners You can ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 26 also make use of AWS solutions including NICE desktop cloud visualization ( NICE DCV ) and Amazon Work Spaces NICE DCV NICE Desktop Cloud Visualization is a remote 
visualization technology that enables users to securely c onnect to graphic intensive 3D applications hosted on a n Amazon EC2 instance With NICE DCV you can provide high performance graphics processing to remote users by creating secure client sessions This enables your interactive EDA users to use resource intensive applications with relatively low end client computers by using one or more EC2 instances as remote desktop servers including GPU acceleration of graphics rendered in the cloud In a typical NICE DCV scenario for EDA a graphic intensive applicatio n such as a 3D visualization of an electromagnetic field simulation or a complex interactive schematic capture session is hos ted on a high performance EC2 instance that provides a high end GPU fast I/O capabilities and large amounts of memory The N ICE DCV server software is installed and configured on a server (an EC2 instance) and it is used to create a secure session You use a NICE DCV client to remotely connect to the session and use the application hosted on the server The server uses its hard ware to perform the high performance processing required by the hosted application The NICE DCV server software compresses the visual output of the hosted application and streams it back to you as an encrypted pixel stream Your NICE DCV client receives t he compressed pixel stream decrypts it and then outputs it to your local display NICE DCV was specifically designed for high performance technical applications and is an excellent choice for EDA in particular if you are using Red Hat Enterprise Linux or CentOS operating systems on your remote desktop environment NICE DCV also supports modern Linux desktop environments including modern Linux desktops such as Gnome 3 on RHEL 7 NICE DCV uses the latest NVIDIA Grid SDK technologies such as NVIDIA H264 hardware encoding to improve performance and reduce system load NICE DCV also supports lossless quality video compression when the network and processor conditions allow and it automatically adapts the video compression levels based on the network's available bandwidth and latency ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 27 Amazon Workspaces Amazon WorkSpaces is a managed secure cloud desktop service You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of deskto ps to workers across the globe You can pay either monthly or hourly just for the WorkSpaces you launch which helps you save money when compared to traditional desktops and on premises virtual desktop infrastructure (VDI) solutions Amazon WorkSpaces hel ps you eliminate the complexity in managing hardware inventory OS versions and patches and VDI which helps simplify your desktop delivery strategy With Amazon WorkSpaces your users get a fast responsive desktop of their choice that they can access an ywhere anytime from any supported device Amazon WorkSpaces offers a range of CPU memory and solid state storage bundle configurations that can be dynamically modified so you have the right resources for your applications You don’t have to waste time trying to predict how many desktops you need or what configuration those desktops should be helping you reduce costs and eliminate the need to over buy hardware Amazon WorkSpaces is an excellent choice for organizations wanting to centrally manage remote desktop users and applications and for users that can make use of Windows or Amazon Linux 2 for the remote desktop environment User 
Authentication User authentication is covered in more detail in the Security and Governance in the AWS Cloud section but AWS offers several options for connecting with an on premises authentication server migrating users to AWS or archit ecting an entirely new authentication solution Orchestration Orchestration refers to the dynamic management of compute and storage resources in an EDA cluster as well as the management (scheduling and monitoring) of individual jobs being processed in a c omplex workflow for example during RTL regression testing or IP characterization For these and many other typical EDA workflows the efficient use of compute and storage resources —as well as the efficient use of EDA software licenses —depends on having a wellorchestrated well architected batch computing environment ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 28 EDA workload management gains new levels of flexibility in the cloud making resource and job orchestration an important consideration for your workload AWS provides a range of solutions fo r workload orchestration: fully managed services enable you to focus more on job requests and output over provisioning configuring and optimizing the cluster and job scheduler while self managed solutions enable you to configure and maintain cloud native clusters yourself leveraging traditional job schedulers to use on AWS or in hybrid scenarios Describing all possible methods of orchestration for EDA is beyond the scope of this document; however it is important to know that the same orchestration meth ods and job scheduling software used in typical legacy EDA environments can also be used on AWS For example commercial and open source job scheduling software can be migrated to AWS and be enhanced by the addition of Auto Scaling (for dynamic resizing of EDA clusters in response to d emand or other triggers) CloudW atch (for monitoring the compute environment for example CPU utilization and server health) and other AWS services to increase performance and security while reducing costs CfnCluster CfnC luster (cloud formation cluster) is a framework that deploys and maintains high performance computing clusters on Amazon Web Services (AWS) Developed by AWS CfnCluster facilitates both quick start proof of concepts (POCs) and production deployments CfnC luster supports many different types of clustered applications including EDA and can easily be extended to support different frameworks CfnCluster integrates easily with existing job scheduling software and can automatically launch servers in response to queue depths and other triggers CfnCluster is also able to launch shared file systems cluster head nodes license servers and others resources CfnCluster is open source and easily extensible for your unique workflow requirements AWS Batch AWS Bat ch is a fully managed service that enables you to easily run large scale compute workloads on the cloud including EDA jobs without having to worry about resource provisioning or managing schedulers Interact with AWS Batch via the web console AWS CLI o r SDKs AWS Batch is an excellent alternative for managing massively parallel workloads ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 29 EnginFrame EnginFrame is an HPC portal that can be deployed on the cloud or on premise EnginFrame is integrated with a wide range of open source and commercial batch scheduling systems and is a o nestop shop for job submission control and data management All of the preceding options (CfnCluster AWS Batch 
and EnginFrame), as well as partner-provided solutions, are being successfully deployed by EDA users on AWS. Discuss your specific orchestration needs with an AWS technical specialist.

Optimizing EDA Tools on AWS

EDA software tools are critical for modern semiconductor design and verification. Increasing the performance of EDA software, measured both as a function of individual job run times and as the completion time for a complete set of EDA jobs, is important to reduce time-to-results/time-to-tapeout and to optimize EDA license costs.

To this point, we have covered the solution components for your architecture on AWS. Now, in an effort to be more prescriptive, we present specific recommendations and configuration parameters that should help you achieve the expected performance for your EDA tools. Choosing the right Amazon EC2 instance type and the right OS-level optimizations is critical for EDA tools to perform well. This section provides a set of recommendations that are based on actual daily use of EDA software tools on AWS, by AWS customers and by Amazon internal silicon design teams. The recommendations include such factors as instance type and configuration, as well as OS recommendations and other tunings for a representative set of EDA tools. These recommendations have been tested and validated internally at AWS and with EDA customers and vendors.

Amazon EC2 Instance Types

The following table highlights EDA tools and provides corresponding Amazon EC2 instance type recommendations.

Table 4: EDA tools and corresponding instance type

Instance Name   *Max Core Count   CPU Clock Frequency      Max Total RAM in GiB (GiB/core)   Local NVMe   Typical EDA Application
Z1d             24                4.0 GHz                  384 (16)                          Y            Formal verification, RTL simulation (batch and interactive), gate-level simulation
R5 / R5d        48                Up to 3.1 GHz            768 (16)                          Y (R5d)      RTL simulation (multi-threaded)
R4              32                2.3 GHz                  488 (15.25)                       -            RTL simulation (multi-threaded), place and route
M5 / M5d        48                Up to 3.1 GHz            384 (8)                           Y (M5d)      Remote desktop sessions
C5 / C5d        36                Up to 3.5 GHz            144 (4)                           Y (C5d)      RTL simulation (interactive), gate-level simulation
X1              64                2.3 GHz                  1,952 (30.5)                      Y            Place and route, static timing analysis
X1e             64                2.3 GHz                  3,904 (61)                        Y            Place and route, static timing analysis
C4              18                2.9 GHz (boost to 3.5)   60 (3.33)                         -            Formal verification, RTL simulation (interactive)

*NOTE: AWS uses vCPUs (each of which is an Intel Hyper-Thread) to denote processors; for this table we are using physical cores.
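If you want to confirm the physical core count, threads per core, and memory of a candidate instance type before standardizing on it, newer versions of the AWS CLI can report these values directly. The instance type below is only an example.

$ aws ec2 describe-instance-types --instance-types z1d.12xlarge \
    --query "InstanceTypes[].[InstanceType,VCpuInfo.DefaultCores,VCpuInfo.DefaultThreadsPerCore,MemoryInfo.SizeInMiB]" \
    --output table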
Operating System Optimization

After you have chosen the instance types for your EDA tools, you need to customize and optimize your OS to maximize performance.

Use a Current Generation Operating System

If you are running a Nitro-based instance, you need to use certain operating system levels. If you run a Xen-based instance instead, you should still use one of these OS levels for EDA workloads (specifically required for the ENA and NVMe drivers):

• Amazon Linux or Amazon Linux 2
• CentOS 7.4 or 7.5
• Red Hat Enterprise Linux 7.4 or 7.5

Disable Hyper-Threading

On current generation Amazon EC2 instance families (other than the T2 instance family), AWS instances have Intel Hyper-Threading Technology (HT Technology) enabled by default. You can disable HT Technology if you determine that it has a negative impact on your application's performance. You can run this command to get detailed information about each core (physical core and Hyper-Thread):

$ cat /proc/cpuinfo

To view cores and the corresponding online Hyper-Threads, use the lscpu --extended command. For example, consider the z1d.2xlarge, which has 4 cores with 8 total Hyper-Threads. If you run the lscpu --extended command before and after disabling Hyper-Threading, you can see which threads are online and offline:

$ lscpu --extended
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0   0    0      0    0:0:0:0       yes
1   0    0      1    1:1:1:0       yes
2   0    0      2    2:2:2:0       yes
3   0    0      3    3:3:3:0       yes
4   0    0      0    0:0:0:0       yes
5   0    0      1    1:1:1:0       yes
6   0    0      2    2:2:2:0       yes
7   0    0      3    3:3:3:0       yes

$ ./disable_ht.sh

$ lscpu --extended
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE
0   0    0      0    0:0:0:0       yes
1   0    0      1    1:1:1:0       yes
2   0    0      2    2:2:2:0       yes
3   0    0      3    3:3:3:0       yes
4   -    -      -    :::           no
5   -    -      -    :::           no
6   -    -      -    :::           no
7   -    -      -    :::           no

Another way to view the vCPU pairs (that is, Hyper-Threads) of each core is to view the thread_siblings_list for each core. This list shows two numbers that indicate the Hyper-Threads for each core. To view all thread siblings, you can use the following command, or substitute "*" with a CPU number:

$ cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -un
0,4
1,5
2,6
3,7

Disable HT Using the AWS CPU Options Feature

To disable Hyper-Threading using CPU Options, use the AWS CLI with run-instances and the --cpu-options flag. The following is an example with the z1d.12xlarge:

$ aws ec2 run-instances --image-id ami-asdfasdfasdfasdf \
    --instance-type z1d.12xlarge \
    --cpu-options "CoreCount=24,ThreadsPerCore=1" \
    --key-name My_Key_Name

To verify that the CpuOptions were set, use describe-instances:

$ aws ec2 describe-instances --instance-ids i-1234qwer1234qwer
...
    "CpuOptions": {
        "CoreCount": 24,
        "ThreadsPerCore": 1
    },
...

Disable HT on a Running System

You can run the following script on a Linux instance to disable HT Technology while the system is running. This can be set up to run from an init script so that it applies to any instance when you launch the instance. See the following example:

for cpunum in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | \
    sort -un | cut -s -d, -f2)
do
    echo 0 | sudo tee /sys/devices/system/cpu/cpu${cpunum}/online
done

Disable HT Using the Boot File

You can also disable HT Technology by setting the Linux kernel to only initialize the first set of threads, by setting maxcpus in GRUB to half of the vCPU count of the instance. For example, the maxcpus value for a z1d.12xlarge instance is 24 to disable Hyper-Threading:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 maxcpus=24"

Refer to Appendix C – Updating the Linux Kernel Command Line for instructions on updating the kernel command line. Note that disabling HT Technology does not change the workload density per server, because these tools are demanding on DRAM size; reducing the number of threads only helps as the GB per core increases.
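If you prefer to apply this at launch time rather than interactively, the same loop can be supplied as EC2 user data so that each instance takes its sibling threads offline on first boot. This is a minimal sketch and assumes a cloud-init-enabled AMI such as Amazon Linux; user data runs as root, so sudo is not required.

#!/bin/bash
# Example user data: take the second Hyper-Thread of every core offline at boot
for cpunum in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | \
    sort -un | cut -s -d, -f2)
do
    echo 0 > /sys/devices/system/cpu/cpu${cpunum}/online
done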
Change Clocksource to TSC

On previous generation instances that use the Xen hypervisor, consider updating the clocksource to TSC, as the default is the Xen pvclock (which lives in the hypervisor). To avoid communication with the hypervisor and use the CPU clock instead, use tsc as the clocksource. The tsc clocksource is not supported on Nitro instances; the default kvmclock clocksource on these instance types provides similar performance benefits to tsc on previous generation Xen-based instances.

To change the clocksource on a Xen-based instance, run the following command:

$ sudo su -c "echo tsc > /sys/devices/system/cl*/cl*/current_clocksource"

To verify that the clocksource is set to tsc, run the following command:

$ cat /sys/devices/system/cl*/cl*/current_clocksource
tsc

You set the clocksource in the initialization scripts on the instance. You can also verify that the clocksource changed with the dmesg command, as shown below:

$ dmesg | grep clocksource
clocksource: Switched to clocksource tsc

Limiting Deeper C-States (Sleep States)

C-states control the sleep levels that a core may enter when it is inactive. You may want to control C-states to tune your system for latency versus performance. Putting cores to sleep takes time, and although a sleeping core allows more headroom for another core to boost to a higher frequency, it takes time for that sleeping core to wake back up and perform work. To limit deeper C-states, add intel_idle.max_cstate=1 to the kernel command line:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 intel_idle.max_cstate=1"

Refer to Appendix C – Updating the Linux Kernel Command Line for instructions on updating the kernel command line. For more information about Amazon EC2 instance processor states, refer to the Processor State Control for Your EC2 Instance page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.

Enable Turbo Mode (Processor State) on Xen-Based Instances

On current Nitro-based instance types, you cannot change turbo mode, as it is already set to the optimized value for each instance. If you are running on a Xen-based instance that uses an entire socket or multiple sockets (for example, r4.16xlarge, r4.8xlarge, or c4.8xlarge), you can take advantage of the turbo frequency boost, especially if you have disabled HT Technology. Amazon Linux and Amazon Linux 2 have turbo mode enabled by default, but other distributions may not. To ensure that turbo mode is enabled, run the following command:

$ sudo su -c "echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo"

For more information about Amazon EC2 instance processor states, refer to the Processor State Control for Your EC2 Instance page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.

Change to the Optimal Spinlock Setting on Xen-Based Instances

For instances that use the Xen hypervisor (not Nitro), you should update the spinlock setting. Amazon Linux, Amazon Linux 2, and other distributions by default implement a paravirtualized mode of spinlock that is optimized for low-cost preempting virtual machines (VMs). This can be expensive from a performance perspective because it causes the VM to slow down when running multithreaded code with locks. Some EDA tools are not optimized for multi-core use and consequently rely heavily on spinlocks. Accordingly, we recommend that EDA customers disable paravirtualized spinlock on EC2 instances.

To disable the paravirtualized mode of spinlock on a Xen-based instance, add xen_nopvspin=1 to the kernel command line in /boot/grub/grub.conf and restart. The following is an example kernel command line:

kernel /boot/vmlinuz-4.4.41-36.55.amzn1.x86_64 root=LABEL=/ console=tty1 console=ttyS0 selinux=0 xen_nopvspin=1

Refer to Appendix C – Updating the Linux Kernel Command Line for instructions on updating the kernel command line.

Networking

AWS Enhanced Networking

Make sure to use enhanced networking for all instances; it is a requirement for launching the current Nitro-based instances. For more information about enhanced networking, including build and install instructions, refer to the Enhanced Networking on Linux page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.
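One way to confirm that an instance is actually using the ENA driver, rather than assuming it, is to check the active network driver on the instance and the ENA support attribute through the API. The interface name and instance ID below are examples.

$ ethtool -i eth0 | grep ^driver
driver: ena

$ aws ec2 describe-instances --instance-ids i-1234qwer1234qwer \
    --query "Reservations[].Instances[].EnaSupport"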
Cluster Placement Groups

A cluster placement group is a logical grouping of instances within a single Availability Zone. Cluster placement groups provide non-blocking, non-oversubscribed, fully bisectional connectivity. In other words, all instances within the placement group can communicate with all other nodes within the placement group at the full line rate of 10 Gbps per flow and 25 Gbps aggregate, without any slowing due to oversubscription. For more information about placement groups, refer to the Placement Groups page in the Amazon Elastic Compute Cloud User Guide for Linux Instances.

Verify Network Bandwidth

One method to ensure you are configuring ENA correctly is to benchmark the instance-to-instance network performance with iperf3. Refer to Network Throughput Benchmark Linux EC2 for more information.

Storage

Amazon EBS Optimization

Make sure to choose your instance and EBS volumes to suit the storage requirements for your workloads. Each EC2 instance type has an associated EBS throughput limit, and each EBS volume type has limits as well. For example, on the m4.16xlarge instance type, an io1 volume has a maximum throughput of 500 MB/s.

NFS Configuration and Optimization

Prior to setting up an NFS server on AWS, you need to enable Amazon EC2 enhanced networking. We recommend using Amazon Linux 2 for your NFS server AMI. A crucial part of high-performing NFS is the set of mount parameters on the client. For example:

rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2

A typical EFS mount command is shown in the following example:

$ sudo mount -t nfs4 -o \
    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    file-system-id.efs.aws-region.amazonaws.com:/ /efs-mount-point

When building an NFS server on AWS, choose the correct instance size and number of EBS volumes. Within a single family, larger instances typically have more network and Amazon EBS bandwidth available to them. The largest NFS servers on AWS are often built using m4.16xlarge instances with multiple EBS volumes striped together in order to achieve the best possible performance. Refer to Appendix A – Optimizing Storage for more information and diagrams for building an NFS server on AWS.

Kernel Virtual Memory

Typical operating system distributions are not tuned for large machines like those offered by AWS for EDA workloads. As a result, out-of-the-box configurations often have suboptimal settings for kernel network buffers and storage page cache background draining. While the specific numbers may vary by instance size and application mix, the AWS EDA team has found that these kernel configuration settings and values are a good starting point to optimize memory utilization of the instances:

vm.min_free_kbytes=1048576
vm.dirty_background_bytes=107374182
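These values can be applied at runtime with sysctl and persisted in a drop-in configuration file so that they survive reboots. This is a minimal sketch using the values quoted above; the file name is an example.

$ sudo sysctl -w vm.min_free_kbytes=1048576
$ sudo sysctl -w vm.dirty_background_bytes=107374182

# Persist the settings across reboots
$ printf "vm.min_free_kbytes=1048576\nvm.dirty_background_bytes=107374182\n" | \
    sudo tee /etc/sysctl.d/99-eda.conf
$ sudo sysctl --system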
Security and Governance in the AWS Cloud

The cloud offers a wide array of tools and configurations that enable your organization to protect your data and IP in ways that are difficult to achieve with traditional on-premises environments. This section outlines some of the ways you can protect data in the AWS Cloud.

Isolated Environments for Data Protection and Sovereignty

Security groups are similar to firewalls: they ensure that access to specific resources is tightly controlled. Subnets containing compute and storage resources can be isolated so that they do not have any direct access to the internet. Users who need to access the environment must first connect to a bastion host (also referred to as a jump box) through secure protocols like SSH. From there, they can log into interactive desktops or job schedulers as permitted by your organization's security policies.

Secure FTP is often required in isolated environments. Organizations commonly use secure FTP to download tools from vendors, copy completed designs to fabrication facilities, or update IP from suppliers. To do this securely, you can set up an FTP client in an isolated subnet that has limited access to external IP addresses as necessary. Segment this client from the rest of the network and configure strict controls and monitoring to ensure that everything on that server is secure.

User Authentication

When managing users and access to compute nodes, you can adapt the technologies that you use today to work in the same way on AWS. Many organizations already have existing LDAP, Microsoft Active Directory, or NIS services that they use for authentication. Almost all of these services provide replication and functionality to support multiple data centers. With the appropriate network and VPN setup in place, you can manage these systems on AWS using the same methods and configurations as you do for any remote data center.

If your organization wants to run an isolated directory in the cloud, you have a number of options to choose from. If you want a managed solution, AWS Directory Service for Microsoft Active Directory (Standard) is a popular choice.2 AWS Microsoft AD (Standard Edition) is a managed Microsoft Active Directory (AD) that is optimized for small and midsize businesses (SMBs). Other options include running your own LDAP or NIS infrastructure on AWS, as well as more current solutions like FreeIPA.

Network

AWS employs a number of technologies that allow you to isolate components from each other and control access to the network.

Amazon VPC

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and easy access to resources and applications.

You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your FTP and bastion servers that has access to the internet. Then you can place your design and engineering systems in a private subnet with no internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to EC2 instances in each subnet. Additionally, you can create a hardware virtual private network (VPN) connection between your corporate data center and your VPC and leverage the AWS Cloud as an extension of your organization's data center.

Security Groups

Amazon VPC provides advanced security features such as security groups and network access control lists to enable inbound and outbound filtering at the instance level and the subnet level, respectively. A security group acts as a virtual firewall for your instance to control inbound and outbound
traffic When you launch an instance in a VPC you can assign the instance to up to five security groups Network access control lists ( ACLs ) control inbound and outbound traffic for your subnets In mo st cases security groups can meet your needs However you can also use network ACLs if you want an additional layer of security for your VPC For more information refer to the Security page in the Amazon Virtual Private Cloud User Guide You can create a flow log on your Amazon VPC or subnet to capture the traffic that flows to and from the network interfaces in your VPC or subnet You can also create a flow log on an individual network interface Flow logs are published to Amazon CloudWatch Logs ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 40 Data Storage and Transfer AWS o ffers many ways to protect dat a both in transit and at rest Many third party storage vendors also offer additional encryption and security technologies in their own implementations of storage in the AWS Cloud AWS Key Management Service ( KMS ) AWS Key Management Service ( KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data In addition it uses Hardware Security Modules (HSMs) to protect the security of your keys AWS KMS is integrated with other AWS services including Amazon EBS Amazon S3 Amazon Redshift Amazon Elastic Transcoder Amazon WorkMail Amazon Relational Database Service (Amazon RDS) and others to help you protect the dat a you store with these services AWS KMS is also integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs With AWS KMS you can create master keys that can never be exported from the service You use the master keys to encrypt and decrypt data based on policies that you define Amazon EBS Encryption Amazon Elastic Block Store (Amazon EBS ) encryption offers you a simple encryption solution for your EBS volumes requiring you to build maintain and secure your own key management infrastructure When you create an encrypted EBS volume and attach it to a supported instance type the following types of data are encrypted: • Data at rest inside the volume • All data in transit between the volume and the instance • All snapshots c reated from the volume The encryption occurs on the servers that host EC2 instances providing encryption of data in transit from EC2 instances to Amazon EBS storage EC2 Instance Store Encryption The data on NVMe instance storage is encrypted using an XTS AES 256 block cipher implemented in a hardware module on the instance The encryption keys are generated using the hardware module and are unique to each NVMe instance storage device All encryption keys are destroyed when the instance is stopped or termi nated and cannot be ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 41 recovered You cannot disable this encryption and you cannot provide your own encryption key 1 Amazon S3 Encryption When you u se encryption with Amazon S3 Amazon S3 encrypts your data at the object level Amazon S3 writes the data to disks in AWS data centers and decrypts your data when you access it As long as you authenticate your request and you have access permissions there is no difference in how you access encrypted or unencrypted objects AWS KMS uses customer master keys (CMKs) to encrypt your Amazon S3 objects You use AWS KMS via the Encryption Keys section in the AWS Identity and Access management (AWS IAM) console or via AWS KMS APIs to 
create encryption keys define the policies that control how keys can be used and audit key usage to ensure that they are used correctl y You can use these keys to protect your data in Amazon S3 buckets Server side encryption with AWS KMS managed keys ( SSEKMS ) provides the following : • You can choose to create and manage encryption keys yourself or you can choose to generate a unique default service key on a customer /service /region level • The ETag in the response is not the MD5 of the object data • The data keys used to encrypt your data are also encrypted and stored alongside the data they protect • You can create rotate and disable auditable master keys in the IAM console • The security controls in AWS KMS can help you meet encryption related compliance requirements If you require server side encryption for all objects that are stored in your bucket Amazon S3 supports bucket policies t hat can be used to enforce encryption of all incoming S3 objects Because access to Amazon S3 is provided over HTTP endpoints you can also leverage bucket policies to ensure that all data transfer in and out occurs over a TLS connection to guarantee that data is also encrypted in transit ArchivedAmazon Web Services – Optimizing EDA Workflows on AWS Page 42 Governance and Monitoring AWS provides several services that you can use to enforce governance and monitor your AWS C loud deployment: AWS Identity and Access Management ( IAM) – Enables you to securely control access to AWS services and resources for your users Using IAM you can create and manage AWS users and groups and use permissions to allow and deny their access to AWS resources For more information refer to the AWS IAM User Guide Amazon CloudWatch – Enables you to monitor your AWS resources in near real time including EC2 instances EBS volumes and S3 buckets Metrics such as CPU utilization latency and request counts are provided automatically for these AWS resources You can also provide CloudWatch access to your own logs or custom application and system metrics such as memory usage transaction volumes or error rates and CloudWatch can monitor these too For more information refer to the Amazon CloudWatch User Guide Amazon CloudWatch Logs – Use to monitor store and access your log files from E C2 instances AWS CloudTrail and other sources You can then retrieve the associated log data from CloudWatch Logs You can create alarms in CloudWatch and receive notifications of particular API activity as captured by CloudTrail and use the notification to perform troubleshooting For more information refer to the Amazon CloudWatch Log User Guide AWS CloudTrail – Enables you to l og continuously monitor a nd retain events related to API calls across your AWS infrastructure CloudTrail provides a history of AWS API calls for your account including API calls made through the AWS Management Console AWS SDKs command line tools and other AWS services For mo re information refer to the AWS Cloud Trail User Guide Amazon Macie – Amazon Macie is a security service that uses machine learning to automatically discover classify and protect sensitive data in AWS Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides you with dashboards and alerts that give visibility into ho w this data is being accessed or moved The fully managed service continuously monitors data access activity for anomalies and generates detailed alerts when it detects risk of unauthorized access or inadvertent data leaks 
Amazon GuardDuty – Amazon GuardDuty is a threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise. GuardDuty also detects potentially compromised instances or reconnaissance by attackers.

AWS Shield – AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.

AWS Config – Use to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. For more information, refer to the AWS Config Developer Guide.

AWS Organizations – Offers policy-based management for multiple AWS accounts. With Organizations, you can create Service Control Policies (SCPs) that centrally control AWS service use across multiple AWS accounts. Organizations also helps simplify the billing for multiple accounts by enabling you to set up a single payment method for all the accounts in your organization through consolidated billing. You can ensure that entities in your accounts can use only the services that meet your corporate security and compliance policy requirements. For more information, refer to the AWS Organizations User Guide.

AWS Service Catalog – AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

Contributors

The following individuals contributed to this document:

• Mark Duffield, Worldwide Tech Leader, Semiconductors, Amazon Web Services
• David Pellerin, Principal Business Development for Infotech/Semiconductor, Amazon Web Services
• Matt Morris, Senior HPC Solutions Architect, Amazon Web Services
• Nafea Bshara, VP/Distinguished Engineer, Amazon Web Services

Document Revisions

September 2018 – 2018 update
October 2017 – First publication

Appendix A – Optimizing Storage

There are many storage options on AWS, and some have already been covered at a high level. As semiconductor workloads rely on shared storage, building an NFS server may be the first step to running EDA tools. This section includes two possible NFS architectures that can achieve suitable performance for most workloads.

NFS Storage

The first example is an NFS server capable of 1.75 GB/s with 75,000 IOPS: an r4.16xlarge instance serving tools, project data, and similar file systems over a 25 Gbps ENA connection to the NFS clients running EDA tools, with six EBS Provisioned IOPS volumes (20,000 IOPS each) combined into a ZFS RAID6 pool.

The second example is an NFS server capable of 2.5 GB/s and more than 100,000 IOPS: an i3.16xlarge instance serving temporary or scratch data over a 25 Gbps ENA connection, with its eight local NVMe volumes combined into a RAID 0 pool with mdadm and formatted with an EXT4 file system.
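The scratch-oriented configuration above can be approximated with a few commands on the i3.16xlarge. The device names, mount point, and export subnet below are assumptions for illustration; a production setup would also persist the array, mount, and export configuration across reboots.

# Stripe the eight local NVMe devices into a single RAID 0 array
$ sudo mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/nvme[0-7]n1

# Create and mount an EXT4 file system
$ sudo mkfs.ext4 /dev/md0
$ sudo mkdir -p /scratch
$ sudo mount /dev/md0 /scratch

# Export the file system to the compute subnet over NFS
$ echo "/scratch 10.0.0.0/16(rw,async,no_root_squash)" | sudo tee -a /etc/exports
$ sudo exportfs -ar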
Appendix B – Reference Architecture

The following diagram represents a common architecture for an elastic EDA computing environment in AWS. This design provides the following key infrastructure components:

• Amazon EC2 Auto Scaling group for elasticity
• AWS Direct Connect for dedicated connectivity to AWS
• Amazon Linux WorkSpaces for remote desktops
• Amazon EC2 based compute, license, and scheduler instances
• Amazon EC2 based NFS servers and Amazon EFS for sharing file systems between compute instances

Figure 5: EDA architecture on AWS (a corporate data center connected through AWS Direct Connect to an EDA Auto Scaling group, license server, job submission host, /tools, /project, and /scratch NFS servers, Amazon EFS, an S3 bucket, and remote desktops reachable over the internet from home offices or customer sites)

Appendix C – Updating the Linux Kernel Command Line

Update a system with an /etc/default/grub file

1. Open the /etc/default/grub file with your editor of choice:

$ sudo vim /etc/default/grub

2. Edit the GRUB_CMDLINE_LINUX_DEFAULT line and make the necessary changes. For example:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 intel_idle.max_cstate=1"

3. Save the file and exit your editor.

4. Run the following command to rebuild the boot configuration:

$ grub2-mkconfig -o /boot/grub2/grub.cfg

5. Reboot your instance to enable the new kernel option:

$ sudo reboot

Update a system with a /boot/grub/grub.conf file

1. Open the /boot/grub/grub.conf file with your editor of choice:

$ sudo vim /boot/grub/grub.conf

2. Edit the kernel line, for example (some information removed for clarity):

# created by imagebuilder
default=0
timeout=1
hiddenmenu
title Amazon Linux 2014.09 (3.14.26-24.46.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-<ver>.amzn1.x86_64 <other_info> intel_idle.max_cstate=1
initrd /boot/initramfs-3.14.26-24.46.amzn1.x86_64.img

3. Save the file and exit your editor.

4. Reboot your instance to enable the new kernel option:

$ sudo reboot

Verify the Kernel Line

Verify the setting by checking dmesg or /proc/cmdline for the kernel command line:

$ dmesg | grep "Kernel command line"
[    0.000000] Kernel command line: root=LABEL=/ console=tty1 console=ttyS0 maxcpus=18 xen_nopvspin=1

$ cat /proc/cmdline
root=LABEL=/ console=tty1 console=ttyS0 maxcpus=18 xen_nopvspin=1

Notes

1 https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html
2 http://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_simple_ad.html
Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL January 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Amazon DynamoDB Overview 2 Apache HBase Overview 3 Apache HBase Deployment Options 3 Managed Apache HBase on Amazon EMR (Amazon S3 Storage Mode) 4 Managed Apache HBase on Amazon EMR (HDFS Storage Mode) 4 SelfManaged Apache HBase Deployment Model on Amazon EC2 5 Feature Summary 6 Use Cases 8 Data Models 9 Data Types 15 Indexing 17 Data Processing 21 Throughput Model 21 Consistency Model 23 Transaction Model 23 Table Operations 24 Architecture 25 Amazon DynamoDB Architecture Overview 25 Apache HBase Architecture Overview 26 Partitioning 28 Performance Optimizations 29 Amazon DynamoDB Performance Considerations 29 Apache HBase Performance Considerations 33 Conclusion 37 Contributors 38 Further Reading 38 Document Revisions 38 Abstract One challenge that architects and developers face today is how to process large volumes of data in a timely cost effective and reliable manner There are several NoSQL solutions in the market and choosing the most appropriate one for your partic ular use case can be difficult This paper compares two popular NoSQL data stores —Amazon DynamoDB a fully managed NoSQL cloud database service and Apache HBase an open source column oriented distributed big data store Both Amazon DynamoDB and Apache HBase are available in the Amazon Web Services (AWS) Cloud Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 1 Introduction The AWS Cloud accelera tes big data analytics With access to instant scalability and elasticity on AWS you can focus on analytics instead of infrastructure Whether you are indexing large data sets analyzing massive amounts of scientific data or processing clickstream logs AWS provides a range of big data products and services that you can leverage for virtually any data intensive project There is a wide adoption of NoSQL databases in the growing industry of big data and realtime web applications Amazon DynamoDB and Apach e HBase are examples of NoSQL databases which are highly optimized to yield significant performance benefits over a traditional relational database management system (RDBMS) Both Amazon DynamoDB and Apache HBase can process large volumes of data with hig h performance and throughput Amazon DynamoDB provides a fast fully managed NoSQL database service It lets you offload operating and scaling a highly available distributed database cluster Apache HBase is an open source column oriented distributed bi g data store that runs on the Apache Hadoop framework and is typically deployed on top of the Hadoop Distributed File System (HDFS) which provides a scalab le persistent storage layer In the AWS Cloud you can choose to deploy Apache HBase on Amazon 
Elastic Compute Cloud (Amazon EC2) and manage it yourself Alternatively you can leverage Apache HBase as a managed service on Amazon EMR a fully managed hosted Hadoo p framework on top of Amazon EC2 With Apache HBase on Amazon EMR you can use Amazon Simple Storage Service (Amazon S3) as a data store using the EMR File System (EMRFS) an implementation of HDFS that all Amazon EMR clusters use for reading and writing regular files from Amazon EMR directly to Amazon S3 The following figure shows the relationsh ip between Amazon DynamoDB Amazon EC2 Amazon EMR Amazon S3 and Apache HBase in the AWS Cloud Both Amazon DynamoDB and Apache HBase have tight integration with popular open source processing frameworks like Apache Hive and Apache Spark to enhance querying capabilities as illustrated in the diagram Amazon Web Services Comparing the Us e of Amazon DynamoDB and Apache HBase for NoSQL Page 2 Figure 1: Relation between Amazon DynamoDB Amazon EC2 Amazon EMR and Apache HBase in the AWS Cloud Amazon Dynam oDB Overview Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability Amazon DynamoDB offers the following benefits: • Zero administrative overhead —Amazon DynamoDB manages the burdens of hardware provisioning setup and configuration replication cluster scaling hardware and software updates and monitoring and handling of hardware failures • Virtually unlimited throughput and scale —The provisioned throughput model of Amazon DynamoDB allows you to specify throughput capacity to serve nearly any level of request traffic With Amazon DynamoDB there is virtually no limit to the amount of data that can be stored and retrieved • Elasticity and flexibility —Amazon DynamoDB can handle unpredictable workloads with predictable performance and still maintain a stable latency profile that shows no latency increase or throughput decrease as the data volume rises with increased usage Amazon DynamoDB lets you increa se or decrease capacity as needed to handle variable workloads Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 3 • Auto matic scaling— Amazon DynamoDB can scale automatically within user defined lower and upper bounds for read and write capacity in response to changes in application traffic These qualitie s render Amazon DynamoDB a suitable choice for online applications with spiky traffic patterns or the potential to go viral anytime • Integration with other AWS services —Amazon DynamoDB integrates seamlessly with other AWS services for logging and monitorin g security analytics and more For more information see the Amazon DynamoDB Developer Guide Apache HBase Overview Apache HBase a Hadoop NoSQL database offers the following benefits: • Efficient storage of sparse data —Apache HBase provides fault tolerant storage for large quantities of sparse data using column based compression Apache HBase is capable of storing and processing billions of rows and millions of columns per row • Store for high frequency counters —Apache HBase is suitable for tasks such as high speed counter aggregation because of its consistent reads and writes • High write throughput and update rates —Apache HBase supports low latency lookups and range scans efficient updates and deletions of individual records and high write throughput • Support for multiple Hadoop jobs —The Apache HBase data store allows data to be used by one or more Hadoop jobs on a single cluster or across multiple Hadoop clusters Apache HBase 
Deployment Options The following section provides a description of Apache HBase deployment options in the AWS Cloud Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 4 Managed Apache HBase on Amazon EMR (Amazon S3 Storage Mode) Amazon EMR enables you to use Amazon S3 as a data store for Apache HBase using the EMR File System and offers the following benefits : • Separation of compute from storage — You can size your Amazon EMR cluster for compute instead of data requirements allowing you to avoid the need for the customary 3x repli cation in HDFS • Transient clusters —You can scale compute nodes without impacting your underlying storage and terminate your cluster to save costs and quickly restore it • Built in availability and durability —You get the availability and durability of Amazon S3 storage by default • Easy to provision read replicas —You can create and configure a read replica cluster in another Amazon EC2 Availability Zone that provides read only access to the same data as the primary cluster ensuring uninterrupted access to you r data even if the primary cluster becomes unavailable Managed Apache HBase on Amazon EMR (HDFS Storage Mode) Apache HBase on Amazon EMR is optimized to run on AWS and offers the following benefits : • Minimal administrative overhead —Amazon EMR handles provi sioning of Amazon EC2 instances security settings Apache HBase configuration log collection health monitoring and replacement of faulty instances You still have the flexibility to access the underlying infrastructure and customize Apache HBase furthe r if desired • Easy and flexible deployment options —You can deploy Apache HBase on Amazon EMR using the AWS Management Console or by using the AWS Command Line Interface (AWS CLI) Once launched resizing an Apache HBase cluster is easily accomplished with a single API call Activities such as modifying the Apache HBase configuration at launch time or i nstalling third party tools such as Ganglia for monitoring performance metrics are feasible with custom or predefined scripts Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 5 • Unlimited scale —With Apache HBase running on Amazon EMR you can gain significant cloud benefits such as easy scaling low cost pay only for what you use and ease of use as opposed to the self managed deployment model on Amazon EC2 • Integration with other AWS services —Amazon EMR is designed to seamlessly integrate with other AWS serv ices such as Amazon S3 Amazon DynamoDB Amazon EC2 and Amazon CloudWatch • Built in backup feature —A key benefit of Apache HBase running on Amazon EMR is the built in mechanism available for backing up Apache HBase data durably in Amazon S3 Using this f eature you can schedule full or incremental backups and roll back or even restore backups to existing or newly launched clusters anytime SelfManaged Apache HBase Deployment Model on Amazon EC2 The Apache HBase self managed model offers the most flexibi lity in terms of cluster management but also presents the following challenges: • Administrative overhead —You must deal with the administrative burden of provisioning and managing your Apache HBase clusters • Capacity planning —As with any traditional infrast ructure capacity planning is difficult and often prone to significant costly error For example you could over invest and end up paying for unused capacity or under invest and risk performance or availability issues • Memory management —Apache HBase is mai nly memory 
driven Memory can become a limiting factor as the cluster grows It is important to determine how much memory is needed to run diverse applications on your Apache HBase cluster to prevent nodes from swapping data too often to the disk The numb er of Apache HBase nodes and memory requirements should be planned well in advance • Compute storage and network planning —Other key considerations for effectively operating an Apache HBase cluster include compute storage and network These infrastructur e components often require dedicated Apache Hadoop/Apache HBase administrators with specialized skills Amazon Web Services Comparing the Use of Ama zon DynamoDB and Apache HBase for NoSQL Page 6 Feature Summary Amazon DynamoDB and Apache HBase both possess characteristics that are critical for successfully processing massive amounts of data The following table provides a summary of key features of Amazon DynamoDB and Apache HBase that can help you understand key similarities and differences between the two databases These features are discussed in later sections Table 1: Amazon DynamoDB and Apache HBase Feature Summary Feature Amazon DynamoDB Apache HBase Description Hosted scalable database service by Amazon Column store based on Apache Hadoop and on concepts of BigTable Implementation Language Java Server Operating Systems Hosted Linux Unix Windows Database Model Keyvalue & Document store Wide column store Data Scheme Schema free Schema free Typing Yes No APIs and Other Access Methods Flexible Flexible Supported Programming Languages Multiple Multiple Server side Scripts No Yes Triggers Yes Yes Partitioning Methods Sharding Sharding Throughput Model User provisions throughput Limited to hardware configuration Auto matic Scaling Yes No Partitioning Automatic partitioning Automatic sharding Replication Yes Yes Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 7 Feature Amazon DynamoDB Apache HBase Durability Yes Yes Administration No administration overhead High administration overhead in self managed and minimal on Amazon EMR User Concepts Yes Yes Data Model Row Item – 1 or more attributes Columns/column families Row Size Item size restriction No row size restrictions Primary Key Simple/Composite Row key Foreign Key No No Indexes Optional No built in index model implemented as secondary tables or coprocessors Transactions Row Transactions Itemlevel transactions Single row transactions Multi row Transactions Yes Yes Cross table Transactions Yes Yes Consistency Model Eventually consistent and strongly consistent reads Strongly consistent reads and writes Concurrency Yes Yes Updates Conditional updates Atomic read modify write Integrated Cache Yes Yes Time ToLive (TTL) Yes Yes Encryption at Rest Yes Yes Backup and Restore Yes Yes Point intime Recovery Yes Yes Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 8 Feature Amazon DynamoDB Apache HBase Multiregion Multi master Yes No Use Cases Amazon DynamoDB and Apache HBase are optimized to process massive amounts of data Popular use cases for Amazon DynamoDB and Apache HBase include the following: • Serverless applications —Amazon DynamoDB provides a durable backend for storing data at any scale and has become the de facto database for powering Web and mobile backend s for ecomm erce/retail education and m edia verticals • High volume special events —Special events and seasonal events such as national electoral campaigns are of relatively short duration and have variable workloads 
with the potential to consume large amounts of resources Amazon DynamoDB lets you increase capacity when you need it and decrease as needed to handle variable workloads This quality renders Amazon DynamoDB a suitable choice for such high volume special events • Social media applications —Community based applications such as online gaming photo sharing location aware applications and so on have unpredictable usage patterns with the potential to go viral anytime The elasticity and flexibility of Amazon DynamoDB make it suitable for such high volume variable workloads • Regulatory and complianc e requirements —Both Amazon DynamoDB and Amazon EMR are in scope of the AWS compliance efforts and therefore suitable for healthcare and financial services workloads as described in AWS Se rvices in Scope by Compliance Program • Batch oriented processing —For large datasets such as log data weather data product catalogs and so on you m ay already have large amounts of historical data that you want to maintain for historical trend analysis but need to ingest and batch process current data for predictive purposes For these types of workloads Apache HBase is a good choice because of its high read and write throughput and efficient storage of sparse data Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 9 • Reporting —To process and report on hi gh volume transactional data such as daily stock market trades Apache HBase is a good choice because it supports high throughput writes and update rates which make it suitable for storage of high frequency counters and complex aggregations • Real time analytics —The payload or message size in event data such as tweets E commerce and so on is relatively small when compared with application logs If you want to ingest streaming event data in real time for sentiment analysis ad serving trend ing analysis and so on Amazon DynamoDB lets you increase throughout capacity when you need it and decrease it when you are done with no downtime Apache HBase can handle realtime ingestion of data such as application logs with ease due to its high write throughput and efficient storage of sparse data Combining this capability with Hadoop's ability to handle sequential reads and scans in a highly optimized way renders Apache HBase a powerful tool for real time data analytics Data Models Amazon Dynam oDB is a key/value as well as a document store and Apac he HBase is a key/value store For a meaningful comparison of Amazon DynamoDB with Apache HBase as a NoSQL data store this document focus es on the key/value data model for Amazon DynamoDB Amazon DynamoDB and Apache HBase are designed with the goal to deliver significant performance benefits with low latency and high throughput To achieve this goal key/value stores and document stores have simpler and less constrained data models than trad itional relational databases Although the fundamental data model building blocks are similar in both Amazon DynamoDB and Apache HBase each database uses a distinct terminology to describe its specific data model At a high level a database is a collecti on of tables and each table is a collection of rows A row can contain one or more columns In most cases NoSQL database tables typically do not require a formal schema except for a mandatory primary key that uniquely identifies each row The following t able illustrates the high level concept of a NoSQL database Table 2: High Level NoSQL Database Table Representation Amazon Web Services Comparing the Use of Amazon 
DynamoDB and Apache HBase for NoSQL Page 10

Table | Row | Primary Key, Column 1

Columnar databases are designed to store each column separately, so that aggregate operations over one column of the entire table are significantly quicker than in the traditional row storage model.

From a comparative standpoint, a row in Amazon DynamoDB is referred to as an item, and each item can have any number of attributes. An attribute comprises a key and a value, and is commonly referred to as a name-value pair. An Amazon DynamoDB table can have an unlimited number of items, indexed by primary key, as shown in the following example.

Table 3: High-Level Representation of an Amazon DynamoDB Table
Item 1 | Primary Key | Attribute 1 | Attribute 2 | Attribute 3 | Attribute ...n
Item 2 | Primary Key | Attribute 1 | Attribute 3
Item n | Primary Key | Attribute 2 | Attribute 3

Amazon DynamoDB defines two types of primary keys: a simple primary key with one attribute, called a partition key (Table 4), and a composite primary key with two attributes (Table 5).

Table 4: Amazon DynamoDB Simple Primary Key (Partition Key)
Item | Partition Key | Attribute 1 | Attribute 2 | Attribute 3 | Attribute ...n

Table 5: Amazon DynamoDB Composite Primary Key (Partition & Sort Key)
Item | Partition Key | Sort Key | Attribute 1 | Attribute 2 | Attribute 3 | Attribute ...n

A JSON representation of the item in Table 5, with additional nested attributes, is given below:

{
  "Partition Key": "Value",
  "Sort Key": "Value",
  "Attribute 1": "Value",
  "Attribute 2": "Value",
  "Attribute 3": [
    {
      "Attribute 4": "Value",
      "Attribute 5": "Value"
    },
    {
      "Attribute 4": "Value",
      "Attribute 5": "Value"
    }
  ]
}

In Amazon DynamoDB, a single-attribute primary key, or partition key, is useful for quick reads and writes of data. For example, PersonId serves as the partition key in the following Person table.

Table 6: Example Person Amazon DynamoDB Table
Item | PersonId (Partition Key) | FirstName | LastName | Zipcode | Gender
Item 1 | 1001 | Fname 1 | Lname 1 | 00000 |
Item 2 | 1002 | Fname 2 | Lname 2 | | M
Item 3 | 2002 | Fname 3 | Lname 3 | 10000 | F

A composite key in Amazon DynamoDB is indexed as a partition key and a sort key. This multi-part key maintains a hierarchy between the first and second element values; holding the partition key element constant facilitates searches across the sort key element to retrieve items quickly for a given partition key. In the following GameScores table, the composite partition-sort key is a combination of PersonId (partition key) and GameId (sort key).

Table 7: Example GameScores Amazon DynamoDB Table
Item | PersonId (Partition Key) | GameId (Sort Key) | TopScore | TopScoreDate | Wins | Losses
Item 1 | 1001 | Game01 | 67453 | 2013-12-09 17:24:31 | 73 | 21
Item 2 | 1001 | Game02 | 98567 | 2013-12-11 14:14:37 | 98 | 27
Item 3 | 1002 | Game01 | 43876 | 2013-12-15 19:24:39 | 12 | 23
Item 4 | 2002 | Game02 | 65689 | 2013-10-01 17:14:41 | 23 | 54

The partition key of an item is also known as its hash attribute, and the sort key as its range attribute. The term hash attribute arises from the use of an internal hash function that takes the value of the partition key as input; the output of that hash function determines the partition, or physical storage node, where the item is stored. The term range attribute derives from the way DynamoDB stores items with the same partition key together, in sorted order by the sort key value.
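To make the composite-key model concrete, the following minimal sketch creates the GameScores table from Table 7 and writes and reads one item using the AWS SDK for Python (boto3). It is illustrative only: the table and attribute names come from the example above, while the capacity units and the absence of error handling are assumptions for brevity rather than recommendations from this paper.

import boto3

dynamodb = boto3.resource("dynamodb")

# Composite primary key: PersonId is the partition (hash) key,
# GameId is the sort (range) key.
table = dynamodb.create_table(
    TableName="GameScores",
    KeySchema=[
        {"AttributeName": "PersonId", "KeyType": "HASH"},
        {"AttributeName": "GameId", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "PersonId", "AttributeType": "N"},
        {"AttributeName": "GameId", "AttributeType": "S"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
table.wait_until_exists()

# Non-key attributes such as TopScore or Wins need no schema declaration.
table.put_item(Item={
    "PersonId": 1001,
    "GameId": "Game01",
    "TopScore": 67453,
    "TopScoreDate": "2013-12-09 17:24:31",
    "Wins": 73,
    "Losses": 21,
})

# Reading the item back requires the full primary key (partition + sort key).
item = table.get_item(Key={"PersonId": 1001, "GameId": "Game01"})["Item"]

Because only the key attributes are declared up front, two items in the same table can carry entirely different non-key attributes, which is what makes the schema-free model described above possible. Although there is no explicit limit on the number of attributes associated with an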
individual item in an Amazon DynamoDB table there are restrictions on the aggregate size of an item or payload including all attribute names and values A small payload can potentially improve perf ormance and reduce costs because it requires fewer resources to process For information on how to handle items that exceed the maximum item size see Best Practices for Storing Large Items and Attributes In Apache HBase the most basic unit is a column One or more columns form a row Each row is addressed uniquely by a primary key referred to as a row key A row in Apache HBase can have millions of columns Each column can have multiple versions with each distinct value contained in a separate cell One fundamental modeling concept in Apache HBase is that of a column family A column family is a container for grouping sets of related data together within on e table as shown in the following example Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 13 Table 8: Apache HBase Row Representation Table Column Family 1 Column Family 2 Column Family 3 row row key Column 1 Column 2 Column 3 Column 4 Column 5 Column 6 Apache HBase groups columns with the same general access patterns and size characteristics into column families to form a basic unit of separation For example in the following Person table you can group personal data into one column family called personal_info and the statistical data into a demographic column family Any other columns in the table would be grouped accordingly as well as shown in the following example Table 9: Example Person Table in Apache HBase Person Table personal_info demographic row key firstname lastname zipcode gender row 1 1001 Fname 1 Lname 1 00000 row 2 1002 Fname 2 Lname 2 M row 3 2002 Fname 3 Lname 3 10000 F Columns are addressed as a combination of the column family name and the column qualifier expressed as family:qualifier All members of a column family have the same prefix In the preceding example the firstname and lastname column qualifiers can be refe renced as personal_info:firstname and personal_info:lastname respectively Column families allow you to fetch only those columns that are required by a query All members of a column family are physically stored together on a disk This means that optimiz ation features such as performance tunings compression encodings and so on can be scoped at the column family level Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 14 The row key is a combination of user and game identifiers in the following Apache HBase GameScores table A row key can consist of mult iple parts concatenated to provide an immutable way of referring to entities From an Apache HBase modeling perspective the resulting table is tallnarrow This is because the table has few columns relative to the number of rows as shown in the following example Table 10: TallNarrow GameScores Apache HBase Table GameScores Table top_scores metrics row key score date wins loses row 1 1001 game01 67453 2013 12 09:17:24:31 73 21 row 2 1001 game02 98567 2013 12 11:14:14:37 98 27 row 3 1002 game01 43876 2013 12 15:19:24:39 12 23 row 4 2002 game02 65689 2013 10 01:17:14:41 23 54 Alternatively you can model the game identifier as a column qualifier in Apache HBase This approach facilitates precise column lookups and supports usage of filters to read data The result is a flatwide table with few rows relative to the number of col umns This concept of a flat wide Apache HBase table is shown in the following 
table.

Table 11: Flat-Wide GameScores Apache HBase Table
GameScores Table
row key | top_scores: gameId | score | top_score_date | metrics: gameId | wins | loses
row 1: 1001 | game01 | 98567 | 2013-12-11 14:14:37 | game01 | 98 | 27
row 1: 1001 | game02 | 43876 | 2013-12-15 19:24:39 | game02 | 12 | 23
row 2: 1002 | game01 | 67453 | 2013-12-09 17:24:31 | game01 | 73 | 21
row 3: 2002 | game02 | 65689 | 2013-10-01 17:14:41 | game02 | 23 | 54

For performance reasons, it is important to keep the number of column families in your Apache HBase schema low; anything above three column families can potentially degrade performance. The recommended best practice is to maintain a single column family in your schemas and to introduce a second or third column family only if data access is limited to one column family at a time. Note that Apache HBase does not impose any restrictions on row size.

Data Types
Both Amazon DynamoDB and Apache HBase support unstructured datasets with a wide range of data types. Amazon DynamoDB supports the data types shown in the following table:

Table 12: Amazon DynamoDB Data Types
Type | Description | Example (JSON Format)
Scalar
String | Unicode with UTF-8 binary encoding | {"S": "Game01"}
Number | Positive or negative exact-value decimals and integers | {"N": "67453"}
Binary | Encoded sequence of bytes | {"B": "dGhpcyB0ZXh0IGlzIGJhc2U2NC1l"}
Boolean | True or false | {"BOOL": true}
Null | Unknown or undefined state | {"NULL": true}
Document
List | Ordered collection of values | {"L": ["Game01", 67453]}
Map | Unordered collection of name-value pairs | {"M": {"GameId": {"S": "Game01"}, "TopScore": {"N": "67453"}}}
Multi-valued
String Set | Unique set of strings | {"SS": ["Black", "Green"]}
Number Set | Unique set of numbers | {"NS": ["42.2", "-19.87"]}
Binary Set | Unique set of binary values | {"BS": ["U3Vubnk=", "UmFpbnk="]}

Each Amazon DynamoDB attribute can be a name-value pair with exactly one value (scalar type), a complex data structure with nested attributes (document type), or a unique set of values (multi-valued set type). Individual items in an Amazon DynamoDB table can have any number of attributes. Primary key attributes can only be scalar types with a single value, and the only data types allowed are string, number, or binary. Binary-type attributes can store any binary data, for example compressed data, encrypted data, or even images.

Map is ideal for storing JSON documents in Amazon DynamoDB. For example, in Table 6, Person could be represented as a map of person id that maps to detailed information about the person: name, gender, and a list of their previous addresses, also represented as a map. This is illustrated in the following script:

{
  "PersonId": 1001,
  "FirstName": "Fname 1",
  "LastName": "Lname 1",
  "Gender": "M",
  "Addresses": [
    {
      "Street": "Main St",
      "City": "Seattle",
      "Zipcode": 98005,
      "Type": "current"
    },
    {
      "Street": "9th St",
      "City": "Seattle",
      "Zipcode": 98005,
      "Type": "past"
    }
  ]
}

In summary, Apache HBase defines the following concepts:
• Row — An atomic byte array, or key/value container
• Column — A key within the key/value container inside a row
• Column Family — Divides columns into related subsets of data that are stored together on disk
• Timestamp — Apache HBase adds the concept of a fourth dimension column that is expressed as an explicit or implicit timestamp. A timestamp
is usually represented as a long integer in milliseconds • Value—A time versioned value in the key/value container This means that a cell can contain multiple versions of a value that can change over time Versions are stored in decreasing t imestamp with the most recent first Apache HBase supports a bytes in/bytes out interface This means that anything that can be converted into an array of bytes can be stored as a value Input could be strings numbers complex objects or even images as long as they can be rendered as bytes Consequently key/value pairs in Apache HBase are arbitrary arrays of bytes Because row keys and column qualifiers are also arbitrary arrays of bytes almost anything can serve as a row key or column qualifier from strings to binary representations of longs or even serialized data structures Column family names must comprise printable characters in human readable format This is because column family names are used as part of the directory name in the file system Furthermore column families must be declared up front at the time of schema definition Column qualifiers are not subjected to this restriction and can comprise any arbitrary binary characters and be created at runtime Indexing In general data i s indexed using a primary key for fast retrieval in both Amazon DynamoDB and Apache HBase Secondary indexes extend the basic indexing functionality and provide an alternate query path in addition to queries against the primary key Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 18 Amazon DynamoDB support s two kinds of secondary indexes on a table that already implements a partition and sort key : • Global secondary index —An index with a partition and optional sort key that can be different from those on the table • Local secondary index —An index that has the same partition key as the table but a different sort key You can define one or more global secondary indexes and one or more local secondary indexes per table For documents you can create a local secondary index or global secondary index on any top level JSON element In the example GameScores table introduced in the preceding section you can define LeaderBoardIndex as a global secondary index as follows: Table 13: Example Global Secondary Index in Amazon DynamoDB LeaderBoardIndex Index Key Attribute 1 GameId (Partition Key) TopScore (Sort Key) PersonId Game01 98567 1001 Game02 43876 1001 Game01 65689 1002 Game02 67453 2002 The LeaderBoardIndex shown in Table 13 defines GameId as its primary key and TopScore as its sort key It is not necessary for the index key to contain any of the key attributes from the source table However the table’s primary key attributes are always present in the global secondary index In this example PersonId is automatically projected or copied into the index With LeaderBoardIndex defined you can easily obtain a list of top scores for a specific game by simply querying it The output is ordered by TopScore the sort key You can choose to project additional attributes from the source table into the index A local secondary index on the other hand organizes data by the index sort key It provides an alternate query pat h for efficiently accessing data using a different sort key Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 19 You can define PersonTopScoresIndex as a local secondary index for the example GameScores table introduced in the preceding section The index contains the same partition key PersonId as the 
source table and defines TopScoreDate as its new sort key. The old sort key value from the source table (in this example, GameId) is automatically projected, or copied, into the index, but it is not a part of the index key, as shown in the following table.

Table 14: Local Secondary Index in Amazon DynamoDB
PersonTopScoresIndex
PersonId (Partition Key) | TopScoreDate (New Sort Key) | GameId (Old Sort Key as attribute) | TopScore (Optional projected attribute)
1001 | 2013-12-09 17:24:31 | Game01 | 67453
1001 | 2013-12-11 14:14:37 | Game02 | 98567
1002 | 2013-12-15 19:24:39 | Game01 | 43876
2002 | 2013-10-01 17:14:41 | Game02 | 65689

A local secondary index is a sparse index: an index will only have an item if the index sort key attribute has a value. With local secondary indexes, any group of items that have the same partition key value in a table, together with all of their associated local secondary indexes, forms an item collection. There is a size restriction on item collections in a DynamoDB table. For more information, see Item Collection Size Limit.

The main difference between a global secondary index and a local secondary index is that a global secondary index defines a completely new partition key, and an optional sort key, on a table. You can define any attribute as the partition key for the global secondary index as long as its data type is scalar rather than a multi-valued set. Additional highlights between global and local secondary indexes are captured in the following table.

Table 15: Global and local secondary indexes
| Global Secondary Indexes | Local Secondary Indexes
Creation | Can be created for existing tables (online indexing supported) | Only at table creation time (online indexing not supported)
Primary Key Values | Need not be unique | Must be unique
Partition Key | Different from primary table | Same as primary table
Sort Key | Optional | Required (different from primary table)
Provisioned Throughput | Independent from primary table | Dependent on primary table
Writes | Asynchronous | Synchronous

For more information on global and local secondary indexes in Amazon DynamoDB, see Improving Data Access with Secondary Indexes.

In Apache HBase, all rows are always sorted lexicographically by row key. The sort is byte-ordered: each row key is compared on a binary level, byte by byte, from left to right. Row keys are always unique and act as the primary index in Apache HBase. Although Apache HBase does not have native support for built-in indexing models such as Amazon DynamoDB, you can implement custom secondary indexes to serve as alternate query paths by using these techniques:
• Create an index in another table — You can maintain a secondary table that is periodically updated. However, depending on the load strategy, the risk with this method is that the secondary index can potentially become out of sync with the main table. You can mitigate this risk if you build the secondary index while publishing data to the cluster and perform concurrent writes into the index table.
• Use the coprocessor framework — You can leverage the coprocessor framework to implement custom secondary indexes. Coprocessors act like triggers that are similar to stored procedures in RDBMS.
• Use Apache Phoenix — Acts as a front end to Apache HBase to convert standard SQL into native HBase scans and queries, and for secondary indexing.
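To ground the DynamoDB side of this discussion, the following sketch extends the earlier GameScores example with both indexes described above: the LeaderBoardIndex global secondary index and the PersonTopScoresIndex local secondary index. Because local secondary indexes can only be declared when the table is created, the table definition is repeated here; the projection types, capacity units, and query parameters are illustrative assumptions, not prescriptions from this paper.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")

table = dynamodb.create_table(
    TableName="GameScores",
    KeySchema=[
        {"AttributeName": "PersonId", "KeyType": "HASH"},
        {"AttributeName": "GameId", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "PersonId", "AttributeType": "N"},
        {"AttributeName": "GameId", "AttributeType": "S"},
        {"AttributeName": "TopScore", "AttributeType": "N"},
        {"AttributeName": "TopScoreDate", "AttributeType": "S"},
    ],
    # Global secondary index: a completely new partition/sort key pair.
    GlobalSecondaryIndexes=[{
        "IndexName": "LeaderBoardIndex",
        "KeySchema": [
            {"AttributeName": "GameId", "KeyType": "HASH"},
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    # Local secondary index: same partition key, alternate sort key.
    LocalSecondaryIndexes=[{
        "IndexName": "PersonTopScoresIndex",
        "KeySchema": [
            {"AttributeName": "PersonId", "KeyType": "HASH"},
            {"AttributeName": "TopScoreDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "INCLUDE", "NonKeyAttributes": ["TopScore"]},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
table.wait_until_exists()

# Leaderboard query: the ten highest scores for Game01, ordered by the
# index sort key (TopScore), highest first.
top_scores = table.query(
    IndexName="LeaderBoardIndex",
    KeyConditionExpression=Key("GameId").eq("Game01"),
    ScanIndexForward=False,
    Limit=10,
)["Items"]

Because the table's primary key attributes are always projected into a global secondary index, each result of this query also carries the PersonId needed to look up the full item. In summary, both Amazon DynamoDB and Apache HBase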
define data models that allow efficient storage of data to optimize query performance. Amazon DynamoDB imposes a restriction on its item size to allow efficient processing and reduce costs. Apache HBase uses the concept of column families to provide data locality for more efficient read operations. Amazon DynamoDB supports both scalar and multi-valued sets to accommodate a wide range of unstructured datasets; similarly, Apache HBase stores its key/value pairs as arbitrary arrays of bytes, giving it the flexibility to store any data type. Amazon DynamoDB supports built-in secondary indexes and automatically updates and synchronizes all indexes with their parent tables. With Apache HBase, you can implement and manage custom secondary indexes yourself. From a data model perspective, you can choose Amazon DynamoDB if your item size is relatively small. Although Amazon DynamoDB provides a number of options to overcome row size restrictions, Apache HBase is better equipped to handle large, complex payloads with minimal restrictions.

Data Processing
This section highlights foundational elements for processing and querying data within Amazon DynamoDB and Apache HBase.

Throughput Model
Amazon DynamoDB uses a provisioned throughput model to process data. With this model, you can specify your read and write capacity needs in terms of the number of input operations per second that a table is expected to achieve. At table creation time, Amazon DynamoDB automatically partitions and reserves the appropriate amount of resources to meet your specified throughput requirements.

Automatic scaling for Amazon DynamoDB automates capacity management and eliminates the guesswork involved in provisioning adequate capacity when creating new tables and global secondary indexes. With automatic scaling enabled, you specify a target utilization percentage, and DynamoDB scales the provisioned capacity for reads and writes within the bounds you set to meet that target utilization (a sketch of enabling this programmatically follows the list below). For more information, see Managing Throughput Capacity Automatically with DynamoDB Auto Scaling.

To decide on the required read and write throughput values for a table without the auto scaling feature enabled, consider the following factors:
• Item size — The read and write capacity units that you specify are based on a predefined data item size per read or per write operation. For more information about provisioned throughput data item size restrictions, see Provisioned Throughput in Amazon DynamoDB.
• Expected read and write request rates — You must also determine the expected number of read and write operations your application will perform against the table per second.
• Consistency — Whether your application requires strongly consistent or eventually consistent reads is a factor in determining how many read capacity units you need to provision for your table. For more information about consistency and Amazon DynamoDB, see the Consistency Model section in this document.
• Global secondary indexes — The provisioned throughput settings of a global secondary index are separate from those of its parent table. Therefore, you must also consider the expected workload on the global secondary index when specifying the read and write capacity at index creation time.
• Local secondary indexes — Queries against indexes consume provisioned read throughput. For more information, see Provisioned Throughput Considerations for Local Secondary Indexes.
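The following minimal sketch shows one way to attach auto scaling to the GameScores table's read capacity through the Application Auto Scaling API, again using boto3. The resource name, capacity bounds, and 70 percent target are placeholder assumptions; write capacity and global secondary indexes would be registered the same way.

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Scale provisioned reads to track a target utilization of 70 percent.
autoscaling.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="GameScoresReadScaling",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)

With a policy like this in place, capacity follows consumed throughput instead of being sized by hand. Although read and write requirements are specified at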
table creation time, Amazon DynamoDB lets you increase or decrease the provisioned throughput to accommodate load with no downtime.

With Apache HBase, the number of nodes in a cluster can be driven by the required throughput for reads and/or writes. The available throughput on a given node can vary depending on the data, specifically:
• Key/value sizes
• Data access patterns
• Cache hit rates
• Node and system configuration

You should plan for peak load if load will likely be the primary factor that increases node count within an Apache HBase cluster.

Consistency Model
A database consistency model determines the manner and timing in which a successful write or update is reflected in a subsequent read operation of that same value. Amazon DynamoDB lets you specify the desired consistency characteristics for each read request within an application: you can specify whether a read is eventually consistent or strongly consistent.

The eventual consistency option is the default in Amazon DynamoDB and maximizes read throughput. However, an eventually consistent read might not always reflect the results of a recently completed write; consistency across all copies of data is usually reached within a second. A strongly consistent read in Amazon DynamoDB returns a result that reflects all writes that received a successful response prior to the read. To get a strongly consistent read result, you can specify optional parameters in a request. It takes more resources to process a strongly consistent read than an eventually consistent read. For more information about read consistency, see Data Read and Consistency Considerations.

Apache HBase reads and writes are strongly consistent. This means that all reads and writes to a single row in Apache HBase are atomic, so each concurrent reader and writer can make safe assumptions about the state of a row. Multi-versioning and timestamping in Apache HBase contribute to its strongly consistent model.

Transaction Model
Unlike RDBMS, NoSQL databases typically have no domain-specific language, such as SQL, to query data. Amazon DynamoDB and Apache HBase provide simple application programming interfaces (APIs) to perform the standard create, read, update, and delete (CRUD) operations.

Amazon DynamoDB Transactions support coordinated, all-or-nothing changes to multiple items both within and across tables. Transactions provide atomicity, consistency, isolation, and durability (ACID) in DynamoDB, helping you to maintain data correctness in your applications. Apache HBase integrates with Apache Phoenix to add cross-row and cross-table transaction support with full ACID semantics.

Amazon DynamoDB provides atomic item and attribute operations for adding, updating, or deleting data. Further, item-level transactions can specify a condition that must be satisfied before that transaction is fulfilled; for example, you can choose to update an item only if it already has a certain value. Conditional operations allow you to implement optimistic concurrency control systems on Amazon DynamoDB. For conditional updates, Amazon DynamoDB allows atomic increment and decrement operations on existing scalar values without interfering with other write requests. For more information about conditional operations, see Conditional Writes.
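As a brief illustration of these two ideas together, the sketch below performs a strongly consistent read and then a conditional, atomic counter update against the GameScores example table using boto3. The attribute names follow the earlier tables; the specific condition and the retry behavior are assumptions for illustration, not part of this paper's guidance.

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("GameScores")

# Strongly consistent read: reflects all writes acknowledged before the read.
item = table.get_item(
    Key={"PersonId": 1001, "GameId": "Game01"},
    ConsistentRead=True,
)["Item"]

# Conditional update: atomically increment the Wins counter, but only if
# TopScore still matches what was just read (optimistic concurrency control).
try:
    table.update_item(
        Key={"PersonId": 1001, "GameId": "Game01"},
        UpdateExpression="SET Wins = Wins + :one",
        ConditionExpression="TopScore = :expected",
        ExpressionAttributeValues={":one": 1, ":expected": item["TopScore"]},
    )
except ClientError as err:
    # Another writer changed the item first; re-read and retry as needed.
    if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
        raise

Apache HBase also supports atomic high update rates (the classic read-modify-write) within a single row key, enabling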
storage for high frequency counters Unlike Amazon DynamoDB Apache HBase uses multi version concurrency control to implement updates This means that an existing piece of data is not overwritten with a new one; instead it becomes obsolete when a newer version is added Row data access in Apache HBase is atomic and includes any number of columns but there are no further guarantees or transactional feat ures spanning multiple rows Similar to Amazon DynamoDB Apache HBase supports only single row transactions Amazon DynamoDB has an optional feature DynamoDB Streams to capture table activity The data modification events such as add update or delete c an be captured in near real time in a time ordered sequence If stream is enabled on a DynamoDB table each event gets recorded as a stream record along with name of the table event timestamp and other metadata For more information see the section on Capturing Table Activity with DynamoDB Streams Amazon DynamoDB Streams can be u sed with AWS Lambda to create trigger code that executes automatically whenever an event of interest (add update delete) appears in a stream This pattern enables powerful solutions such as data replication within and across AWS Regions materialized views of data in DynamoDB tables data analysis using Amazon Kinesis notifications via Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Email Service (Amazon SES) and much more For more information see DynamoDB Streams and AWS Lambda Triggers Table Operations Amazon D ynamoDB and Apache HBase provide scan operations to support large scale analytical processing A scan operation is similar to cursors in RDBMS By taking advantage of the underlying sequential sorted storage layout a scan operation can Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 25 easily iterate ove r wide ranges of records or entire tables Applying filters to a scan operation can effectively narrow the result set and optimize performance Amazon DynamoDB uses parallel scanning to improve performance of a scan operation A parallel scan logically sub divides an Amazon DynamoDB table into multiple segments and then processes each segment in parallel Rather than using the default scan operation in Apache HBase you can implement a custom parallel scan by means of the API to read rows in parallel Both Amazon DynamoDB and Apache HBase provide a Query API for complex query processing in addition to the scan operation The Query API in Amazon DynamoDB is accessible only in tables that define a composite primary key In Apache HBase bloom filters improve Get operations and the potential performance gain increases with the number of parallel reads In summary Amazon DynamoDB and Apache HBase have similar data processin g models in that they both support only atomic single row transactions Both databases also provide batch operations for bulk data processing across multiple rows and tables One key difference between the two databases is the flexible provisioned throughp ut model of Amazon DynamoDB The ability to increase capacity when you need it and decrease it when you are done is useful for processing variable workloads with unpredictable peaks For workloads that need high update rates to perform data aggregations or maintain counters Apache HBase is a good choice This is because Apache HBase supports a multi version concurrency control mechanism which contributes to its strongly consistent reads and writes Amazon DynamoDB gives you the flexibility to specify whet her you 
want your read request to be eventually consistent or strongly consistent depending on your specific workload Architecture This section summarizes key architectural components of Amazon DynamoDB and Apache HBase Amazon DynamoDB Architecture Overv iew At a high level Amazon DynamoDB is designed for high availability durability and consistently low latency (typically in the single digit milliseconds) performance Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 26 Amazon DynamoDB runs on a fleet of AWS managed servers that leverage solid state drives (SSDs) to create an optimized high density storage platform This platform decouples performance from table size and eliminates the need for the working set of data to fit in memory while still returning consistent low latency responses to queries As a managed service Amazon DynamoDB abstracts its underlying architectural details from the user Apache HBase Architecture Overview Apache HBase is typically deployed on top of HDFS Apache ZooKeeper is a critical component for maintai ning configuration information and managing the entire Apache HBase cluster The three major Apache HBa se components are the following: • Client API — Provides programmatic access to D ata Manipulation Language (DML) for performing CRUD operations on HBase tables • Region servers — HBase tables are split into regions and are served by region servers • Master server — Responsible for monitoring all region server instan ces in the cluster and is the interface for all metadata changes Apache HBase stores data in indexed store files called HFiles on HDFS The store files are sequences of blocks with a block index stored at the end for fast lookups The store files provide an API to access specific values as well as to scan ranges of values given a start and end key During a write operation data is first written to a commit log called a write ahead log (WAL) and then moved into memory in a structure called Memstore When the size of the Memstore exceeds a given maximum value it is flushed as a HFile to disk Each time data is flushed from Memstores to disk new HFiles must be created As the number of HFiles builds up a compaction process merges the files into fewer lar ger files A read operation essentially is a merge of data stored in the Memstores and in the HFiles The WAL is never used in the read operation It is meant only for recovery purposes if a server crashes before writing the in memory data to disk A regio n in Apache HBase acts as a store per column family Each region contains contiguous ranges of rows stored together Regions can be merged to reduce the Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 27 number of store files A large store file that exceeds the configured maximum store file size can trigg er a region split A region server can serve multiple regions Each region is mapped to exactly one region server Region servers handle reads and writes as well as keeping data in memory until enough is collected to warrant a flush Clients communicate d irectly with region servers to handle all data related operations The master server is responsible for monitoring and assigning regions to region servers and uses Apache ZooKeeper to facilitate this task Apache ZooKeeper also serves as a registry for reg ion servers and a bootstrap location for region discovery The master server is also responsible for handling critical functions such as load balancing of regions across region servers region 
server failover and completing region splits but it is not pa rt of the actual data storage or retrieval path You can run Apache HBase in a multi master environment All masters compete to run the cluster in a multi master mode However if the active master shuts down then the remaining masters contend to take ove r the master role Apache HBase on Amazon EMR Architecture Overview Amazon EMR defines the concept of instance groups which are collections of Amazon EC2 instances The Amazon EC2 virtual servers perform roles analogous to the master and slave nodes of Hadoop For best performance Apache HBase clusters should run on at least two Amazon EC2 instances There are three types of instance groups in an Amaz on EMR cluster • Master —Contains one master node that manages the cluster You can use the Secure Shell (SSH) protocol to access the master node if you want to view logs or administer the cluster yourself The master node runs the Apache HBase master server and Apache ZooKeeper • Core —Contains one or more core nodes that run HDFS and store data The core nodes run the Apache HBase region servers • Task —(Optional) Contains any number of task nodes Managed Apache HBase on Amazon EMR (Amazon S3 Storage Mode) When you run Apache HBase on Amazon EMR with Amazon S3 storage mode enabled the HBase root directory is stored in Amazon S3 including HBase store files Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 28 and table metadata For more information see HBase on Amazon S3 (Amazon S3 Storage Mode) For production workloads EMRFS consistent view is recommended when you enable HBase on Amazon S3 Not usin g consistent view may result in performance impacts for specific operations Partitioning Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability within a region Data is auto partitioned primarily using the partition key As throughput and data size increase Amazon DynamoDB will automatically repartition and reallocate data across more nodes Partitions in Amazon DynamoDB are fully independent resulting in a shared nothing cluster However provisioned throughput is divided evenly across the partiti ons A region is the basic unit of scalability and load balancing in Apache HBase Region splitting and subsequent load balancing follow this sequence of events: 1 Initially there is only one region for a table and as more data is added to it the system monitors the load to ensure that the configured maximum size is not exceeded 2 If the region size exceeds the configured limit the system dynamically splits the region into two at the row key in the middle of the region creating two roughly equal halves 3 The master then schedules the new regions to be moved off to other servers for load balancing if required Behind the scenes Apache ZooKeeper tracks all activities that take place during a region split and maintains the state of the region in case of server failure Apache HBase regions are equivalent to range partitions that are used in RDBMS sharding Regions can be spread across many physical servers that consequently distribute the load resulting in scalability In summary as a managed service the architectural details of Amazon DynamoDB are abstracted from you to let you focus on your application details Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 29 With the self managed Apache HBase deployment model it is crucial to u nderstand the underlying 
architectural details to maximize scalability and performance AWS gives you the option to offload Apache HBase administrative overhead if you opt to launch your cluster on Amazon EMR Performance Optimizations Amazon DynamoDB and Apache HBase are inherently optimized to process large volumes of data with high performance NoSQL databases typically use an on disk column oriented storage format for fast data access and reduced I/O when fulfilling queries This performance characteri stic is evident in both Amazon DynamoDB and Apache HBase Amazon DynamoDB stores items with the same partition key contiguously on disk to optimize fast data retrieval Similarly Apache HBase regions contain contiguous ranges of rows stored together to im prove read operations You can enhance performance even further if you apply techniques that maximize throughput at reduced costs both at the infrastructure and application tiers Tip: A recommended best practice is to monitor Amazon DynamoDB and Apache H Base performance metrics to proactively detect and diagnose performance bottlenecks The following section focuses on several common performance optimizations that are specific to ea ch database or deployment model Amazon DynamoDB Performance Consideration s Performance considerations for Amazon DynamoDB focus on how to define an appropriate read and write throughput and how to design a suitable schema for an application These performance considerations span both infrastruct ure level and application tiers Ondemand Mode – No Capacity Planning Amazon DynamoDB on demand is a flexible billing option capable of serving thousands of requests per second without capacity planning For on demand mode tables you don't need to specify how much read and write through put you expect your application to perform DynamoDB tables using on demand capacity mode automatically adapt to Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 30 your application’s traffic volume On demand capacity mode instantly accommodates up to double the previous peak traffic on a table For more i nformation see On Demand Mode Tip: DynamoDB recommends spacing your traffic growth over at least 30 minutes be fore driving more than 100000 reads per second Provisioned Throughput Considerations Factors that must be taken into consideration when determining the appropriate throughput requirements for an application are item size expected read and write rates consistency and secondary indexes as discussed in the Throughput Model section of this whitepaper If an application performs more reads per second or writes per second than a table’s provisioned throughput capacity a llows requests above the provisioned capacity will be throttled For instance if a table’s write capacity is 1000 units and an application can perform 1500 writes per second for the maximum data item size Amazon DynamoDB will allow only 1000 writes p er second to go through and the extra requests will be throttled Tip: For applications where capacity requirement increases or decreases gradually and the traffic stays at the elevated or depressed level for at least several minutes manage read and write throughput capacity automatically using auto scaling feature With any changes in traffic pattern DynamoDB will scale the provisioned capacity up or down within a specified range to match the desired capacity utilization you enter for a table or a g lobal secondary index Read Performance Considerations With the launch of Amazon DynamoDB Accelerator (DAX) you can now 
get microsecond access to data that live s in Amazon DynamoDB DAX is an in memory cache in front of DynamoDB and has the identical API as DynamoDB Because reads can be served from the DAX layer for queries with a cache hit and the table will only serve the reads when there is a cache miss th e provisioned read capacity units can be lowered for cost savings Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 31 Tip: Based on the size of your tables and data access pattern consider provisioning a single DAX cluster for multiple smaller tables or multiple DAX clusters for a single bigger table or a hybrid caching strategy that will work best for your application Primary Key Design Considerations Primary key design is critical to the performance of Amazon DynamoDB When storing data Amazon DynamoDB divides a table's items into multiple partitions and distributes the data primarily based on the partition key element The provisioned throughput associated with a table is also divided evenly among the partitions with no sharing of provisioned throughput across partitions Tip: To efficiently use the overall provisioned throughput spread the workload across partition key values For example if a table has a very small number of heavily accessed partition key elements possibly even a single very heavily used partition key element traffic can become concentrated on a single partition and create "hot spots" of read and write activity within a single item collection In extreme cases throttling can occur if a single partition exceeds its maximum capacity To better accommodate uneven access patterns Amazon DynamoDB adaptive capacity enables your application to continue reading and writing to hot partitions without being throttled provided that traffic does not exc eed your table’s total provisioned capacity or the partition maximum capacity Adaptive capacity works by automatically and instantly increasing throughput capacity for partitions that receive more traffic To get the most out of Amazon DynamoDB throughpu t you can build tables where the partition key element has a large number of distinct values Ensure that values are requested fairly uniformly and as randomly as possible The same guidance applies to global secondary indexes Choose partitions and sort keys that provide uniform workloads to achieve the overall provisioned throughput Local Secondary Index Considerations When querying a local secondary index the number of read capacity units consumed depends on how the data is accessed For example whe n you create a local secondary Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 32 index and project non key attributes into the index from the parent table Amazon DynamoDB can retrieve these projected attributes efficiently In addition when you query a local secondary index the query can also retrieve attributes that are not projected into the index Avoid these types of index queries that read attributes that are not projected into the local secondary index Fetching attributes from the parent table that are not specified in the local secondary index c auses additional latency in query responses and incurs a higher provisioned throughput cost Tip: Project frequently accessed non key attributes into a local secondary index to avoid fetches and improve query performance Maintain multiple local secondary indexes in tables that are updated infrequently but are queried using many different criteria to improve query performance This 
guidance does not apply to tables that experience heavy write activity If very high write activity to the table is e xpected one option to consider is to minimize interference from reads by not reading from the table at all Instead create a global secondary index with a structure that is identical to that of the table and then direct all queries to the index rather t han to the table Global Secondary Index Considerations If a query exceeds the provisioned read capacity of a global secondary index that request will be throttled Similarly if a request performs heavy write activity on the table but a global secondar y index on that table has insufficient write capacity then the write activity on the table will be throttled Tip: For a table write to succeed the provisioned throughput settings for the table and global secondary indexes must have enough write capacity to accommodate the write; otherwise the write will be throttled Global secondary indexes support eventually consistent reads each of which consume one half of a read capacity unit The number of read capacity units is the sum of all projected attribut e sizes across all of the items returned in the index query results With write activities the total provisioned throughput cost for a write consists of the sum of write capacity units consumed by writing to the table and those consumed by updating the g lobal secondary indexes Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 33 Apache HBase Performance Considerations Apache HBase performance tuning spans hardware network Apache HBase configurations Hadoop configurations and the Java Virtual Machine Garbage Collection settings It also includes applyin g best practices when using the client API To optimize performance it is worthwhile to monitor Apache HBase workloads with tools such as Ganglia to identify performance problems early and apply recommended best practices based on observed performance met rics Memory Considerations Memory is the most restrictive element in Apache HBase Performance tuning techniques are focused on optimizing memory consumption From a schema design perspective it is important to bear in mind that every cell stores its val ue as fully qualified with its full row key column family column name and timestamp on disk If row and column names are long the cell value coordinates might become very large and take up more of the Apache HBase allotted memory This can cause severe performance implications especially if the dataset is large Tip: Keep the number of column families small to improve performance and reduce the costs associated with maintaining HFiles on disk Apache HBase Configurations Apache HBase supports built in mechanisms to handle region splits and compactions Split/compaction storms can occur when multiple regions grow at roughly the same rate and eventually split at about the same time This can cause a large spike in disk I/O because of the compactions nee ded to rewrite the split regions Tip: Rather than relying on Apache HBase to automatically split and compact the growing regions you can perform these tasks manually If you handle the splits and compactions manually you can perform them in a time controlled manner and stagger them across all regions to spread the I/O load as much as possible to avoid potential split/compaction storms With the manual option you can further alleviate any problematic split/compaction storms and gain additional performance Amazon Web Services Comparing the Use of Amazon DynamoDB 
and Apache HBase for NoSQL Page 34 Schema Design A region can run hot when dealing with a write pattern that does not distribute the load across all servers evenly This is a common scenario when dealing with streams processing events with time series data The gradually increa sing nature of time series data can cause all incoming data to be written to the same region This concentrated write activity on a single server can slow down the overall performance of the cluster This is because inserting data is now bound to the perfo rmance of a single machine This problem is easily overcome by employing key design strategies such as the following • Applying salting prefixes to keys; in other words prepending a random number to a row • Randomizing the key with a hash function • Promotin g another field to prefix the row key These techniques can achieve a more evenly distributed load across all servers Client API Considerations There are a number of optimizations to take into consideration when reading or writing data from a client using the Apache HBase API For example when performing a large number of PUT operations you can disable the auto flush feature Otherwise the PUT operations will be sent one at a time to the region server Whenever you use a scan operation to process large numbers of rows use filters to limit the scan scope Using filters can potentially improve performance This is because column over selection can incur a nontrivial performance penalty especially over large data sets Tip: As a recommended best practice set the scanner caching to a value greater than the default of 1 especially if Apache HBase serves as an input source for a MapReduce job Setting the scanner caching value to 500 for example will transfer 500 rows at a time to the client to be proces sed but this might potentially cost more in memory consumption Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 35 Compression Techniques Data compression is an important consideration in Apache HBase production workloads Apache HBase natively supports a number of compression algorithms that you can enab le at the column family level Tip: Enabling compression yields better performance In general compute resources for performing compression and decompression tasks are typically less than the overheard for reading more data from disk Apache HBase on Amazon EMR (HDFS Mode) Apache HBase on Amazon EMR is optimized to run on AWS with minim al administration overhead You still can access the underlying infrastructure and manually configure Ap ache HBase settings if desired Cluster Considerations You can resize an Amazon EMR cluster using core and task nodes You can add more core nodes if desired Task nodes are useful for managing the Amazon EC2 instance capacity of a cluster You can increase capacity to handle peak loads and decrease it later during demand lulls Tip: As a recommended best practice in production workloads you can launch Apache HBase on one cluster and any analysis tools such as Apache Hive on a separate cluster to improve performance Managing two separate clusters ensures that Apache HBase has ready access to the infrastructure resources it requires Amazon EMR provi des a feature to backup Apache HBase data to Amazon S3 You can perform either manual or automated backups with options to perform full or incremental backups as needed Tip: As a best practice every production cluster should always take advantage of the backup feature available on Amazon EMR Amazon Web Services 
Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 36 Hadoop and Apache HBase Configurations You can use a bootstrap action to install additional software or change Apache HB ase or Apache Hadoop configuration settings on Amazon EMR Bootstrap actions are scripts that are run on the cluster nodes when Amazon EMR launches the cluster The scripts run before Hadoop starts and before the node begins processing data You can write custom bootstrap actions or use predefined bootstrap actions provided by Amazon EMR For example you can install Ganglia to monitor Apache HBase performance metrics using a predefined bootstrap action on Amazon EMR Apache HBase on Amazon EMR (Amazo n S3 Storage Mode) When you run Apache HBase on Amazon EMR with Amazon S3 storage mode enabled keep in recommended best practices discussed in this section Read Performance Considerations With Amazon S3 storage mode enabled Apache HBase r egion servers us e MemStore to store data writes in memory and use write ahead logs to store data writes in HDFS before the data is written to HBase StoreFiles in Amazon S3 Reading record s directly from th e StoreFile in Amazon S3 results in significantly higher latency a nd higher standard deviation than reading from HDFS Amazon S3 scales to support very high request rates If your request rate grows steadily Amazon S3 automatically partitions your buckets as needed to support higher request rates However the maximum request rates for Amazon S3 are lower than what can be achieved from the local cache For more information about Amazon S3 performance see Performance Optimization For read heavy workloads caching data inmemory or on disk caches in Amazon EC2 instance storage is recommended Because Apache HBase region servers use BlockCache to store data reads in memory and BucketCache to store data reads on EC2 instance storage you can choose an EC2 instance type with sufficient instance store In addition you can add Amazon Elastic Block Store (Amazon EBS) storage to accommodate your required cache size You can increase the BucketCache size on attached instance stores and EBS volumes using the hbasebucketcachesize property Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 37 Write Performance Considerations As discussed in the preceding section t he frequency of MemStore flushes and the number of StoreFiles present during minor and major compactions can contribute significantly to an increase in region server response times and consequently impact write per formance C onsider increasing the size of the MemStore flush and HRegion block multiplier which increases the elapsed time between major compactions for optimal write performance Apache HBase compactions and region servers perform optimally when fewer StoreFiles need to be compacted You may get better performance using larger file block sizes (but less than 5 GB) to trigger Amazon S3 multipart upload functionality in EMRFS In summary whether you are running a managed NoSQL database such as Amazon DynamoDB or Apache HBase on Amazon EMR or manag ing your Apache HBase cluster yourself on Amazon EC2 or on premises you should take performance optimizations into consideration if you want to maximize performance at reduced costs The key difference between a hosted NoSQL solution and managing it yours elf is that a managed solution such as Amazon DynamoDB or Apache HBase on Amazon EMR lets you offload the bulk of the administration overhead so that you can focus on optimizing your application 
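To make the EMR-side tuning handles mentioned in this section concrete, the following sketch launches an HBase-on-EMR cluster with boto3 and passes a few of the configuration properties discussed above (Amazon S3 storage mode, BucketCache size, and MemStore flush size). The release label, instance types, bucket name, and property values are illustrative assumptions; the property names follow the Apache HBase and Amazon EMR documentation, and you should size them for your own workload.

import boto3

emr = boto3.client("emr")

response = emr.run_job_flow(
    Name="hbase-on-s3-example",
    ReleaseLabel="emr-5.30.0",          # assumed release label; use a current one
    Applications=[{"Name": "HBase"}],
    Configurations=[
        # Store the HBase root directory in Amazon S3 (S3 storage mode).
        {"Classification": "hbase",
         "Properties": {"hbase.emr.storageMode": "s3"}},
        {"Classification": "hbase-site",
         "Properties": {
             "hbase.rootdir": "s3://my-example-bucket/hbase",
             # Larger on-disk read cache for read-heavy workloads (MB).
             "hbase.bucketcache.size": "8192",
             # Less frequent MemStore flushes for write-heavy workloads (bytes).
             "hbase.hregion.memstore.flush.size": "268435456",
         }},
    ],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
cluster_id = response["JobFlowId"]

A BootstrapActions parameter could be added to the same call to run the kinds of custom setup scripts described above.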
If you are a developer who is getting started with NoSQL, Amazon DynamoDB or the hosted Apache HBase on Amazon EMR solution are suitable options, depending on your use case. For developers with in-depth Apache Hadoop/Apache HBase knowledge who need full control of their Apache HBase clusters, the self-managed Apache HBase deployment model offers the most flexibility from a cluster management standpoint.

Conclusion

Amazon DynamoDB lets you offload operating and scaling a highly available, distributed database cluster, making it a suitable choice for today's real-time, web-based applications. As a managed service, Apache HBase on Amazon EMR is optimized to run on AWS with minimal administration overhead. For advanced users who want to retain full control of their Apache HBase clusters, the self-managed Apache HBase deployment model is a good fit.

Amazon DynamoDB and Apache HBase exhibit inherent characteristics that are critical for successfully processing massive amounts of data. With use cases ranging from batch-oriented processing to real-time data serving, Amazon DynamoDB and Apache HBase are both optimized to handle large datasets. However, knowing your dataset and access patterns is key to choosing the right NoSQL database for your workload.

Contributors

Contributors to this document include:
• Wangechi Doble, Principal Solutions Architect, Amazon Web Services
• Ruchika Abbi, Solutions Architect, Amazon Web Services

Further Reading

For additional information, see:
• Amazon DynamoDB Developer Guide
• Amazon EC2 User Guide
• Amazon EMR Management Guide
• Amazon EMR Migration Guide
• Amazon S3 Developer Guide
• HBase: The Definitive Guide by Lars George
• The Apache HBase™ Reference Guide
• Dynamo: Amazon's Highly Available Key-value Store

Document Revisions

Date — Description
January 2020 — Amazon DynamoDB foundational features and transaction model updates
November 2018 — Amazon DynamoDB, Apache HBase on EMR, and template updates
September 2014 — First publication
General
SoftNAS_Architecture_on_AWS
ArchivedSoftNAS Architecture on AWS April 201 7 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers SoftNAS and the SoftNAS logo are trademarks or registered trademarks of SoftNAS Inc All rights reserved ArchivedContents Introduction 1 About SoftNAS Cloud 1 Architecture Considerations 1 Application and Data Security 1 Performance 3 Using Amazon S3 with SoftNAS Cloud 9 Network Security 10 Data Protection Considerations 13 SoftNAS Cloud is Copy OnWrite (COW) File System 14 Automatic Error Detection and Correction 14 SoftNAS Cloud Snapshots 15 SoftNAS SnapClones™ 16 Amazon EBS Snapshots 17 Deployment Scenarios 17 HighAvailability Architecture 17 Single Controller Architecture 20 Hybrid Cloud Architecture 21 Automation Options 23 Conclusion 25 Contributors 25 Further Reading 26 SoftNAS References 26 Amazon Web Services References 26 ArchivedAbstract Network Attached Storage (NAS) software is commonly deployed to provide shared file services data protection and high availability to users and applications SoftNAS Cloud a popular NAS solution that can be deployed from the Amazon Web Services (AWS) Marketplace is designed to support a variety of market verticals use cases and workload types Increasingly SoftNAS Cloud is deployed on the AWS platform to enable block and file storage services through Common Internet File System (CIFS) Network File System (NFS) Apple File Protocol (AFP) and Internet Small Computer System Interface (iSCSI) This paper addresses architectural considerations when deploying SoftNAS Cloud on AWS It also provides best practice guidance for security performance high availability and backup ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 1 Introduction Network Attached Storage (NAS) systems enable data and file sharing and are used for businesscritical applications and data management NAS syste ms are optimized to balance performance interoperability data reliability and recoverability Although widely deployed by IT in traditional data center environments NAS software is increasingly used on AWS a flexible cost effective easy touse cloudcomputing platform Deploying NAS on Amazon Elastic Compute Cloud (Amazon EC2) provides a solution for applications that require the benefits of NAS storage in a software form factor1 About SoftNAS Cloud SoftNAS Cloud is a softwaredefined NAS filer delivered as a virtual appliance running within Amazon EC2 SoftNAS Cloud provides NAS capabilities suitable for the enterprise including MultiAvailability Zone (Multi AZ) high availability with automatic failover in the AWS Cloud 
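As a concrete illustration of the IAM role integration mentioned above (and described in more detail in the next paragraphs), the following boto3 sketch creates a role and instance profile that an EC2-hosted NAS instance could assume to reach a single S3 bucket without stored access keys. The role name, bucket name, and policy scope are hypothetical examples, not SoftNAS requirements:

```python
import json

import boto3

iam = boto3.client("iam")

ROLE_NAME = "softnas-s3-disk-role"            # hypothetical name
BUCKET = "example-softnas-clouddisk-bucket"   # hypothetical bucket

# Allow EC2 instances to assume the role.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
    Description="Lets the NAS instance reach its S3 cloud disk bucket without stored keys",
)

# Least-privilege inline policy limited to the single bucket used as a cloud disk.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:ListBucket"],
         "Resource": f"arn:aws:s3:::{BUCKET}"},
        {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
         "Resource": f"arn:aws:s3:::{BUCKET}/*"},
    ],
}
iam.put_role_policy(RoleName=ROLE_NAME, PolicyName="clouddisk-bucket-access",
                    PolicyDocument=json.dumps(bucket_policy))

# The instance profile is what actually gets attached to the EC2 instance at launch.
iam.create_instance_profile(InstanceProfileName=ROLE_NAME)
iam.add_role_to_instance_profile(InstanceProfileName=ROLE_NAME, RoleName=ROLE_NAME)
```

Because the role's temporary credentials are rotated automatically and delivered through instance metadata, no long-lived access keys need to be stored on the appliance.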
SoftNAS Cloud which runs within the customer’s AWS account offers businesscritical data protection required for nonstop operation of applications websites and IT infrastructure on AWS This paper doesn’ t cover all SoftNAS Cloud features For more information see wwwsoftnascom 2 Architecture Considerations This section provides information critical to a successful SoftNAS Cloud installation This information includes application an d data security performance interaction with Amazon Simple Storage Service (Amazon S3) 3 and network security Application and Data Security Security and protection of customer data are the highest priorities when working with SoftNAS Cloud on AWS When you use SoftNAS Cloud in conjunction with AWS security features such as Amazon Virtual Private Cloud (Amazon VPC) 4 Amazon VPC Security Groups and AWS Identity and Access Management (IAM) roles you can deploy a secure data storage solution ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 2 SoftNAS Cloud uses the CentOS Linux distribution which is managed updated and patched as part of a normal release cycle You can use SoftNAS StorageCenter ™ the webaccessible SoftNAS Cloud administration console to check the current software revision and apply available updates For security and compliance reasons the SoftNAS technical support team should approve any custom package before it is installed on a SoftNAS Cloud instance Webbased administration through SoftNAS StorageCenter is SSLencrypted and passwordprotected by default Optional twofactor authentication is also available for use You can administer SoftNAS Cloud through SSH and a secure REST API On AWS all SSH sessions use public/private key access control Logging in as root is prohibited Administrative access through the API and command line interface (CLI) over SSH are SSLencrypted and authenticate d Iptables a commonly used software firewall is included with SoftNAS Cloud and can be customized to accommodate more restrictive and finergrained security controls Data access is performed across a private network by Network File System (NFS) Common Internet File System (CIFS) Apple File Protocol (AFP) and Internet Small Computer System Interface (iSCSI) You can also restrict the list or range of client addresses allowed to perform data access SoftNAS Cloud offers encryption options for data security – both in flight and at rest If NFS is used all Linux authentication schemes are available including Network Information Service (NIS) Lightweight Directory Access Protocol (LDAP) Kerberos and restrictions based on the user ID (UID) and group ID (GID) Using CIFS you manage security through SoftNAS StorageCenter facilitating basic Windows user and group permissions Active Directory integration is supported for more advanced user and permissions management in Windows environments The SnapReplicate ™ feature provides blocklevel replication between two SoftNAS Cloud instances SnapReplicate between source and target SoftNAS Cloud instances sends all data through encrypted SSH tunnels and authenticates using RSA (public key infrastructure PKI ) Data is encrypted in transit using industrystandard ciphers The default cipher for encryption is BlowfishCBC selected for its balance of speed and security However you can use any cipher supported by SSH including AES256bitCBC ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 3 SoftNAS Clo ud uses the IAM service to control the SoftNAS Cloud appliance’s access to other AWS services5 IAM roles are designed to allow 
applications to securely make API calls from an instance without requiring the explicit management and storage of access keys When an IAM role is applied to an EC2 instance the role handles key management rotating keys periodically and making them available to applications through Amazon EC2 metadata Performance The performance of a NAS system on Amazon EC2 depends on many factors including the Amazon EC2 instance type the number and configuration of Amazon Elastic Block Store (Amazon EBS) volumes6 the type of Amazon EBS volume the use of Provisioned IOPS with Amazon EBS and the application workload Benchmark your application on several Amazon EC2 instance types and storage configurations to select the most appropriate configuration SoftNAS Cloud provides Amazon Machine Images (AMIs ) that support both paravirtual (PV) and hardware virtual machine (HVM) virtualization To take advantage of special hardware extensions (CPU network and storage) and for optimal performance SoftNAS recommends that you use a current generation instance type and an HVM AMI with single root input/output virtualization (SRIOV) support To increase the performance of your system you need to know which of the server’s resources is the performance constraint If CPU or memory limits your system performance you can scale up the memory compute and network resources available to the software by choosing a larger Amazon EC2 instance type Use StorageCenter dashboard performance charts and Amazon CloudWatch to monitor your performance and throughput metrics7 Depending on the instance type and size chosen EC2 instances are allocated varying amounts of CPU memory and network capabilities Some instance families have higher ratios of CPU to memory or higher ratios of memory to CPU In general to achieve the best performance from your SoftNAS Cloud virtual appliance select an instance with large amounts of memory up to 70 percent of which will be dedicated to highspeed dynamic randomaccess memory (DRAM ) cache If you require more than 120 MB/s NAS throughput for more demanding use cases select an instance with advanced networking AWS provides instances that support 10 and 20 Gbps network interfaces If available ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 4 choose an EBSoptimized instance which uses a dedicated network path to EBS storage For production workloads SoftNAS recommends starting with a larger EC2 instance size coupled with monitoring of CloudWatch metrics as workloads are increased to their typical levels This ensures applications have sufficient IOPS and throughput as they’re brought online Continue monitoring the application using SoftNAS StorageCenter and CloudWatch metrics in particular CPU and network usage to determine how well the chosen instance size is serving your unique workloads After a period of time (eg 30 days) with your workload in production it will become apparent if the instance is well matched to the production workloads As your load increases if CPU or network usage reaches 75 percent or higher you might need to increase instance si ze to achieve full throughput at low latencies If CPU and network usage are below 40 to 50 percent you can consider decreasing the instance size during a maintenance window to reduce operating costs SoftNAS does not recommend using T1 or T2 instances as they are designed for burstable workloads and can run out of CPU credits during sustained heavy usage At the time of this writing SoftNAS recommends the m42xlarge as a minimum default AWS instance size 
the m44xlarge for medium workloads and the m410xlarge for heavier workloads as seen in Figure 1 below A SoftNAS representative can help with further sizing guidance About RAM Usage SoftNAS Cloud allocates 50 percent of available RAM for use as Zettabyte File System (ZFS) file system cache Remaining RAM is used by the Linux operating system SoftNAS Cloud processes and NAS services It’s typical to see 80 to 90 percent of RAM allocated and in use ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 5 Later instance families also supported Figure 1: AWS instance to workload If your performance is limited by disk I/O you can make configuration changes to improve the performance of your disk and caching resources Multilevel Cache Readintensive workloads benefit from additional RAM as level 1 cache (ZFS ARC) plus level 2 cache (ZFS L2ARC) Leverage the ephemeral SSD disks attached to certain EC2 instances to provide additional highspeed read cache Because data on ephemeral disks can be lost whenever an EC2 instance stops and restarts or if underlying hardware changes or fails use ephemeral disks only for read cache purposes and never as a write log Amazon EBS Performance Optimizations Because Amazon EBS is connected to an EC2 instance over the network instances with higher network bandwidth can provide more Amazon EBS ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 6 performance Some instance types support the Amazon EBSoptimized flag (ec2:EbsOptimized) This flag provides a dedicated network interface for Amazon EBS bound traffic (storage I/O) and is designed to reduce variability in storage performance due to contention with network I/O The chart here provides an outline of expected bandwidth throughput and Max IOPS per instance type and size8 For SSD based volume types Amazon EBS measures an I/O operation as one that is 256 KB or smaller I/O operations larger than 256 KB are counted in 256 KB increments For example a 1024 KB I/O would count as four 256 KB IOPs Amazon EBS also combines smaller I /O operations into a single operation where possible to achieve higher performance for all volume types Benefits of Each EBS Volume Type and Relevant Storage Application Magnetic Backed Magneticbacked volume types support higher block sizes up to 1024 KB Throughput Optimized HDD (st1) and Cold HDD (sc1) Amazon EBS volume types are based on magnetic storage technology The Throughput Optimized HDD (st1) volume type is designed for sequential read/write workloads (eg big data) It can achieve very hi gh throughput (500 MB/s) for sequential read/write workloads (compared to 160 MB/s and 320 MB/s for SSDbacked gp2 and io1 respectively) Generally big data workloads operate on very large sequential datasets and generate data for storage in a similar way The st1 volume type has a baseline performance of 40 MB/s per terabyte (TB) of allocated storage and like gp2 can burst beyond the baseline performance for a short period of time The Cold HDD (sc1) volume type is designed for high density and infrequent access workloads This volume type is suitable for cold storage (infrequent access) applications where low cost is important Unlike st1 the baseline performance of an sc1 volume is 12 MB/s per TB of allocated storage It ’s important to note that Amazon S3 achieves high availability ( HA) by default within a single region whereas sc1 volumes have to be mirrored across Availability Zones to achieve parity with Amazon S3 in durability and availability of the data (This doubles and triples the 
cost of sc1 when compared to Amazon S3) Nevertheless depending on certain access patterns (eg cold ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 7 versus warm) of the data the cost of sc1 volumes can be cheaper for certain workloads SSD Backed General Purpose (gp2) and Provisioned IOPS (io1) SSD volumes can achieve faster IOPS performance and very high throughput on random read/write workloads when compared to magnetic disks but at a higher price point However gp2 and io1 volume types are limited to a throughput of  320 MB/s (160 MB/s for gp2 320 MB/s for io1) General Purpose (gp2) volumes provide a fixed 1:3 ratio between gigabytes and IOPS provisioned so a 100 GB General Purpose volume provides a baseline of 300 IOPS Gp2 volumes less than 1 TB in size can also burst for short periods up to 3000 IOPS You can provision General Purpose volumes up to 16 TB and 10000 IOPS Provisioned IOPS (io1) volumes are intended for workloads that demand consistent performance such as databases You can create Provisioned IOPS volumes up to 16 TB and 20000 IOPS Over a year Amazon EBS Provisioned IOPS volumes are designed to deliver within 10 percent of the Provisioned IOPS performance 999 percent of the time There are differences in total throughput capabilities between Provisioned IOPS (io1) and General Purpose SSD (gp2) volumes Io1 volumes are designed to provide up to 320 MB/second of throughput while gp2 volumes are designed to provide up to 160 MB/second RAID If you need more I/O capabilities than a single volume can provide you can create an array of volumes with redundant array of independent disks (RAID ) software to aggregate the performance capabilities of each volume in the array For example a stripe of two 4000 IOPS volumes allows for a theoretical maximum of 8000 IOPS RAID 0 and RAID 10 are the two RAID levels recommended for use with Amazon EBS RAID 0 or striping has the advantage of providing a linear performance increase with every volume added to the array (up to the maximum capabilities of the host instance) Two 4000 IOPS volumes provide 8000 IOPS three ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 8 4000 IOPS volumes provide 12000 IOPS and so on However because RAID 0 does not provide redundancy it has less durability than a single volume It also aggregates the failure rate of each volume in the array RAID 10 is a good compromise because it provides increased redundancy aggregates the read performance of all volumes in the array and provides a mirror of stripes in the array However RAID 10 isn’t without drawbacks There is a 50 percent penalty to write performance and a 50 percent reduction in available storage capacity This penalty is due to half of the disks in the array being reserved for a mirror RAID 10 has the same write penalty as RAID 1 RAID 5 and 6 are not recommended because parity calculations incur significant overhead without dramatically increasing the durability of the volume set Such a large write penalty makes these RAID levels very expensive to run in terms of both dollars and I/O In general RAID using mirroring or parity for increased durability adds extra steps and reduces performance while not necessarily increasing the data’s durability Amazon EBS has its own durability mechanisms It can be supplemented with Amazon S3backed snapshots and SoftNAS replication to more than one Availability Zone DRAM cache can dramatically increase read IOPS performance Choose instances with more memory for the best read IOPS and throughput For an even 
larger read cache choose instance types with ephemeral SSD locally attached disks and attach an SSD cache device to each storage pool To ensure their availability attach local SSD ephemeral disks to the SoftNAS instance when you create the instance Many instance types provide instance store or “ephemeral” volumes Although SoftNAS doesn ’t support the use of these volumes for dataset storage you can use them as a read cache for storage pools These volumes are located physically inside the underlying host of the instance and are not affected by performance variability from network overhead Although this varies by instance type most instancestore volumes (especially on newer instance types) are SSD volumes However stopping and starting an instance can move it to another underlying host which causes all data on these volumes to be lost This isn’t an issue for caching but is detrimental for dataset storage ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 9 If you require additional write caching or IOPS you can attach SSD backed Amazon EBS volumes to a storage pool The use of locally attached ephemeral disks for write cache isn ’t recommended Consider your workload requirements and priorities If the amount of storage and cost take priority over speed magnetic EBS volumes might be the right choice General Purpose SSD or Provisioned IOPS volumes offer the best mix of price performance and total storage space With AWS and SoftNAS Cloud you can add more storage or configure a different type of storage on the fly to address a variety of price or performance needs Using Amazon S3 with SoftNAS Cloud SoftNAS Cloud provides support for a feature known as SoftNAS S3 Cloud Disks These are abstractions of Amazon S3 storage presented as block devices By leveraging Amazon S3 storage SoftNAS Cloud can scale cloud storage to practically unlimited capacity You can provision each cloud disk to hold up to four petabytes (PB) of data If a larger data store is required you can use RAID to aggregate multiple cloud disks Each SoftNAS S3 Cloud Disk occupies a single Amazon S3 bucket in AWS The administrator chooses the AWS Region in which to create the S3 bucket and cloud disk For best performance choose the same r egion for both the SoftNAS Cloud EC2 instance and its S3 buckets SoftNAS Cloud storage pools and volumes using cloud disks have the full enterprisegrade NAS features (for example deduplication compression caching storage snapshots and so on) available and can be readily published for shared access through NFS CIFS AFP and iSCSI When you use a cloud disk use a block device local to the SoftNAS Cloud virtual appliance as a read cache to reduce Amazon S3 I/O charges and improve IOPS and performance for readintensive workloads For best S3 cloud disk performan ce and security specify an S3 endpoint within the VPC in which you deploy SoftNAS Cloud The S3 endpoint ensures S3 traffic is optimally routed through the VPC and not across the NAT gateway or Internet which is slower and less secure ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 10 You can also encrypt S3 cloud disks to protect all Amazon S3 I/O should it need to travel over the Internet or outside a VPC (eg from on premises or across regions ) Network Security Amazon VPC is a logically separated section of the AWS Cloud that provides you with com plete control over the networking configuration This includes the provisioning of an IP space subnet size and scope access control lists and route tables You can configure subnets 
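A gateway VPC endpoint of the kind recommended above can be created with a single API call. In this boto3 sketch the VPC ID, route table ID, and Region are placeholders for the values used by your SoftNAS Cloud deployment:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"          # placeholder
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # placeholder: route table of the NAS subnet

# A gateway endpoint keeps S3 cloud disk traffic on the AWS network, inside the
# VPC's routing domain, instead of traversing a NAT gateway or the Internet.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[ROUTE_TABLE_ID],
)
print("Created endpoint:", response["VpcEndpoint"]["VpcEndpointId"])
```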
inside an Amazon VPC as either public or private The difference between public and private subnets is that a public subnet has a direct route to the Internet; a private one does not When you configure an Amazon VPC for use with SoftNAS Cloud consider the level of access that your use case requires If the SoftNAS Cloud vir tual appliance does n’t need to be accessed from the Internet consider placing it in private Amazon VPC subnets To leverage SoftNAS S3 Cloud Disks the SoftNAS Cloud virtual appliance must have a way to access the S3 bucket either through the Internet or a configured VPC endpoint A VPC Security Group acts as a virtual firewall for your instance to control inbound and outbound traffic For each Security Group you add rules that control the inbound traffic to instances and a separate set of rules that control the outbound traffic Open only those ports that are required for the operation of your application Restrict access to specific remote subnets or hosts For a SoftNAS Cloud installation determine which ports must be opened to allow access to required services These ports can be divided in to three categories: management file services and high availability Open the following ports to manage SoftNAS Cloud through the SoftNAS StorageCenter and SSH As the following table indicates you should limit the source to hosts and subnets where management clients are located ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 11 Management Type Protocol Port Source SSH TCP 22 Management HTTPS TCP 443 Management When providing file services first determine which services you will provide The following tables show which ports to open for security group configuration As the tables indicate the source should be limited to the clients and subnets that consume these services AFP Type Protoco l Port Source Custom TCP Rule TCP 548 Clients Custom TCP Rule TCP 427 Clients NFS Type Protocol Port Source Custom TCP Rule TCP 111 Clients Custom TCP Rule TCP 2010 Clients Custom TCP Rule TCP 2011 Clients Custom TCP Rule TCP 2013 Clients Custom TCP Rule TCP 2014 Clients Custom TCP Rule TCP 2049 Clients Custom UDP Rule UDP 111 Clients ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 12 Custom UDP Rule UDP 2010 Clients Custom UDP Rule UDP 2011 Clients Custom UDP Rule UDP 2013 Clients Custom UDP Rule UDP 2014 Clients Custom UDP Rule UDP 2049 Clients CIFS/SMB Type Protocol Port Source Custom TCP Rule TCP 137 Clients Custom TCP Rule TCP 138 Clients Custom TCP Rule TCP 139 Clients Custom UDP Rule UDP 137 Clients Custom UDP Rule UDP 138 Clients Custom UDP Rule UDP 139 Clients Custom TCP Rule TCP 445 Clients Custom TCP Rule TCP 135 Clients Active Directory Integration Type Protocol Port Source LDAP TCP 389 Clients ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 13 iSCSI Type Protocol Port Source Custom TCP Rule TCP 3260 Client IPs The following security group configuration is required when you deploy SoftNAS SNAP HA which is discussed later in this whitepaper As the table indicates you should limit the source to the IP addresses of the SoftNAS Cloud virtual appliance High Availability with SNAP HA™ Type Protocol Port Source Custom ICMP Rule Echo Reply 22 SoftNAS Cloud IPs or Security Group ID* Custom ICMP Rule Echo Request 443 SoftNAS Cloud IPs or Security Group ID* * http://docsawsamazoncom/AWSEC2/latest/UserGuide/usingnetwork securityhtml Data Protection Considerations Creating a comprehensive strategy for backing up and restoring data is complex In some industries you 
must consider regulatory requirements for data security privacy and records retention SoftNAS Cloud provides multiple capabilities for data redundancy Always have one or more independent data backups beyond the data redundancy provided by SoftNAS Cloud You can back up data disks using EBS snapshots and thirdparty backup tools to create offsite or other backup copies to protect data SoftNAS Cloud provides multiple levels of data protection and redundancy but it isn’t intended to replace normal data backup processes Instead these levels of redundancy and data protection reduce risks associated with data loss or data ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 14 integrity and provide features that enable rapid recovery often without the need to restore from a backup copy SoftNAS Cloud is CopyOn Write (COW ) File Syst em SoftNAS Cloud leverages the reliable mature ZFS ZFS is a copy onwrite file System which means that existing data is never directly overwritten Instead new data blocks are appended to each file conceptually similar to a tape Figure 2 depicts how the file System inside SoftNAS Cloud maintains multiple versions known as storage snapshots without overwriting the existing data Figure 2: Copy onwrite file system Automatic Error Detection and Correction SoftNAS Cloud automatically detects and corrects unforeseeable data errors These errors can occur over time for many different reasons including bad sectors network or other I/O errors SoftNAS Cloud also provides protection against potential “bit rot” disk media deterioration over time caused by magnetism decay cosmic ray effects and other sporadic issues that can cause data storage or retrieval errors Proven ZFS data integrity measures are implemented by SoftNAS Cloud to detect errors repair them automatically and ensure data integrity is maintained Each read is validated against a 256bit checksum code When ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 15 errors are detected the system automatically repairs the block with the corrected data transparently so applications aren’t affected and data integrity is maintained Periodically administrators can “ scrub ” storage pools to provide even higher levels of data integrity SoftNAS Cloud Snapshots SoftNAS Cloud snapshots are volumebased point intime copies of data StorageCenter provides a rich set of snapshot scheduling and ondemand capabilities As Figure 3 shows snapshots provide file system versioning Figure 3: SoftNAS Cloud volumebased snapshots SoftNAS Cloud snapshots are integrated with Windows Previous Versions which is provided through the Microsoft Volume Shadow Copy Service (VSS ) API This feature is accessible to Windows operating system users through the Previous Versions tab so IT administrators don’t need to assist in file recovery Microsoft server and desktop operating system users can use scheduled snapshots to recover deleted files view or restore a version of a file that was overwritten and compare file versions side by side Operating systems that are supported include Windows 7 Windows 8 Windows Server 2008 and Windows Server 2012 Snapshots consume storage pool capacity so you must monitor usage for over consumption Storage snapshots grow incrementally as file system data is modified over a period of time SoftNAS Cloud automatically manages snapshots based on snapshot policies to prevent snapshots from consuming all available space Several snapshot policies are provided as a starting point and you can also create custom snapshot 
policies Snapshot policies are independent ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 16 of each volume so when a snapshot policy is changed it’s applied across all volumes that reference that policy When allocating storage pool space and choosing snapshot policies be sure to plan for enough additional storage to hold the snapshot data for the retention period you require SoftNAS SnapClones™ SnapClones provide read/write clones of SoftNAS Cloud snapshots They’re created instantaneously because of the spaceefficient copy onwrite model Initially SnapClones take up no capacity and grow only when writes are made to the SnapClone as shown in Figure 4 You can create any number of SnapClones from a storage snapshot Figure 4: SoftNAS SnapClones You can mount SnapClones as external NFS or CIFS shares They’re good for manipulating copies of data that are too large or complex to be practically copied For example testing new application versions against real data and selective recovery of files and folders using the native file browsers of the client operating system You can create a SnapClone instantly even for very large datasets in the tens to hundreds of TBs ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 17 Amazon EBS Snapshots SoftNAS Cloud has a builtin capability to leverage Amazon EBS point intime snapshots to back up EBS based storage pools The Amazon EBS snapshot copies the entire SoftNAS Cloud storage pool for backup and recovery purposes Advantages include the ability to use the AWS Management Console to manage the snapshots Capacity for the Amazon EBS snapshots isn’t counted against the storage pool capacity You can use Amazon EBS snapshots for longerterm data retention Deployment Scenarios The design of your SoftNAS Cloud installation on Amazon EC2 depends on the amount of usable storage and your requirements for IOPS and availability HighAvailability Architecture To realize high availability for storage infrastructure on AWS SoftNAS strongly recommends implementing SNAP HA in a highavailability configuration The SNAP HA functionality in SoftNAS Cloud provides high availability automatic and seamless failover across Availability Zones SNAP HA leverag es secure blocklevel replication provided by SoftNAS SnapReplicate to provide a secondary copy of data to a controller in another Availability Zone SNAP HA also provides both automatic and manual failover High availability and crosszone replication eliminates or minimizes downtime It is not however intended to replace regular data backups which are always required to fully protect important data especially in disaster recovery scenarios There are two methods for achieving high availability across zones: Elastic IP (EIP) addresses and SoftNAS Cloud Private Virtual IPbased HA The use of Private Virtual IPbased HA is recommended for best security performance and lowest cost All NAS traffic remains inside the VPC ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 18 Support for EIP is available for situations that require a “routable” IP address or the rare cases where data shares must be made available over the Internet Of course access via EIP addresses can be locked down using Security Groups Figure 5: Task creation and result aggregation MultiAZ HA operates within a VPC Optionally you can route NAS traffic through a floating EIP combined with SoftNAS patent ed9 HA technology That is NFS CIFS AFP and iSCSI traffic are routed to a primary SoftNAS controller in one Availability Zone and a 
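For orientation only, the sketch below shows the generic EC2 primitives that the two failover methods just introduced ultimately rely on: reassociating an Elastic IP, and repointing a route for a private virtual IP at the standby controller's network interface. SNAP HA performs its own patented failover orchestration, so this is not a description of its internals; all identifiers and the virtual IP address are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

EIP_ALLOCATION_ID = "eipalloc-0123456789abcdef0"   # placeholder
STANDBY_INSTANCE_ID = "i-0fedcba9876543210"        # placeholder
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"           # placeholder
STANDBY_ENI_ID = "eni-0123456789abcdef0"           # placeholder

# EIP-based failover: remap the Elastic IP to the standby controller so NAS
# clients keep using the same public address after a primary failure.
ec2.associate_address(
    AllocationId=EIP_ALLOCATION_ID,
    InstanceId=STANDBY_INSTANCE_ID,
    AllowReassociation=True,
)

# Private virtual IP failover: one common VPC pattern is to point a /32 route
# for the virtual IP (typically an address outside the VPC CIDR) at the
# standby instance's network interface.
ec2.replace_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="10.99.99.99/32",
    NetworkInterfaceId=STANDBY_ENI_ID,
)
```

In both cases NAS clients keep addressing the same IP, which is what allows NFS, CIFS, AFP, and iSCSI sessions to reconnect after a failover rather than requiring a restart.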
secondary controller operates in a different Availability Zone NAS clients can be located in any Availability Zone SnapReplicate performs block replication from the primary controller A to the backup controller B keeping the secondary updated with the latest changed data blocks once per minute In the event of a failure in Availability Zone 1 (shown in Figure 5) the Elastic HA ™ IP address automatically fails over to controller B in Availability Zone 2 in less than 30 seconds Upon failover all NFS CIFS AFP and iSCSI sessions reconnect with no impact on NAS clients (that is no stale file handles and no need to restart) ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 19 HA with Private Virtual IP Addresses The patent ed9 Virtual IPbased HA technology in SoftNAS Cloud enables you to deploy two SoftNAS Cloud instances across different Availability Zones inside the private subnet of a VPC Then you can configure the SoftNAS Cloud instances with private IP addresses which are completely isolated from the Internet This allows for more flexible deployment options and greater control over access to the appliance In addition using private IP addresses enables faster failover because waiting for an EIP to switch instances isn ’t required Further Virtual IP HA is less costly because there is no I/O flowing across an EIP Instead all traffic remains completely within the VPC For most use cases MultiAZ HA using private virtual IP addresses is the recommended method Failover usually takes place in 15 to 20 seconds from the time a failure is detected SoftNAS Cloud uses patent ed9 techniques that allow NAS clients to stay connected via NFS CIFS iSCSI and AFP in case of a failover ensuring that services are not interrupted and continue to operate without downtime ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 20 Figure 6: Crosszone HA with virtual private IP addresses For more information about implementation and HA design best practices see the SoftNAS High Availability Guide 10 Single Controller Architecture In scenarios where you don’t r equire high availability you can deploy a single controller Figure 7 shows a basic SoftNAS Cloud instance running within a VPC ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 21 Figure 7: Basic SoftNAS Cloud instance running within a VPC In these scenarios you can combine EBS volumes into a RAID 10 ar ray for the storage pool to provide usable storage space with no drive failure redundancy You can also configure storage pools using a SoftNAS S3 Cloud Disk for RAID 0 (striping) for improved performance and IOPS These examples are for illustration purposes only Typically RAID 0 is sufficient as the underlying EBS and S3 storage devices already provide redundancy Volumes are provisioned from the storage pools and then shared through NFS CIFS/SMB AFP or iSCSI Hybrid Cloud Architecture You can deploy SoftNAS Cloud in a Hybrid Cloud architecture in which a SoftNAS Cloud virtual appliance is installed both in Amazon EC2 and on premises This architecture enables replication of data from on premises to Amazon EC2 and vice versa providing synchronized data access to users and ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 22 applications Hybrid Cloud architectures are also useful for backup and disaster recovery scenarios in which AWS can be used as an offsite backup location Replication You can deploy SoftNAS Cloud in Amazon EC2 as a replication target using SnapReplicate This enables scenarios such as data replicas 
disaster recovery and development environments by copying onsite production data into Amazon EC2 as shown in Figure 8 Figure 8: Hybrid Cloud backup and disaster recovery File Gateway to Amazon S3 You can deploy SoftNAS Cloud in file gateway use cases where SoftNAS Cloud operates on premises deployed in local data centers on popular hypervisors such as VMware vSphere SoftNAS Cloud connects to Amazon S3 storage treating Amazon S3 as a disk device The Amazon S3 disk device is added to a storage pool where volumes can export CIFS NFS AFP and iSCSI Amazon S3 is cached with block disk devices for read and write I/O Write I/O is cached at primary storage speeds and then flushed to Amazon S3 at the speed of the WAN When using Amazon S3based volumes with backup software the write cache dramatically shortens the backup window ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 23 Figure 9: SoftNAS Cloud Automation Options This section describes how the SoftNAS Cloud REST API CLI and AWS CloudFormation can be used for automation API and CLI SoftNAS Cloud provides a secure REST API and CLI The REST API provides access to the same storage administration capabilities from any programming language using HTTPS and REST verb commands returning JSONformatted response strings The CLI provides command line access to the API set for quick and easy storage administration Both methods are available for programmatic storage administration by DevOps teams who want to design storage into automated processes For more information see the SoftNAS API and CLI Guide 11 AWS CloudFormation The AWS CloudFormation service enables developers and businesses to create a collection of related AWS resources and provision them in an orderly and predictable way12 SoftNAS Cloud provides sample CloudFormation templates that you can use for automation You can find these templates here and in the Further Reading section of this paper When you work with CloudFormation templates pay ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 24 careful attention to the Instance Type Mappings and User Data sections which are shown in the following examples List all the instance types that you want to appear Edit this section with the latest instance types available Map to the appropriate AMIs here (SoftNAS regularly updates AMIs so this section must be updated accordingly ) This section is used to pass variables to the SoftNAS Cloud CLI for additional configuration ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 25 Conclusion SoftNAS Cloud is a popular NAS option on the AWS Cloud computing platform By following the implementation considerations and best practices highlighted in this paper you will maximize the performance durability and security of your SoftNAS Cloud implementation on AWS For more information about SoftNAS Clo ud see wwwsoftnascom Get a free 30day trial of SoftNAS Cloud now13 Contributors The following individuals and organizations contributed to this document:  Eric Olson VP Development SoftNAS  Kevin Brown Solutions Architect SoftNAS ArchivedAmazon Web Services – SoftNAS Architecture on AWS Page 26  Brandon Chavis Solutions Architect Amazon Web Services  Juan Villa Solutions Architect Amazon Web Services  Ian Scofield Solutions Architect Amazon Web Services Further Reading SoftNAS References SoftNAS Cloud Installation Guide SoftNAS Reference Guide SoftNAS Cloud High Availability Guide SoftNAS Cloud API and Cloud Guide AWS CloudFormation Templates for HVM Amazon Web Services References 
• Amazon Elastic Block Store
• Amazon EC2 Instances
• AWS Security Best Practices
• Amazon Virtual Private Cloud Documentation
• Amazon EC2 SLA

Notes

1 http://aws.amazon.com/ec2/
2 http://www.softnas.com/
3 http://aws.amazon.com/s3/
4 http://aws.amazon.com/vpc/
5 http://aws.amazon.com/iam/
6 http://aws.amazon.com/ebs/
7 http://aws.amazon.com/cloudwatch/
8 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html#ebs-optimization-support
9 US Pat. Nos. 9378262; 9584363. Other patents pending.
10 https://www.softnas.com/docs/softnas/v3/snaphahtml/index.htm
11 https://www.softnas.com/docs/softnas/v3/apihtml/
12 http://aws.amazon.com/cloudformation/
13 http://softnas.com/trynow?utm_source=aws&utm_medium=whitepaper&utm_campaign=aws-wp2017
General
AWS_Cloud_Adoption_Framework_Security_Perspective
Archived AWS Cloud Adoption Framewo rk Security Perspective June 2016 This paper has been archived For the latest content about the AWS Cloud Adoption Framework see the AWS Cloud Adoption Framework page: https://awsamazoncom/professionalservices/CAFArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 2 of 34 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 3 of 34 Contents Abstract 4 Introduction 4 Security Benefits of AWS 6 Designed for Security 6 Highly Automated 6 Highly Available 7 Highly Accredited 7 Directive Component 8 Considerations 10 Preventive Component 11 Considerations 12 Detective Component 13 Considerations 14 Responsive Component 15 Considerations 16 Taking the Journey – Defining a Strategy 17 Considerations 19 Taking the Journey – Delivering a Program 20 The Core Five 21 Augmenting the Core 22 Example Sprint Series 25 Considerations 27 Taking the Journey – Develop Robust Security Operations 28 Conclusion 29 Appendix A: Tracking Progress Across the AWS CAF Security Perspective 30 ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 4 of 34 Key Security Enablers 30 Security Epics Progress Model 31 CAF Taxonomy and Terms 33 Notes 34 Abstract The Amazon Web Services (AWS) Cloud Adoption Framework1 (CAF) provides guidance for coordinating the different parts of organizations migrating to cloud computing The CAF guidance is broken into a number of areas of focus relevant to implementing cloudbased IT systems These focus areas are called perspectives and each perspective is further separated into components There is a whitepaper for each of the seven CAF perspectives This whitepaper covers the Security Perspective which focuses on incorporating guidance and process for your existing security controls specific to AWS usage in your environment Introduction Security at AWS is job zero All AWS customers benefit from a data center and network architecture built to satisfy the requirements of the most security sensitive organizations AWS and its partners offer hundreds of tools and features to help you meet your security objectives around visibility auditability controllability and agility This means that you can have the security you need but without the capital outlay and with much lower operational overhead Figure 1: AWS CAF Security Perspective ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 5 of 34 than in an onpremises environment The Security Perspective goal is to help you structure your selection and implementation of controls that are right for your organization As Figure 1 illustrates the components of the Security P erspective 
organize the principles that will help drive the transformation of your organization’s security culture For each component this whitepaper discusses specific actions you can take and the means of measuring progress :  Directive controls establish the governance risk and compliance models the environment will operate within  Preventive controls protect your workloads and mitigate threats and vulnerabilities  Detective controls provide full visibility and transparency over the operation of your deployments in AWS  Responsive controls drive remediation of potential deviations from your security baselines Security in the cloud is familiar The increase in agility and the ability to perform actions faster at a larger scale and at a lower cost does not invalidate well established principles of information security After covering the four Security Perspective components this whitepaper shows you the steps you can take to on your journey to the cloud to ensure that your environment maintains a strong security footing:  Defin e a strategy for security in the cloud When you start your journey look at your organization al business objectives approach to risk management and the level of opportunity presented by the cloud  Deliver a security program for development and implementation of security privacy compliance and risk management capabilities The scope can initially appear vast so it is important to create a structure that allows your organization to holistically address security in the cloud Th e implementation should allow for iterative development so that capabilit ies mature as programs develop This allows the security component to be a catalyst to the rest of th e organization’s cloud adoption efforts ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 6 of 34  Develop robust security operations capabilities that continuously mature and improve The security journey continues over time We recommend that you intertwine operational rigor with the building of new capabilities so the constant iteration can bring continuous improvement Security Benefits of AWS Cloud security at AWS is the highest priority As an AWS customer you will benefit from a data center and network architecture built to meet the requiremen ts of the most securitysensitive organizations An advantage of the AWS cloud is that it allows customers to scale and innovate while maintaining a secure environment Customers pay only for the services they use meaning that you can have the security you need but without the upfront expenses and at a lower cost than in an onpremises environment This section discusses some of the security benefits of the AWS platform Designed for Security The AWS Cloud infrastructure is operated in AWS data centers and is designed to satisfy the requirements of our most securitysensitive customers The AWS infrastructure has been designed to provide high availability while putting strong safeguards in place for customer privacy All data is stored in highly secure AWS data centers Network firewalls built into Amazon VPC and web application firewall capabilities in AWS WAF let you create private networks and control access to your instances and applications When you deploy systems in the AWS Cloud AWS helps by sharing the security responsibilities with you AWS engineers the underlying infrastructure using secure design principles and customers can implement their own security architecture for workloads deployed in AWS Highly Automated At AWS we purposebuild security tools and we tailor 
them for our unique environment size and global requirements Building security tools from the ground up allows AWS to automate many of the routine tasks security experts normally spend time on This means AWS security experts can spend more time ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 7 of 34 focusing on measures to increase the security of your AWS Cloud environment Customers also automate security engineering and operations functions using a comprehensive set of APIs and tools Identity management network security and data protection and monitoring capabilities can be fully automated and delivered using popular software development methods you already have in place Customers take an automated approach to responding to security issues When you automate using the AWS services rather than having people monitoring your security position and reacting to an event your system can monitor review and initiate a response Highly Available AWS builds its data centers in multiple geographic Regions Within the Regions multiple Availability Zones exist to provide resiliency AWS designs data centers with excess bandwidth so that if a major disruption occurs there is sufficient capacity to loadbalance traffic and route it to the remaining sites minimizing the impact on our customers Customers also leverage this MultiRegion Multi AZ strategy to build highly resilient applications at a disruptively low cost to easily replicate and back up data and to deploy global security controls consistently across their business Highly Accredited AWS environments are continuously audited with certifications from accreditation bodies across the globe This means that segments of your compliance have already been completed For more information about the security regulations and standards with which AWS complies see the AWS Cloud Compliance2 web page To help you meet specific government industry and company security standards and regulations AWS provides certification reports that describe how the AWS Cloud infrastructure meets the requirements of an extensive list of global security standards You can obtain available compliance reports by contacting your AWS account representative Customers inherit many controls operated by AWS into their own compliance and certification programs lowering the cost to maintain and run security assurance efforts in addition to actually maintaining the controls themselves With a strong foundation in place you are free to optimize the security of your workloads for agility resilience and scale ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 8 of 34 The rest of this whitepaper introduces each of the components of the Security Perspective You can use these components to explore the security goals you need to be successful on your journey to the cloud Directive Component The Directive component of the AWS Security Perspective provides guidance on planning your security approach as you migrate to AWS The key to effective planning is to define the guidance you will provide to the people implementing and operating your security environment The information needs to provide enough direction to determine the controls needed and how they should be operated Initial areas to consider include:  Account Governance — Direct the organization to create a process and procedures for managing AWS accounts Areas to define include how account inventories will be collected and maintained which agreements and amendments are in place and what criteria to use 
for when to create an AWS account Develop a process to create accounts in a consistent manner ensuring that all initial settings are appropriate and that clear ownership is established  Account Ownership and contact information —Establish an appropriate governance model of AWS accounts used across your organization and plan how contact information is maintained for each account Consider creating AWS accounts tied to email distribution lists rather than to an individual ’s email address This allows a group of people to monitor and respond to information from AWS about your account activity Additionally this provides resilience when internal personnel change and it provides a means of assigning security accountability List your security team as a security point of contact to speed timesensitive communications  Control framework —Establish or apply an industry standard control framework and determine if you need modifications or additions in order to incorporate AWS services at expected security levels Perform a compliance mapping exercise to determine how compliance requirements and security controls will reflect AWS service usage  Control ownership —Review the AWS Shared Responsibility Model3 information on the AWS website to determine if control ownership ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 9 of 34 modifications should be made Review and update your responsibility assignment matrix (RACI chart) to include ownership of controls operating in the AWS environment  Data classification —Review current data classifications and determine how those classifications will be managed in the AWS environment and what controls will be appropriate  Change and asset management —Determine how change management asset management are to be performed in AWS Create a means to determine what assets exist what the systems are used for and how the systems will be managed securely This can be integrated with an existing configuration management database (CMDB) Consider creating a practice for naming and tagging that allows identification and management to occur to the securit y level required You can use this approach to define and track the metadata that enables identification and control  Data locality —Review criteria for where your data can reside to determine what controls will be needed to manage the configuration and usage of AWS services across Regions AWS customers choose the AWS Region(s) where their content will be hosted This allows customers with specific geographic requirements to establish environments in locations they choose Customers can replicate and back up content in more than one Region but AWS does not move customer content outside of the customer’s chosen Region(s)  Least privilege access — Establish an organizational security culture built on the principle of least privilege and strong authentication Implement protocols to protect access to sensitive credential and key material associated with every AWS account Set expectations on how authority will be delegated down through software engineers operations staff and other job functions involved in cloud adoption  Security operations playbook and r unbooks —Define your security patterns to create durable guardrails the organization can reference over time Implement the plays through automation as runbooks; document human in theloop interventions as appropriate ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 10 of 34 Considerations  Do create a tailored AWS shared 
responsibility model for your ecosystem  Do use strong authentication as part of a protection scheme for all actors in your account  Do promote a culture of security ownership for application teams  Do extend your data classification model to include services in AWS  Do integrate developer operations and security team objectives and job functions  Do consider creating a strategy for naming and tracking accounts used to manage services in AWS  Do centralize phone and email distribution lists so that teams can be monitored ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 11 of 34 Preventive Component The Preventive component of the AWS Security Perspective provides guidance for implementing security infrastructure with AWS and within your organization The key to implementing the right set of controls is enabling your security teams to gain the confidence and capability they need to build the automation and deployment skills necessary to protect the enterprise in the agile scalable environment that is AWS Use the Directive component to determine the controls and guidance that you will need and then use the Preventive component to determine how you will operate the controls effectively AWS regularly provides guidance on best practices for AWS service utilization and workload deployment patterns which can be used as control implementation references Visit the AWS Security Center blog and most recent AWS Summit and re:Invent conference Security Track videos Consider the following areas to determining what changes (if any) you need to make to your current security architectures and practices This will help you with a smooth and planned AWS adoption strategy  Identity and access —Integrate the use of AWS into the workforce lifecycle of the organization as well as into the sources of authentication and authorization Create finegrained policies and roles associated with appropriate users and groups Create guardrails that permit important changes through automation only and prevent unwanted changes or roll them back automatically These steps will reduc e human access to production systems and data  Infrastructure protection —Implement a security baseline including trust boundaries system security configuration and maintenance (eg harden and patch) and other appropriate policy enforcement points (eg security groups AWS WAF Amazon API Gateway) to meet the needs that you identified using the Directive component  Data protection —Utilize appropriate safeguards to protect data in transit and at rest Safeguards includ e finegrain ed access controls to objects creating and controlling the encryption keys used to encrypt your data ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 12 of 34 selecting appropriate encryption or tokenization methods integrity validation and appropriate retention of data Considerations  Do treat security as code allowing you to deploy and validate security infrastructure in a manner that allows you the scale and agility to protect the organization  Do create guardrails sensible defaults and offer templates and best practices as code  Do build security services that the organization can leverage for highly repetitive or particularly sensitive security functions  Do define actors and then storyboard their experience interacting with AWS services  Do use the AWS Trusted Advisor tool to continually assess your AWS security posture and consider an AWS Well Architected review  Do establish a minimal viable security baseline 
and continually iterate to raise the bar for the workloads you're protecting.

Detective Component

The Detective component of the AWS CAF Security Perspective provides guidance for gaining visibility into your organization's security posture. A wealth of data and information can be gathered by using services like AWS CloudTrail, service-specific logs, and API/CLI return values. Ingesting these information sources into a scalable platform for managing and monitoring logs, event management, testing, and inventory/audit will give you the transparency and operational agility you need to feel confident in the security of your operations.

• Logging and monitoring —AWS provides native logging as well as services that you can leverage to provide greater visibility, near to real time, for occurrences in the AWS environment. You can use these tools to integrate into your existing logging and monitoring solutions. Integrate the output of logging and monitoring sources deeply into the workflow of the IT organization for end-to-end resolution of security-related activity.

• Security testing —Test the AWS environment to ensure that defined security standards are met. By testing to determine if your systems will respond as expected when certain events occur, you will be better prepared for actual events. Examples of security testing include vulnerability scanning, penetration testing, and error injection to prove standards are being met. The goal is to determine if your control will respond as expected.

• Asset inventory —Knowing what workloads you have deployed and operational will allow you to monitor and ensure that the environment is operating at the security governance levels expected and demanded by the security standards.

• Change detection —Relying on a secure baseline of preventive controls also requires knowing when these controls change. Implement measures to determine drift between secure configuration and current state.

Considerations

• Do determine what logging information for your AWS environment you want to capture, monitor, and analyze
• Do determine how your existing security operations center (SOC) business capability will integrate AWS security monitoring and management into existing practices
• Do continually conduct vulnerability scans and penetration tests in accordance with AWS procedures for doing so

Responsive Component

The Responsive component of the AWS CAF Security Perspective provides guidance for the responsive portion of your organization's security posture. By incorporating your AWS environment into your existing security posture, and then preparing and simulating actions that require response, you will be better prepared to respond to incidents as they occur. With automated incident response and recovery, and the ability to mitigate portions of disaster recovery, it is possible to shift the primary focus of the security team from response to performing forensics and root cause analysis. Some things to consider as part of adapting your security posture include the following:

• Incident response —During an incident, containing the event and returning to a known good state are important elements of a response plan. For instance, automating aspects of those functions using AWS Config rules and AWS Lambda responder scripts gives you the ability to scale your response at Internet speeds (a minimal responder sketch follows this list). Review current incident response processes and determine if and how automated response and recovery will become operational and managed for AWS assets. The security operations center's functions should be tightly integrated with the AWS APIs to be as responsive as possible. This provides the security monitoring and management function for AWS Cloud adoption.

• Security incident response simulations —By simulating events, you can validate that the controls and processes you have put in place react as expected. Using this approach, you can determine if you are effectively able to recover and respond to incidents when they occur.

• Forensics —In most cases, your existing forensics tools will work in the AWS environment. Forensic teams will benefit from the automated deployment of tools across Regions and the ability to collect large volumes of data quickly, with low friction, using the same robust, scalable services their business-critical applications are built on, such as Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), Amazon Kinesis, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Redshift, and Amazon Elastic Compute Cloud (EC2).
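To make the responder idea concrete, here is a minimal illustrative sketch (not part of the original whitepaper) of an AWS Lambda function that performs a containment step: it assumes the triggering AWS Config or Amazon EventBridge event carries the ID of a non-compliant security group, and it revokes any rule that exposes SSH (port 22) to 0.0.0.0/0. The event field, function name, and chosen remediation are assumptions made only for illustration; a real responder would be shaped by your own rules and change workflow.

```python
# responder.py - minimal sketch of an AWS Lambda "responder" function.
# Assumption: the triggering event carries the offending security group ID.
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    group_id = event["detail"]["resourceId"]  # hypothetical event field
    group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]

    # Containment: strip the world-open SSH range from any matching TCP rule.
    for perm in group.get("IpPermissions", []):
        if perm.get("IpProtocol") != "tcp":
            continue
        if not (perm.get("FromPort", -1) <= 22 <= perm.get("ToPort", -1)):
            continue
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])):
            ec2.revoke_security_group_ingress(
                GroupId=group_id,
                IpPermissions=[{
                    "IpProtocol": "tcp",
                    "FromPort": perm["FromPort"],
                    "ToPort": perm["ToPort"],
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
                }],
            )
    return {"contained_group": group_id}
```

Pairing a function like this with ticketing keeps a human in the loop for root cause analysis and the after-action review.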
Considerations

• Do update your incident response processes to recognize the AWS environment
• Do leverage services in AWS to forensically ready your deployments through automation and feature selection
• Do automate response for robustness and scale
• Do use services in AWS for data collection and analysis in support of an investigation
• Do validate your incident response capability through simulations of security incident responses

Taking the Journey – Defining a Strategy

Review your current security strategy to determine if portions of the strategy would benefit from change as part of a cloud adoption initiative. Map your AWS cloud adoption strategy against the level of risk your business is willing to accept, your approach to meeting regulatory and compliance objectives, as well as your definitions for what needs to be protected and how it will be protected. Table 1 provides an example of a security strategy that articulates a set of principles, which are then mapped to specific initiatives and work streams.

Table 1: Example Security Strategy

  Infrastructure as code: Skill up security team in code and automation; move to DevSecOps
  Design guardrails, not gates: Architect drives toward good behavior
  Use the cloud to protect the cloud: Build, operate, and manage security tools in the cloud
  Stay current; run secure: Consume new security features; patch and replace frequently
  Reduce reliance on persistent access: Establish role catalog; automate KMI via secrets service
  Total visibility: Aggregate AWS logs and metadata with OS and app logs
  Deep insights: Implement a security data warehouse with BI and analytics
  Scalable incident response (IR): Update IR and Forensics standard operating procedure (SOP) for shared responsibility framework
  Self-Healing: Automate correction and restoration to known good state

As your strategy evolves, you will want to begin iterating on your third-party assurance frameworks and organizational security requirements, and incorporating them into a risk management framework that will guide your journey to AWS. It is often an effective practice to evolve your compliance mapping as you
Perspective June 2016 Page 18 of 34 gain a better understanding of the needs of your workloads in the cloud and the security capabilities provided by AWS Another key element of your strategy is mapping out the shared responsibility model specific to your ecosystem In addition to the macro relationship you share with AWS you’ll want to explore internal organizational shared responsibilities as well as those you impart upon your partners Companies can break their shared responsibility model into three major areas: a control framework; a responsible accountable consulted informed model (RACI); and a risk register The control framework describes how the security aspects of the business are expected to work and what controls will be put in place to manage risk You can use the RACI to identify and assign a person with responsibility for controls in the framework Finally use a risk register to capture controls without proper ownership Prioritize residual risks that have been identified aligning their treatment with new work streams and initiatives put in place to resolve them As you map these shared responsibilities you can expect to find new opportunities to automate operations and improve workflow between critical actors in your security compliance and risk management community Figure 2 shows an example extended shared responsibility model Figure 2: Example Shared Responsibility Model ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 19 of 34 Considerations  Do create a tailored strategy that addresses your organization al approach to implementing security in the cloud  Do promote automation as an underlying theme for all your strategy  Do clearly articulate your approach to cloud first  Do promote agility and flexibility by defining guardrails  Do take strategy as a short exercise that defines your organization’s approach to information security in the cloud  Do iterate quickly while laying down what the strategy is Your aim is to have a set of guiding principles that will drive the core of the effort forward – strategy is not the end in itself Move quickly and be willing to adapt and evolve  Do define strategic principles which will impart the culture you want in security and which inform the design decisions you’ll make rather than a strategy which impl ies specific solutions ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 20 of 34 Taking the Journey – Delivering a Program With a strategy in place it is now time to put it into practice and initiate the implementation that will transform your security organization and secure the cloud journey Whil e you have a wide choice of options and features your implementation should not be not a protracted effort This process of designing and implementing how different capabilities will work together represents an opportunity to quickly gain familiarity and learn how to iterate your designs to best meet your requirements Learn from actual implementation early then adapt and evolve using small changes as you learn To help you with your implementation you can use the CAF Security Epics (See Figure 3) The Security Epics consist of groups of user stories (use cases and abuse cases) that you can work on during sprints Each of these epics has multiple iterations addressing increasingly complex requirements and layering in robustness Although we advise the use of agile the epics can also be treated as general work streams or topics that help in prioritizing and structuring delivery using any other 
framework A proposed structure consists of the following 10 security epics (Figure 4 ) to guide your implementation Figure 3: AWS CAF Security Epics ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 21 of 34 The Core Five The following five epics are the core control and capability categories that you should consider early on because they are fundamental to getting your journey started  IAM —AWS Identity and Access Management (IAM) forms the backbone of your AWS deployment In the cloud you must establish an account and be granted privileges before you can provision or orchestrate resources Typical automation stories may include entitlement mapping/grants/audit secret material management enforcing separation of duties and least privilege access just intime privilege management and reducing reliance on long term credentials  Logging and monitoring —AWS services provide a wealth of logging data to help you monitor your interactions with the platform The performance of AWS services based upon your configuration choices and the ability to ingest OS and application logs to create a common frame of reference Typical automation stories may include log aggregation thresholds/alarming/alerting enrichment search platform visualization stakeholder access and workflow and ticketing to initiate closedloop organizational response  Infrastructure security —When you treat infrastructure as code security infrastructure becomes a first tier workload that must also be deployed as code This approach will afford you the opportunity to programmatically configure AWS services and deploy security infrastructure from AWS Figure 4: AWS Ten Security Epics ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 22 of 34 Marketplace partners or solutions of your own design Typical automation stories may include creating custom templates to configure AWS services to meet your requirements implementing security architecture patterns and security operations plays as code crafting custom security solutions from AWS services using patch management strategies like blue/green deployments reducing exposed attack surface and validating the efficacy of deployments  Data protection —Safeguarding important data is a critical piece of building and operating information systems and AWS provides services and features giving you robust options to protect your data throughout its lifecycle Typical automation stories may include making workload placement decisions implementing a tagging schema constructing mechanisms to protect data in motion such as VPN and TLS/SSL connections (including AWS Certificate Manager) constructing mechanisms to protect data at rest through encryption at appropriate tiers in your infrastructure using AWS Key Management Service (AWS KMS) implementation/integration deploying AWS CloudHSM creating tokenization schemes and implementing and operating of AWS Marketplace Partner solutions  Incident response —Automating aspects of your incident management process improves reliability and increases the speed of your response and often creates and environment easier to assess in afteraction reviews Typical automation stories may include using AWS Lambda function “ responders ” that react to specific changes in the environment orchestrating auto scaling events isolating suspect system components deploying just intime investigative tools and creating workflow and ticketing to terminate and learn from a closed loop organizational response Augmenting the Core These five epics 
represent the themes that will drive continued operational excellence through availability, automation, and audit. You'll want to judiciously integrate these epics into each sprint. When additional focus is required, you may consider treating them as their own epics.

• Resilience —High availability, continuity of operations, robustness and resilience, and disaster recovery are often reasons for cloud deployments with AWS. Typical automation stories may include using Multi-AZ and Multi-Region deployments, changing the available attack surface, scaling and shifting allocation of resources to absorb attacks, safeguarding exposed resources, and deliberately inducing resource failure to validate continuity of system operations.

• Compliance validation —Incorporating compliance end-to-end into your security program prevents compliance from being reduced to a checkbox exercise or an overlay that occurs post deployment. This epic provides the platform that consolidates and rationalizes the compliance artifacts generated through the other epics. Typical automation stories may include creating security unit tests mapped to compliance requirements (a sample unit test follows this list), designing services and workloads to support compliance evidence collection, creating compliance notification and visualization pipelines from evidentiary features, monitoring continuously, and creating compliance-tooling-oriented DevSecOps teams.

• Secure CI/CD (DevSecOps) —Having confidence in your software supply chain through the use of trusted and validated continuous integration and continuous deployment tool chains is a targeted way to mature security operations practices as you migrate to the cloud. Typical automation stories may include hardening and patching the tool chain, least privilege access to the tool chain, logging and monitoring of the production process, security integration/deployment visualization, and code integrity checking.

• Configuration and vulnerability analysis —Configuration and vulnerability analysis benefit greatly from the scale, agility, and automation afforded by AWS. Typical automation stories may include enabling AWS Config and creating custom AWS Config Rules, using Amazon CloudWatch Events and AWS Lambda to react to change detection, implementing Amazon Inspector, selecting and deploying continuous monitoring solutions from the AWS Marketplace, deploying triggered scans, and embedding assessment tools into the CI/CD tool chains.

• Security big data and predictive analytics —Security operations benefit from big data services and solutions just like any other aspect of the business. Leveraging big data gives you deeper insights in a more timely fashion, thus enhancing your agility and ability to iterate on your security posture at scale. Typical automation stories may include creating security data lakes, developing analytics pipelines, creating visualization to drive security decision making, and establishing feedback mechanisms for autonomic response.
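The following pytest-style sketch illustrates what a security unit test mapped to a compliance requirement might look like; it is an example added here, not content from the original paper. The two controls it checks (account-wide S3 public access blocking and a multi-Region CloudTrail trail) are assumed requirements chosen only for illustration.

```python
# test_security_baseline.py - illustrative compliance unit tests (pytest + boto3).
import boto3

def test_account_blocks_public_s3_access():
    """Assumed control: public access to S3 is blocked at the account level."""
    account_id = boto3.client("sts").get_caller_identity()["Account"]
    response = boto3.client("s3control").get_public_access_block(AccountId=account_id)
    settings = response["PublicAccessBlockConfiguration"]
    assert all(settings.values()), f"Public access not fully blocked: {settings}"

def test_cloudtrail_covers_all_regions():
    """Assumed control: API activity is captured by a multi-Region trail."""
    trails = boto3.client("cloudtrail").describe_trails()["trailList"]
    assert any(t.get("IsMultiRegionTrail") for t in trails), "No multi-Region trail"
```

Running such tests in the deployment pipeline turns compliance evidence collection into an automated, repeatable step rather than a point-in-time audit.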
After this structure is defined, an implementation plan can be crafted. Capabilities change over time, and opportunities for improvement will be continually identified. As a reminder, the themes or capability categories above can be treated as epics in an agile methodology, which contain a range of user stories, including both use cases and abuse cases. Multiple sprints will lead to increased maturity while retaining flexibility to adapt to business pace and demand.

Example Sprint Series

Consider organizing a sample set of six two-week sprints (a group of epics driven over a twelve-week calendar quarter), including a short prep period, in the following way. Your approach will depend on resource availability, priority, and level of maturity desired in each capability as you move towards your minimally viable production capability (MVP).

• Sprint 0 —Security cartography: compliance mapping, policy mapping, initial threat model review, establish risk registry; build a backlog of use and abuse cases; plan the security epics
• Sprint 1 —IAM; logging and monitoring
• Sprint 2 —IAM; logging and monitoring; infrastructure protection
• Sprint 3 —IAM; logging and monitoring; infrastructure protection
• Sprint 4 —IAM; logging and monitoring; infrastructure protection; data protection
• Sprint 5 —Data protection; automating security operations; incident response planning/tooling; resilience
• Sprint 6 —Automating security operations; incident response; resilience

A key element of compliance validation is incorporating the validation into each sprint through security and compliance unit test cases, and then undergoing the promotion to production process. When explicit compliance validation capability is required, sprints can be established to focus specifically on those user stories. Over time, iteration can be leveraged to achieve continuous validation and implementation of autocorrection of deviation where appropriate.

The overall approach aims to clearly define what an MVP or baseline is, which will then map to the first sprint in each area. In the initial stages the end goal can be less defined, but a clear roadmap of initial sprints is created. Timing, experience, and iteration will allow refining and adjusting the end state to be just right for your organization. In reality, the final state may continuously shift, but ultimately the process does lead to continuous improvement at a faster pace. This approach can be more effective and have greater cost efficiency than a big bang approach based on long timelines and high capital outlays.

Diving a little deeper, the first sprint for IAM can consist of defining the account structure and implementing the core set of best practices. A second sprint can implement federation. A third sprint can expand account management to cater for multiple accounts, and so on. IAM user stories that may span one or more of these initial sprints could include stories such as the following:

“As an access administrator I want to create an initial set of users for managing privileged access and federation identity provider trust relationships.”
“As an access administrator I want to map users in my existing corporate directory to functional roles or sets of access entitlements on the AWS platform.”
“As an access administrator I want to enforce multi-factor authentication on all interaction with the AWS console by interactive users.”

In this example, the following logging and monitoring user stories may span one or more initial sprints:

“As a security operations analyst I want to receive platform-level logging for all AWS Regions and AWS Accounts.”
“As a security operations analyst I want all platform-level logs delivered to one shared location from all AWS Regions and accounts.”
“As a security operations analyst I want to receive alerts for any operation that attaches IAM policies to users, groups, or
roles” You can build capability in parallel or serial fashion and maintain flexibility by including security capability user stories in the overall product backlog You can also split the user stories out into a securityfocused DevOps team These are decisions you can periodically revisit allowing you to tailor your delivery to the needs of the organization over time ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 27 of 34 Considerations  Do review your existing control framework to determine how AWS services will be operated to meet your required security standards  Do define actors and then storyboard their experience interacting with AWS services  Do define what the first sprint is and what the initial highlevel longer term goal will be  Do establish a minimal ly viable security baseline and continually iterate to raise the bar for the workloads and data you’re prot ecting ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 28 of 34 Taking the Journey – Develop Robust Security Operations In an environment where infrastructure is code security must also be treated as code The Security Operations component provides a means to communicate and operationalize the fundamental tenets of security as code:  Use the cloud to protect the cloud  Security infrastructure should be cloudaware  Expose security features as services using the API  Automate everything so that your security and compliance can scale To make this governance model practical lines of business often organize as DevOps teams to build and deploy infrastructure and business software You can extend the core tenets of the governance model by integrating security into your DevOps culture or practice; which is sometimes called DevSecOps Build a team around the following principles:  The security team embraces DevOps cultures and behaviors  Developers contribute openly to code used to automate security operations  The security operations team is empowered to participate in testing and automation of application code  The team takes pride in how fast and frequently they deploy Deploying more frequently with smaller changes reduces operational risk and shows rapid progress against the security strategy Integrated development security and operations teams have three shared key missions  Harden the continuous integration/ continuous deployment tool chain  Enable and promote the development of resilient software as it traverses the tool chain  Deploy all security infrastructure and software through the tool chain ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 29 of 34 Determining the changes (if any) to current security practices will help you plan a smooth AWS adoption strategy Conclusion As you embark on your AWS adoption journey you will want to update your security posture to include the AWS portion of your environment This Security Perspective whitepaper prescriptively guides you on an approach for taking advantage of the benefits that operating on AWS has for your security posture Much more security information is available on the AWS website where security features are described in detail and more detailed prescriptive guidance is provided for common implementations There is also a comprehensive list of security focused content4 that should be reviewed by various members of your security team as you prepare for AWS adoption initiatives ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 30 of 34 Appendix A: Tracking Progress 
Across the AWS CAF Security Perspective

You can use the key security enablers and the security epics progress model discussed in this appendix to measure the progress and the maturity of your implementation of the AWS CAF Security Perspective. The enablers and the progress model can be used for project planning purposes, to evaluate the robustness of implementations, or simply as a means to drive conversation about the road ahead.

Key Security Enablers

Key security enablers are milestones that help you stay on track. We use a scoring model that consists of three values: Unaddressed, Engaged, and Completed.

• Cloud Security Strategy [Unaddressed / Engaged / Completed]
• Stakeholder Communication Plan [Unaddressed / Engaged / Completed]
• Security Cartography [Unaddressed / Engaged / Completed]
• Document Shared Responsibility Model [Unaddressed / Engaged / Completed]
• Security Operations Playbook & Runbooks [Unaddressed / Engaged / Completed]
• Security Epics Plan [Unaddressed / Engaged / Completed]
• Security Incident Response Simulation [Unaddressed / Engaged / Completed]

Security Epics Progress Model

The security epics progress model helps you evaluate your progress in implementing the 10 Security Epics described in this paper. We use a scoring model of 0 (zero) through 3 to measure robustness. We provided examples for the Identity and Access Management and the Logging and Monitoring epics so you could see how this progression works.

Core 5 Security Epics, scored as: 0 = Not addressed; 1 = Addressed in architecture and plans; 2 = Minimal viable implementation; 3 = Enterprise-ready production implementation.

Identity and Access Management
  0: Example: No relationship between on-premises and AWS identities
  1: Example: An approach is defined for workforce lifecycle identity management; IAM architecture is documented; job functions are mapped to IAM policy needs
  2: Example: Implemented IAM as defined in architecture; IAM policies implemented that map to some job functions; IAM implementation validated
  3: Example: Automation of IAM lifecycle workflows

Logging and Monitoring
  0: Example: No utilization of AWS-provided logging and monitoring solutions
  1: Example: An approach is defined for log aggregation, monitoring, and integration into security event management processes
  2: Example: Platform-level and service-level logging is enabled and centralized
  3: Example: Events with security implications are deeply integrated into security workflow and incident management processes and systems

The remaining Core 5 epics are scored the same way: Infrastructure Security, Data Protection, Incident Management.

Augmenting the Core 5, scored on the same scale (0 = Not addressed; 1 = Addressed in architecture and plans; 2 = Minimal viable implementation; 3 = Enterprise-ready production implementation): Resilience, DevSecOps, Compliance Validation, Configuration & Vulnerability Management, Security Big Data.

CAF Taxonomy and Terms

The Cloud Adoption Framework (CAF) is the framework AWS created to capture guidance and best practices from previous customer engagements. An AWS CAF perspective represents an area of focus relevant to implementing cloud-based IT systems in organizations. For example, the Security Perspective provides guidance and process for evaluating and enhancing your existing security controls as you move to the AWS environment. Each CAF Perspective is made up of components and activities. A component is a subarea of a perspective that represents a specific aspect that needs attention. This whitepaper explores the components of the Security perspective. An activity provides more prescriptive guidance for creating actionable plans that the organization can use to move to the cloud and to operate cloud-based solutions on an ongoing basis. For example, Directive is one component of the Security Perspective, and tailoring an AWS shared responsibility model for your ecosystem may be an activity within that component. When combined, the Cloud Adoption Framework (CAF) and the Cloud Adoption Methodology (CAM) can be used as guidance during your journey to the AWS cloud.

Notes

1. https://d0.awsstatic.com/whitepapers/aws_cloud_adoption_framework.pdf
2. https://aws.amazon.com/compliance/
3. https://aws.amazon.com/compliance/shared-responsibility-model/
4. https://aws.amazon.com/security/security-resources/
General
Migrating_Your_Databases_to_Amazon_Aurora
This paper has been archived. For the latest technical content, refer to the HTML version: https://docs.aws.amazon.com/whitepapers/latest/migrating-databases-to-amazon-aurora/migrating-databases-to-amazon-aurora.html

Migrating Your Databases to Amazon Aurora

First Published June 10, 2016
Updated July 28, 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction to Amazon Aurora 1
Database migration considerations 3
Migration phases 3
Application considerations 3
Sharding and read replica considerations 4
Reliability considerations 5
Cost and licensing considerations 6
Other migration considerations 6
Planning your database migration process 7
Homogeneous migration 7
Heterogeneous migration 9
Migrating large databases to Amazon Aurora 10
Partition and shard consolidation on Amazon Aurora 11
Migration options at a glance 12
RDS snapshot migration 13
Migration using Aurora Read Replica 18
Migrating the database schema 21
Homogeneous schema migration 22
Heterogeneous schema migration 23
Schema migration using the AWS Schema Conversion Tool 24
Migrating data 32
Introduction and general approach to AWS DMS 32
Migration methods 33
Migration procedure 34
Testing and cutover 43
Migration testing 44
Cutover 44
Conclusion 46
Contributors 46
Further reading 46
Document history 47

Abstract

Amazon Aurora is a MySQL- and PostgreSQL-compatible, enterprise-grade relational database engine. Amazon Aurora is a cloud-native database that overcomes many of the limitations of traditional relational database engines. The goal of this whitepaper is to highlight best practices of migrating your existing databases to Amazon Aurora. It presents migration considerations and the step-by-step process of migrating open-source and commercial databases to Amazon Aurora with minimum disruption to the applications.

Introduction to Amazon Aurora

For decades, traditional relational databases have been the primary choice for data storage and persistence. These database systems continue to rely on monolithic architectures and were not designed to take
advantage of cloud infrastructure These monolithic architectures present many challenges particularly in areas such as cost flexibility and availability In order to address these challenges AWS redesigned relational database for the cloud infrastructure and introduced Amazon Aurora Amazon Aurora is a MySQL and PostgreSQL compatible relational database engine that combines the speed availability and security of high end commercial databases with the simplicity and cost effectiveness of open source databases Aurora provides up to five times better performance than MySQL three times better performance than PostgreSQL and comparable performance of high end commercial databases Amazon Aurora is priced at 1/10th the cost of commercial engines Amazon Aurora is available through the Amazon Relational Database Service (Amazo n RDS) platform Like other Amazon RDS databases Aurora is a fully managed database service With the Amazon RDS platform most database management tasks such as hardware provisioning software patching setup configuration monitoring and backup are co mpletely automated Amazon Aurora is built for mission critical workloads and is highly available by default An Aurora database cluster spans multiple Availability Zones in a Region providing out ofthebox durability and fault tolerance to your data acr oss physical data centers An Availability Zone is composed of one or more highly available data centers operated by Amazon Availability Zones are isolated from each other and are connected through lowlatency links Each segment of your database volume i s replicated six times across these Availability Zones Amazon Aurora enables dynamic resizing for database storage space Aurora cluster volumes automatically grow as the amount of data in your database increases with no performance or availability impac t—so there is no need for estimating and provisioning large amount of database storage ahead of time The storage space allocated to your Amazon Aurora database cluster will automatically increase up to a maximum size of 128 tebibytes (TiB) and will automa tically decrease when data is deleted Aurora's automated backup capability supports point intime recovery of your data enabling you to restore your database to any second during your retention period up to the last five minutes Automated backups are stored in Amazon Simple Storage Service This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Auro ra 2 (Amazon S3 ) which is designed for 99999999999% durability Amazon Aurora backups are automatic incremental and continuous and have no impact on database performance For applications that need read only replicas you can create up to 15 Aurora Replicas per Aurora database with very low replica lag These replicas share the same underlying storage as the source instance lowering costs and avoiding the need to perform writes at the replica nodes Optionally Aurora Global Database can be used for high read throughputs across six Regions up to 90 read replicas Amazon Aurora is highly secure and allows you to encrypt your databases using keys that you create and control through AWS Key Management Service ( AWS KMS) On a database instance running with Amazon Aurora encryption data stored at rest in the underlying storage is encrypted as are the automated backups snapshots and replicas in the same cluster Amazon Aurora uses SSL (AES 256) to secure data in tra 
nsit For a complete list of Aurora features see the Amazon Aurora product page Given the rich feature set and cost effectiveness of Amazon Aurora it is increasingly viewed as the go to database for mi ssion critical applications Amazon Aurora Serverless v2 (Preview) is the new version of Aurora Serverless an on demand auto matic scaling configuration of Amazon Aurora that automatically starts up shuts down and scales capacity up or down based on yo ur application's needs It scales instantly from hundreds to hundreds ofthousands of transactions in a fraction of a second As it scales it adjusts capacity in fine grained increments to provide just the right amount of database resources that the appli cation needs There is no database capacity for you to manage you pay only for the capacity your application consumes and you can save up to 90% of your database cost compared to the cost of provisioning capacity for peak Aurora Serverless v2 is a simpl e and cost effective option for any customer who cannot easily allocate capacity because they have variable and infrequent workloads or have a large number of databases If you can predict your application’s requirements and prefer the cost certainty of fi xedsize instances then you may want to continue using fixed size instances Amazon Aurora capabilities discussed in this whitepaper apply to both MySQL and PostgreSQL database engine s unless otherwise specified However the migration practices discusse d in this paper are specific to Aurora MySQL database engine For more information about Aurora best practices specific to PostgreSQL database engine see Working with Amazon Aurora PostgreSQL in the Amazon Aurora user guide This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 3 Database migration considerations A database represents a critical component in the architecture of most applications Migrating the database to a new platform is a significant event in an application’s lifecycle and may have an impact on application functionality performance and reliabi lity You should take a few important considerations into account before embarking on your first migration project to Amazon Aurora Migration phases Because database migrations tend to be complex we advocate taking a phased iterative approach Figure 1 — Migration phases Application considerations Evaluate Aurora features Although most applications can be architected to work with many relational database engines you should make sure that your application work s with Ama zon Aurora Amazon Aurora is designed to be wire compatible with MySQL 56 and 57 Therefore most of the code applications drivers and tools that are used today with MySQL databases can be used with Aurora with little or no change However certain My SQL features like the MyISAM storage engine are not available with Amazon Aurora Also due to the managed nature of the Aurora service SSH access to database nodes is restricted which may affect your ability to install thirdparty tools or plugins on the database host Performance considerations Database per formance is a key consideration when migrating a database to a new platform Therefore many successful database migration projects start with performance evaluations of the new database platform Although the Amazon Aurora Performance Assessment paper gives you a decent idea of overall database performance these benchmarks do not emulate the 
data access patterns of your This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 4 applications For more useful results test the database performance for time sensitive workloads by running your queries (or s ubset of your queries) on the new platform directly Consider these s trategies: • If your current database is MySQL migrate to Amazon Aurora with downtime and performance test your database with a test or staging version of your application or by replaying the production workload • If you are on a non MySQL compliant engine you can selectively copy the busiest tables to Amazon Aurora and test your queries for those tables This gives you a good starting point Of course testing after complete data migrati on will provide a full picture of real world performance of your application on the new platform Amazon Aurora delivers comparable performance with commercial engines and significant improvement over MySQL performance It does this by tightly integrating the database engine with an SSD based virtualized storage layer designed for database workloads This reduc es writes to the storage system minimiz es lock contention and eliminat es delays created by database process threads Our tests with SysBench on r 516xlarge instances show that Amazon Aurora delivers close to 800000 reads per second and 200 000 writes per second five times higher than MySQL running the same benchmark on the same hardware One area where Amazon Aurora significantly improves upon traditional MySQL is highly concurrent workloads In order to maximize your workload’s throughput on Amazon Aurora we recommend architecting your applications to drive a large number of concurrent queries Sharding and read replica considerations If your cu rrent database is sharded across multiple nodes you may have an opportunity to combine these shards into a single Aurora database during migration A single Amazon Aurora instance can scale up to 128 TB supports thousands of tables and supports a signif icantly higher number of reads and writes than a standard MySQL database If your application is read/write heavy consider using Aurora read replicas for offloading readonly workload from the primary database node Doing this can improve concurrency of your primary database for write s and will improve overall read and write This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 5 performance Using read replicas can also lower your costs in a Multi AZ configuration since you may be able to use smaller insta nces for your primary instance while adding failover capabilities in your database cluster Aurora read replicas offer near zero replication lag and you can create up to 15 read replicas Reliability considerations An important consideration with database s is high availability and disaster recovery Determine the RTO ( recovery time objective) and RPO ( recovery point objective) requirements of your application With Amazon Aurora you can significantly improve both these factors Amazon Aurora reduces data base restart times to less than 60 seconds in most database crash scenarios Aurora also moves the buffer cache out of the database process and makes it available immediately at restart time In rare scenarios of hardware and Availability Zone 
failures re covery is automatically handled by the database platform Aurora is designed to provide you zero RPO recovery within an AWS Region which is a major improvement over on premises database systems Aurora maintains six copies of your data across three Availa bility Zones and automatically attempts to recover your database in a healthy AZ with no data loss In the unlikely event that your data is unavailable within Amazon Aurora storage you can restore from a DB snapshot or perform a point intime restore oper ation to a new instance For cross Region DR Amazon Aurora also offers a global database feature designed for globally distributed transactions applications allowing a single Amazon Aurora database to span multiple AWS Regions Aurora uses storage base d replication to replicate your data to other Regions with typical latency of less than one second and without impacting database performance This enables fast local reads with low latency in each Region and provides disaster recovery from Region wide ou tages You can promote the secondary AWS Region for read write workloads in case of an outage or disaster in less than one minute You also have the option to create an Aurora Read Replica of an Aurora MySQL DB cluster in a different AWS Region by using MySQL binary log (binlog) replication Each cluster can have up to five Read Replicas created this way each in a different Region This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 6 Cost and licensing considerations Owning and running databases come with associated costs Before planning a database migration an analysis of the total cost of ownership (TCO) of the new database platform is imperative Migration to a new database platform should ideally lower the total cost of ownership while providing your applications with similar or better features If you are running an open source database engine (MySQL Postgres) your costs are largely related to hardware server management and database management activities However if you are runni ng a commer cial database engine (Oracle SQL Server DB2 and so on ) a significant portion of your cost is database licensing Since Aurora is available at one tenth of the cost of commercial engines many applications moving to Aurora are able to significantly reduce their TCO Even if you are running on an open source engine like MySQL or Postgres with Aurora’s high performance and dual purpose read replicas you can realize meaningful savings by moving to Amazon Aurora See th e Amazon Aurora Pricing page for more information Other migration considerations Once you have considered application suitability performance TCO and reliability factors you should think about what it would take to migrate to th e new platform Estimate code change effort It is important to estimate the amount of code and schema changes that you need to perform while migrating your database to Amazon Aurora When migrating from MySQL compatible databases negligible code changes are required However when migrating from non MySQL engines you may be required to make schema and code changes The AWS Schema Conversion Tool can help to estimate that effort (see the Schema migration using th e AWS Schema Conversion Tool section in this document) Application availability during migration You have options of migrating to Amazon Aurora by taking a predictable downtime approach with your application or 
by taking a near zero downtime approach The approach you choose depend s on the size of your database and the availability requirements of your applications Whatever the case it’s a good idea to consider the impact of the migration process on your application and business before start ing with a database migration The next few sections explain both approaches in detail This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 7 Modify connection string during migration You need a way to point the applications to your new database One option is to modify the connection strings for all of the applications Another common option is to use DNS In this case you don’t use the actual host name of your database instance in your connection string Instead consider creating a canonical name (CNAME) record that points to the host name of your database instance Doing this allows you to change the endpoint to which your application points in a single location rather than tracking and modifying multiple connection string settings If you choose to use this pattern be sure to pay close attention to the time to live (TTL) setting for your CNAME record If this value is set too high then the host name pointed to by this CNAME might be cached longer than desired If this value is set too low additional overhead might be placed on your c lient applications by having to resolve this CNAME repeatedly Though use cases differ a TTL of 5 seconds is usually a good place to start Planning your database migration process The previous section discussed some of the key considerations to take int o account while migrating databases to Amazon Aurora Once you have determined that Aurora is the right fit for your application the next step is to decide on a preliminary migration approach and create a database migration plan Homogen eous migration If your source database is a MySQL 56 or 57 compliant database (MySQL MariaDB Percona and so on ) then migration to Aurora is quite straightforward Homogen eous migration with downtime If your application can accommodate a predictable length of downtime during off peak hours migration with the downtime is the simplest option and is a highly recommended approach Most database migration projects fall into this category as most applications already have a well defined maintenance window You have the foll owing options to migrate your database with downtime • RDS snapshot migration − If your source database is running on Amazon RDS MySQL 56 or 57 you can simply migrate a snapshot of that database to Amazon Aurora For migrations with downtime you either have to stop your application or stop writing to the database while snapshot a nd migration is in progress The time to migrate primarily depends upon the size of the database This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 8 and can be determined ahead of the production migration by running a test migration Snapshot migration option is explained in the RDS Snapshot Migration section • Migration using native MySQL tools — You may use native MySQL tools to migrate your data and schema to Aurora This is a great option when you need more control over the database migration process you are mo re comfortable using native MySQL tools and other 
migration methods are not performing as well for your use case You can create a dump of your data using the mysqldump utility and then import that data into an existing Amazon Aurora MySQL DB cluster Fo r more information see Migrating from MySQL to Amazon Aurora by using mysqldump You can copy th e full and incremental backup files from your database to an Amazon S3 bucket and then restore an Amazon Aurora MySQL DB cluster from those files This option can be considerably faster than migrating data using mysqldump For more information see Migrating data from MySQL by using an Amazon S3 bucket • Migration using AWS Database Migration Service (AWS DM S) — Onetime migration using AWS DMS is another tool for moving your source database to Amazon Aurora Before you can use AWS DMS to move the data you need to copy the database schema from source to target using native MySQL tools For the step bystep p rocess see the Migrating Data section Using AWS DMS is a great option when you don’t have experience using native MySQL tools Homogen eous migration with nearzero downtime In some scenarios you might want to m igrate your database to Aurora with minimal downtime Here are two e xamples: • When your database is relatively large and the migration time using downtime options is longer than your application maintenance window • When you want to run source and target data bases in parallel for testing purposes In such cases you can replicate changes from your source MySQL database to Aurora in real time using replication You have a couple of options to choose from: • Near zero downtime migration using MySQL binlog replication — Amazon Aurora supports traditional MySQL binlog replication If you are running MySQL database chances are that you are already familiar with classic binlog replication setup If that’s the case and you want more control over the migration process This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Migrating Your Databases to Amazon Aurora 9 onetime database load using native tools coupled wi th binlog replication gives you a familiar migration path to Aurora • Near zero downtime migration using AWS Database Migration Service (AWS DMS) — In addition to supporting one time migration AWS DMS also supports real time data replication using change d ata capture (CDC) from source to target AWS DMS takes care of the complexities related to initial data copy setting up replication instances and monitoring replication After the initial database migration is complete the target database remains synchr onized with the source for as long as you choose If you are not familiar with binlog replication AWS DMS is the next best option for homogenous near zero downtime migrations to Amazon Aurora See the section Introduction and General Approach to AWS DMS • Near zero downtime migration using Aurora Read Replica — If your source database is running on Amazon RDS MySQL 56 or 57 you can migrate from a MySQL DB instance to an Aurora MySQL DB cluster by creating an A urora read replica of your source MySQL DB instance When the replica lag between the MySQL DB instance and the Aurora Read Replica is zero you can direct your client applications to the Aurora read replica This migration option is explained in the Migrate using Aurora Read Replica section Heterogeneous migration If you are looking to migrate a non MySQL compliant database (Oracle SQL Server PostgresSQL and so on ) to Amazon Aurora 
several options can help you accomplish this migration quickly and easily.

Schema migration

Schema migration from a non-MySQL-compliant database to Amazon Aurora can be achieved using the AWS Schema Conversion Tool. This tool is a desktop application that helps you convert your database schema from an Oracle, Microsoft SQL Server, or PostgreSQL database to an Amazon RDS MySQL DB instance or an Amazon Aurora DB cluster. In cases where the schema from your source database cannot be automatically and completely converted, the AWS Schema Conversion Tool provides guidance on how you can create the equivalent schema in your target Amazon RDS database. For details, see the Migrating the Database Schema section.

Data migration

While supporting homogeneous migrations with near-zero downtime, AWS Database Migration Service (AWS DMS) also supports continuous replication across heterogeneous databases and is a preferred option to move your source database to your target database, for both migrations with downtime and migrations with near-zero downtime. Once the migration has started, AWS DMS manages all the complexities of the migration process, like data type transformation, compression, and parallel transfer (for faster data transfer), while ensuring that data changes to the source database that occur during the migration process are automatically replicated to the target.

Besides using AWS DMS, you can use various third-party tools like Attunity Replicate, Tungsten Replicator, Oracle GoldenGate, and so on to migrate your data to Amazon Aurora. Whatever tool you choose, take performance and licensing costs into consideration before finalizing your toolset for migration.
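As a concrete illustration of the AWS DMS approach (added here for clarity; not part of the original paper), the following boto3 sketch creates a full-load-plus-CDC replication task. It assumes a replication instance and the source and target endpoints already exist; the ARNs, task name, and schema name are placeholders.

```python
# create_dms_task.py - illustrative sketch of a full-load + CDC AWS DMS task.
import json
import boto3

dms = boto3.client("dms")

# Placeholder selection rule: replicate every table in the "sales" schema.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-sales-schema",
        "object-locator": {"schema-name": "sales", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="source-to-aurora-full-load-and-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # initial copy, then ongoing replication
    TableMappings=json.dumps(table_mappings),
)
print(task["ReplicationTask"]["ReplicationTaskArn"])
```

After the task is created, you would start it with start_replication_task and watch the task status and table statistics before planning the cutover.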
• Database cleanup — Many large databases contain data and tables that remain unused. In many cases, developers and DBAs keep backup copies of tables in the same database, or they simply forget to drop unused tables. Whatever the reason, a database migration project provides an opportunity to clean up the existing database before the migration. If some tables are not being used, you might either drop them or archive them to another database. You might also delete old data from large tables or archive that data to flat files.
Partition and shard consolidation on Amazon Aurora
If you are running multiple shards or functional partitions of your database to achieve high performance, you have an opportunity to consolidate these partitions or shards on a single Aurora database. A single Amazon Aurora instance can scale up to 128 TB, supports thousands of tables, and supports a significantly higher number of reads and writes than a standard MySQL database. Consolidating these partitions on a single Aurora instance not only reduces the total cost of ownership and simplifies database management, but it also significantly improves the performance of cross-partition queries.
• Functional partitions — Functional partitioning means dedicating different nodes to different tasks. For example, in an e-commerce application, you might have one database node serving product catalog data and another database node capturing and processing orders. As a result, these partitions usually have distinct, nonoverlapping schemas.
  Consolidation strategy: Migrate each functional partition as a distinct schema to your target Aurora instance. If your source database is MySQL compatible, use native MySQL tools to migrate the schema, and then use AWS DMS to migrate the data, either one time or continuously using replication. If your source database is not MySQL compatible, use the AWS Schema Conversion Tool to migrate the schemas to Aurora, and use AWS DMS for a one-time load or continuous replication.
• Data shards — If you have the same schema with distinct sets of data across multiple nodes, you are leveraging database sharding. For example, a high-traffic blogging service might shard user activity and data across multiple database shards while keeping the same table schema.
  Consolidation strategy: Since all shards share the same database schema, you only need to create the target schema once. If you are using a MySQL-compatible database, use native tools to migrate the database schema to Aurora. If you are using a non-MySQL database, use the AWS Schema Conversion Tool to migrate the database schema to Aurora. Once the database schema has been migrated, it is best to stop writes to the database shards and use native tools or an AWS DMS one-time data load to migrate an individual shard to Aurora. If writes to the application cannot be stopped for an extended period, you might still use AWS DMS with replication, but only after proper planning and testing.
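As a minimal illustration of consolidating MySQL-compatible partitions or shards with native tools, the following sketch dumps two hypothetical source databases as separate schemas and loads both into one Aurora cluster. All host names, credentials, and schema names are placeholders.

# Dump each functional partition (or shard) as its own schema
mysqldump -h catalog-db.example.com -u admin -p --databases catalog > catalog.sql
mysqldump -h orders-db.example.com -u admin -p --databases orders > orders.sql
# Load both schemas into the single target Aurora cluster
mysql -h aurora-cluster-endpoint -u admin -p < catalog.sql
mysql -h aurora-cluster-endpoint -u admin -p < orders.sql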
Migration options at a glance
Table 1 — Migration options
• Amazon RDS MySQL
  o Migration with downtime: Option 1: RDS snapshot migration. Option 2: Manual migration using native tools*. Option 3: Schema migration using native tools and data load using AWS DMS.
  o Near-zero downtime migration: Option 1: Migration using native tools + binlog replication. Option 2: Migrate using an Aurora read replica. Option 3: Schema migration using native tools + AWS DMS for data movement.
• MySQL on Amazon EC2 or on premises
  o Migration with downtime: Option 1: Migration using native tools. Option 2: Schema migration with native tools + AWS DMS for data load.
  o Near-zero downtime migration: Option 1: Migration using native tools + binlog replication. Option 2: Schema migration using native tools + AWS DMS to move data.
• Oracle or SQL Server
  o Migration with downtime: Option 1: AWS Schema Conversion Tool + AWS DMS (recommended). Option 2: Manual or third-party tool for schema conversion + manual or third-party data load in the target.
  o Near-zero downtime migration: Option 1: AWS Schema Conversion Tool + AWS DMS (recommended). Option 2: Manual or third-party tool for schema conversion + manual or third-party data load in the target + third-party tool for replication.
• Other non-MySQL databases
  o Migration with downtime: Manual or third-party tool for schema conversion + manual or third-party data load in the target.
  o Near-zero downtime migration: Manual or third-party tool for schema conversion + manual or third-party data load in the target + third-party tool for replication (GoldenGate, etc.).
*MySQL native tools: mysqldump, SELECT INTO OUTFILE, and third-party tools such as mydumper/myloader.
RDS snapshot migration
To use RDS snapshot migration to move to Aurora, your MySQL database must be running on Amazon RDS MySQL 5.6 or 5.7, and you must make an RDS snapshot of the database. This migration method does not work with on-premises databases or databases running on Amazon Elastic Compute Cloud (Amazon EC2). Also, if you are running your Amazon RDS MySQL database on a version earlier than 5.6, you would need to upgrade it to 5.6 as a prerequisite.
The biggest advantage of this migration method is that it is the simplest and requires the fewest steps. In particular, it migrates all schema objects, secondary indexes, and stored procedures, along with all of the database data.
During snapshot migration without binlog replication, your source database must either be offline or in read-only mode (so that no changes are made to the source database during the migration). To estimate downtime, you can simply use an existing snapshot of your database to do a test migration. If the migration time fits within your downtime requirements, then this may be the best method for you. Note that in some cases, migration using AWS DMS or native migration tools can be faster than using snapshot migration.
If you can't tolerate extended downtime, you can achieve near-zero downtime by creating an Aurora read replica from a source RDS MySQL instance. This migration option is explained in the Migrating using Aurora Read Replica section of this document.
You can migrate either a manual or an automated DB snapshot. The general steps you must take are as follows:
1. Determine the amount of space that is required to migrate your Amazon RDS MySQL instance to an Aurora DB cluster. For more information, see the next section.
2. Use the Amazon RDS console to create the snapshot in the Region where the Amazon RDS MySQL instance is located (a CLI sketch for this step follows the list).
3. Use the Migrate Database feature on the console to create an Amazon Aurora DB cluster that will be populated using the DB snapshot from the original MySQL DB instance.
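Step 2 can also be performed with the AWS CLI, as in the minimal sketch below. The instance and snapshot identifiers are placeholders.

# Create a manual snapshot of the source RDS MySQL instance and wait for it to complete
aws rds create-db-snapshot \
    --db-instance-identifier my-mysql-instance \
    --db-snapshot-identifier premigration-snapshot
aws rds wait db-snapshot-available \
    --db-snapshot-identifier premigration-snapshot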
Note: Some MyISAM tables might not convert without errors and may require manual changes. For instance, the InnoDB engine does not permit an auto-increment field to be part of a composite key. Also, spatial indexes are not currently supported.
Estimating space requirements for snapshot migration
When you migrate a snapshot of a MySQL DB instance to an Aurora DB cluster, Aurora uses an Amazon Elastic Block Store (Amazon EBS) volume to format the data from the snapshot before migrating it. There are some cases where additional space is needed to format the data for migration. The two features that can potentially cause space issues during migration are MyISAM tables and the ROW_FORMAT=COMPRESSED option. If you are not using either of these features in your source database, you can skip this section, because you should not have space issues.
During migration, MyISAM tables are converted to InnoDB, and any compressed tables are uncompressed. Consequently, there must be adequate room for the additional copies of any such tables. The size of the migration volume is based on the allocated size of the source MySQL database that the snapshot was made from. Therefore, if you have MyISAM or compressed tables that make up a small percentage of the overall database size and there is available space in the original database, then the migration should succeed without encountering any space issues. However, if the original database would not have enough room to store a copy of the converted MyISAM tables as well as another (uncompressed) copy of the compressed tables, then the migration volume will not be big enough. In this situation, you would need to modify the source Amazon RDS MySQL database to increase the database size allocation to make room for the additional copies of these tables, take a new snapshot of the database, and then migrate the new snapshot.
When migrating data into your DB cluster, observe the following guidelines and limitations:
• Although Amazon Aurora supports up to 128 TB of storage, the process of migrating a snapshot into an Aurora DB cluster is limited by the size of the Amazon EBS volume of the snapshot, and is therefore limited to a maximum size of 16 TB.
• Non-MyISAM tables in the source database can be up to 16 TB in size. However, due to the additional space requirements during conversion, make sure that none of the MyISAM and compressed tables being migrated from your MySQL DB instance exceed 8 TB in size.
You might want to modify your database schema (convert MyISAM tables to InnoDB and remove ROW_FORMAT=COMPRESSED) prior to migrating it into Amazon Aurora. This can be helpful in the following cases:
• You want to speed up the migration process.
• You are unsure of how much space you need to provision.
• You have attempted to migrate your data and the migration has failed due to a lack of provisioned space.
Make sure that you are not making these changes in your production Amazon RDS MySQL database, but rather on a database instance that was restored from your production snapshot. For more details, see Reducing the Amount of Space Required to Migrate Data into Amazon Aurora in the Amazon Relational Database Service User Guide.
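To see whether this section applies to you, you can query the data dictionary and, if needed, convert tables on a copy restored from your production snapshot. The following is a minimal sketch; the endpoints, schema, and table names are placeholders.

# List tables that would be converted (MyISAM) or uncompressed during migration
mysql -h restored-copy-endpoint -u admin -p -e "SELECT table_schema, table_name, engine, row_format, ROUND((data_length + index_length)/1024/1024/1024, 2) AS size_gb FROM information_schema.tables WHERE (engine = 'MyISAM' OR row_format = 'Compressed') AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');"
# Convert ahead of the migration, per table, on the restored copy (not on production)
mysql -h restored-copy-endpoint -u admin -p -e "ALTER TABLE mydb.legacy_table ENGINE=InnoDB; ALTER TABLE mydb.archive_table ROW_FORMAT=COMPACT;"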
Migrating a DB snapshot using the console
You can migrate a DB snapshot of an Amazon RDS MySQL DB instance to create an Aurora DB cluster. The new DB cluster is populated with the data from the original Amazon RDS MySQL DB instance. The DB snapshot must have been made from an RDS DB instance running MySQL 5.6 or 5.7. For information about creating a DB snapshot, see Creating a DB snapshot in the Amazon RDS User Guide.
If the DB snapshot is not in the Region where you want to locate your Aurora DB cluster, use the Amazon RDS console to copy the DB snapshot to that Region. For information about copying a DB snapshot, see Copying a snapshot in the Amazon RDS User Guide.
To migrate a MySQL DB snapshot by using the console, do the following:
1. Sign in to the AWS Management Console and open the Amazon RDS console (sign-in required).
2. Choose Snapshots.
3. On the Snapshots page, choose the Amazon RDS MySQL snapshot that you want to migrate into an Aurora DB cluster.
4. Choose Migrate Database.
5. On the Migrate Database page, specify the values that match your environment and processing requirements, as shown in the following illustration. For descriptions of these options, see Migrating an RDS for MySQL snapshot to Aurora in the Amazon RDS User Guide.
Figure 2 — Snapshot migration
6. Choose Migrate to migrate your DB snapshot.
In the list of instances, choose the appropriate arrow icon to show the DB cluster details and monitor the progress of the migration. This details panel displays the cluster endpoint used to connect to the primary instance of the DB cluster. For more information on connecting to an Amazon Aurora DB cluster, see Connecting to an Amazon Aurora DB Cluster in the Amazon Relational Database Service User Guide.
Migration using Aurora Read Replica
Aurora uses the MySQL DB engine's binary log replication functionality to create a special type of DB cluster, called an Aurora read replica, for a source MySQL DB instance. Updates made to the source instance are asynchronously replicated to the Aurora read replica.
We recommend creating an Aurora read replica of your source MySQL DB instance to migrate to an Aurora MySQL DB cluster with near-zero downtime. The migration process begins by creating a DB snapshot of the existing DB instance as the basis for a fresh Aurora read replica. After the replica is set up, replication is used to bring it up to date with respect to the source. Once the replication lag drops to zero, the replication is complete. At this point, you can promote the Aurora read replica into a standalone Aurora DB cluster and point your client applications to it.
Migration will take a while, roughly several hours per tebibyte (TiB) of data. Replication runs somewhat faster for InnoDB tables than it does for MyISAM tables, and it also benefits from the presence of uncompressed tables. If migration speed is a factor, you can improve it by moving your MyISAM tables to InnoDB tables and uncompressing any compressed tables. For further details, refer to Migrating from a MySQL DB instance to Aurora MySQL using Read Replica in the Amazon RDS User Guide.
To use an Aurora read replica to migrate from RDS MySQL, your MySQL database must be running on Amazon RDS MySQL 5.6 or 5.7. This migration method does not work with on-premises databases or databases running on Amazon Elastic Compute Cloud (Amazon EC2). Also, if you are running your Amazon RDS MySQL database on a version earlier than 5.6, you would need to upgrade it to 5.6 as a prerequisite.
Create a read replica using the console
1. To migrate an existing RDS MySQL DB instance, select the instance in the AWS Management Console for RDS (sign-in required), choose Instance Actions, and choose Create Aurora read replica.
2. Specify the values for the Aurora cluster. See Replication with Amazon Aurora. Monitor the progress of the migration in the console. You can also look at the sequence of events in the RDS events console.
3. After the migration is complete, wait for the replica lag to reach zero on the new Aurora read replica, indicating that the replica is in sync with the source.
4. Stop the flow of new transactions to the source MySQL DB instance.
5. Promote the Aurora read replica to a standalone DB cluster.
6. To see whether the process is complete, check Recent events for the new Aurora cluster.
Now you can point your application to use the Aurora cluster's reader and writer endpoints.
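If you prefer to script this flow rather than use the console, the sketch below shows the general shape with the AWS CLI. The identifiers, instance class, and source ARN are placeholders, and the exact engine name depends on the source MySQL version, so verify the parameters against the current RDS CLI reference before using them.

# Create an Aurora read replica cluster of the source RDS MySQL instance
aws rds create-db-cluster \
    --db-cluster-identifier aurora-migration-cluster \
    --engine aurora-mysql \
    --replication-source-identifier arn:aws:rds:us-east-1:123456789012:db:my-mysql-instance
# Add a DB instance to the new replica cluster
aws rds create-db-instance \
    --db-instance-identifier aurora-migration-instance-1 \
    --db-cluster-identifier aurora-migration-cluster \
    --db-instance-class db.r5.large \
    --engine aurora-mysql
# After replica lag reaches zero and writes to the source have stopped,
# promote the replica cluster to a standalone Aurora cluster
aws rds promote-read-replica-db-cluster \
    --db-cluster-identifier aurora-migration-cluster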
Migrating the database schema
RDS DB snapshot migration migrates both the full schema and the data to the new Aurora instance. However, if your source database location or application uptime requirements do not allow the use of RDS snapshot migration, then you first need to migrate the database schema from the source database to the target database before you can move the actual data.
A database schema is a skeleton structure that represents the logical view of the entire database, and it typically includes the following:
• Database storage objects — tables, columns, constraints, indexes, sequences, user-defined types, and data types
• Database code objects — functions, procedures, packages, triggers, views, materialized views, events, SQL scalar functions, SQL inline functions, SQL table functions, attributes, variables, constants, table types, public types, private types, cursors, exceptions, parameters, and other objects
In most situations, the database schema remains relatively static, and therefore you don't need downtime during the database schema migration step. The schema from your source database can be extracted while your source database is up and running, without affecting performance. If your application or developers do make frequent changes to the database schema, make sure that these changes are either paused while the migration is in process or accounted for during the schema migration process.
Depending on the type of your source database, you can use the techniques discussed in the next sections to migrate the database schema. As a prerequisite to schema migration, you must have a target Aurora database created and available.
Homogeneous schema migration
If your source database is MySQL 5.6 compatible and is running on Amazon RDS, Amazon EC2, or outside AWS, you can use native MySQL tools to export and import the schema.
• Exporting the database schema — You can use the mysqldump client utility to export the database schema. To run this utility, you need to connect to your source database and redirect the output of the mysqldump command to a flat file. The --no-data option ensures that only the database schema is exported, without any actual table data. For the complete mysqldump command reference, see mysqldump — A Database Backup Program.
mysqldump -u source_db_username -p --no-data --routines --triggers --databases source_db_name > DBSchema.sql
• Importing the database schema into Aurora — To import the schema to your Aurora instance, connect to your Aurora database from a MySQL command-line client (or a corresponding Windows client) and direct the contents of the export file into MySQL.
mysql -h aurora-cluster-endpoint -u username -p < DBSchema.sql
Note the following:
• If your source database contains stored procedures, triggers, and views, you need to remove the DEFINER syntax from your dump file. A simple Perl command to do that is given below. Doing this creates all triggers, views, and stored procedures with the currently connected user as the DEFINER. Be sure to evaluate any security implications this might have.
perl -pe 's/\sDEFINER=`[^`]+`@`[^`]+`//' < DBSchema.sql > DBSchemaWithoutDEFINER.sql
• Amazon Aurora supports InnoDB tables only. If you have MyISAM tables in your source database, Aurora automatically changes the engine to InnoDB when the CREATE TABLE command is run.
• Amazon Aurora does not support compressed tables (that is, tables created with ROW_FORMAT=COMPRESSED). If you have compressed tables in your source database, Aurora automatically changes ROW_FORMAT to COMPACT when the CREATE TABLE command is run.
Once you have successfully imported the schema into Amazon Aurora from your MySQL 5.6 compatible source database, the next step is to copy the actual data from the source to the target. For more information, see the Introduction and General Approach to AWS DMS later in this paper.
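After the import, a quick sanity check is to compare object counts between the source database and the new Aurora cluster using the data dictionary. This is a minimal sketch; the endpoints and schema name are placeholders.

# Run the same query against the source and the Aurora cluster and compare the results
mysql -h source-db-endpoint -u admin -p -e "SELECT table_type, COUNT(*) FROM information_schema.tables WHERE table_schema = 'source_db_name' GROUP BY table_type;"
mysql -h aurora-cluster-endpoint -u admin -p -e "SELECT table_type, COUNT(*) FROM information_schema.tables WHERE table_schema = 'source_db_name' GROUP BY table_type;"
# Stored procedures and functions can be compared the same way using information_schema.routines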
Heterogeneous schema migration
If your source database isn't MySQL compatible, you must convert your schema to a format compatible with Amazon Aurora. Schema conversion from one database engine to another is a nontrivial task and may involve rewriting certain parts of your database and application code. You have two main options for converting and migrating your schema to Amazon Aurora:
• AWS Schema Conversion Tool — The AWS Schema Conversion Tool makes heterogeneous database migrations easy by automatically converting the source database schema and a majority of the custom code, including views, stored procedures, and functions, to a format compatible with the target database. Any code that cannot be automatically converted is clearly marked so that it can be converted manually. You can use this tool to convert source databases running on either Oracle or Microsoft SQL Server to an Amazon Aurora, MySQL, or PostgreSQL target database in either Amazon RDS or Amazon EC2. Using the AWS Schema Conversion Tool to convert your Oracle, SQL Server, or PostgreSQL schema to an Aurora-compatible format is the preferred method.
• Manual schema migration and third-party tools — If your source database is not Oracle, SQL Server, or PostgreSQL, you can either manually migrate your source database schema to Aurora or use third-party tools to migrate the schema to a format that is compatible with MySQL 5.6. Manual schema migration can be a fairly involved process, depending on the size and complexity of your source schema. In most cases, however, manual schema conversion is worth the effort, given the cost savings, performance benefits, and other improvements that come with Amazon Aurora.
Schema migration using the AWS Schema Conversion Tool
The AWS Schema Conversion Tool provides a project-based user interface to automatically convert the database schema of your source database into a format that is compatible with Amazon Aurora. It is highly recommended that you use the AWS Schema Conversion Tool to evaluate the database migration effort and for a pilot migration before the actual production migration.
The following description walks you through the high-level steps of using the AWS Schema Conversion Tool. For detailed instructions, see the AWS Schema Conversion Tool User Guide.
1. First, install the tool. The AWS Schema Conversion Tool is available for Microsoft Windows, macOS, Ubuntu Linux, and Fedora Linux. Detailed download and installation instructions can be found in the installation and update section of the user guide. Where you install the AWS Schema Conversion Tool is important: the tool needs to connect to both the source and target databases directly in order to convert and apply the schema, so make sure that the desktop where you install it has network connectivity with the source and target databases.
2. Install JDBC drivers. The AWS Schema Conversion Tool uses JDBC drivers to connect to the source and target databases. In order to use this tool, you must download these JDBC drivers to your local desktop. Instructions for driver download can be found in Installing the required database drivers in the AWS Schema Conversion Tool User Guide. Also check the AWS forum for the AWS Schema Conversion Tool for instructions on setting up JDBC drivers for different database engines.
3. Create a target database. Create an Amazon Aurora target database. For instructions on creating an Amazon Aurora database, see Creating an Amazon Aurora DB Cluster in the Amazon RDS User Guide.
4. Open the AWS Schema Conversion Tool and start the New Project Wizard.
Figure 3 — Create a new AWS Schema Conversion Tool project
5. Configure the source database and test connectivity between the AWS Schema Conversion Tool and the source database. Your source database must be reachable from your desktop for this to work, so make sure that you have the appropriate network and firewall settings in place.
Figure 4 — Create New Database Migration Project wizard
6. On the next screen, select the schema of your source database that you want to convert to Amazon Aurora.
Figure 5 — Select Schema step of the migration wizard
7. Run the database migration assessment report. This report provides important information regarding the conversion of the schema from your source database to your target Amazon Aurora instance. It summarizes all of the schema conversion tasks and details the action items for parts of the schema that cannot be automatically converted to Aurora. The report also includes estimates of the amount of effort that it will take to write the equivalent code in your target database for the parts that could not be automatically converted.
8. Choose Next to configure the target database. You can view this migration report again later.
Figure 6 — Migration report
9. Configure the target Amazon Aurora database and test connectivity between the AWS Schema Conversion Tool and the target database. Your target database must be reachable from your desktop for this to work, so make sure that you have the appropriate network and firewall settings in place.
10. Choose Finish to go to the project window.
11. Once you are at the project window, you have already established a connection to the source and target databases and are now ready to evaluate the detailed assessment report and migrate the schema.
12. In the left panel, which displays the schema from your source database, choose a schema object to create an assessment report for. Right-click the object and choose Create Report.
Figure 7 — Create migration report
The Summary tab displays the summary information from the database migration assessment report. It shows items that were automatically converted and items that could not be automatically converted. For schema items that could not be automatically converted to the target database engine, the summary includes an estimate of the effort that it would take to create a schema in your target DB instance that is equivalent to your source database. The report categorizes the estimated time to convert these schema items as follows:
• Simple – Actions that can be completed in less than one hour
• Medium – Actions that are more complex and can be completed in one to four hours
• Significant – Actions that are very complex and will take more than four hours to complete
Figure 8 — Migration report
Important: If you are evaluating the effort required for your database migration project, this assessment report is an important artifact to consider. Study the assessment report in detail to determine what code changes are required in the database schema and what impact the changes might have on your application functionality and design.
13. The next step is to convert the schema. The converted schema is not immediately applied to the target database. Instead, it is stored locally until you explicitly apply the converted schema to the target database. To convert the schema from your source database, choose a schema object to convert from the left panel of your project. Right-click the object and choose Convert schema, as shown in the following illustration.
Figure 9 — Convert schema
This action adds the converted schema to the right panel of the project window and shows the objects that were automatically converted by the AWS Schema Conversion Tool.
You can respond to the action items in the assessment report in different ways:
• Add the equivalent schema manually — You can write the portion of the schema that can be automatically converted to your target DB instance by choosing Apply to database in the right panel of your project. The schema that is written to your target DB instance won't contain the items that couldn't be automatically converted; those items are listed in your database migration assessment report. After applying the schema to your target DB instance, you can then manually create the schema in your target DB instance for the items that could not be automatically converted. In some cases, you may not be able to create an equivalent schema in your target DB instance, and you might need to redesign a portion of your application and database to use the functionality that is available from the DB engine of your target DB instance. In other cases, you can simply ignore the schema that can't be automatically converted.
Caution: If you manually create the schema in your target DB instance, do not choose Apply to database until after you have saved a copy of any manual work that you have done. Applying the schema from your project to your target DB instance overwrites schema of the same name in the target DB instance, and you lose any updates that you added manually.
• Modify your source database schema and refresh the schema in your project — For some items, you might be best served by modifying the database schema in your source database to a schema that is compatible with your application architecture and that can also be automatically converted to the DB engine of your target DB instance. After updating the schema in your source database and verifying that the updates are compatible with your application, choose Refresh from Database in the left panel of your project to update the schema from your source database. You can then convert your updated schema and generate the database migration assessment report again. The action item for your updated schema no longer appears.
14. When you are ready to apply your converted schema to your target Aurora instance, choose the schema element from the right panel of your project. Right-click the schema element and choose Apply to database, as shown in the following figure.
Figure 10 — Apply schema to database
Note: The first time that you apply your converted schema to your target DB instance, the AWS Schema Conversion Tool adds an additional schema (AWS_ORACLE_EXT or AWS_SQLSERVER_EXT) to your target DB instance. This schema implements system functions of the source database that are required when writing your converted schema to your target DB instance. Do not modify this schema, or you might encounter unexpected results in the converted schema that is written to your target DB instance.
When your schema is fully migrated to your target DB instance and you no longer need the AWS Schema Conversion Tool, you can delete the AWS_ORACLE_EXT or AWS_SQLSERVER_EXT schema.
The AWS Schema Conversion Tool is an easy-to-use addition to your migration toolkit. For additional best practices related to the AWS Schema Conversion Tool, see the Best practices for the AWS SCT topic in the AWS Schema Conversion Tool User Guide.
Migrating data
After the database schema has been copied from the source database to the target Aurora database, the next step is to migrate the actual data from source to target. While data migration can be accomplished using different tools, we recommend moving data using the AWS Database Migration Service (AWS DMS), as it provides both the simplicity and the features needed for the task at hand.
Introduction and general approach to AWS DMS
The AWS Database Migration Service (AWS DMS) makes it easy for customers to migrate production databases to AWS with minimal downtime. You can keep your applications running while you are migrating your database. In addition, the AWS Database Migration Service ensures that data changes to the source database that occur during and after the migration are continuously replicated to the target. Migration tasks can be set up in minutes in the AWS Management Console.
The AWS Database Migration Service can migrate your data to and from widely used database platforms, such as Oracle, SQL Server, MySQL, PostgreSQL, Amazon Aurora, MariaDB, and Amazon Redshift. The service supports homogeneous migrations, such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or SQL Server to MySQL. You can perform one-time migrations, or you can maintain continuous replication between databases without having to install or configure any complex software.
AWS DMS works with databases that are on premises, running on Amazon EC2, or running on Amazon RDS. However, AWS DMS does not work in situations where both the source database and the target database are on premises; one endpoint must be in AWS. AWS DMS supports specific versions of Oracle, SQL Server, Amazon Aurora, MySQL, and PostgreSQL. For currently supported versions, see Sources for data migration. This whitepaper, however, focuses on Amazon Aurora as a migration target.
Migration methods
AWS DMS provides three methods for migrating data:
• Migrate existing data — This method creates the tables in the target database, automatically defines the metadata that is required at the target, and populates the tables with data from the source database (also referred to as a "full load"). The data from the tables is loaded in parallel for improved efficiency. Tables are only created in the case of homogeneous migrations, and secondary indexes aren't created automatically by AWS DMS. Read further for details.
• Migrate existing data and replicate ongoing changes — This method does a full load, as described above, and in addition captures any ongoing changes being made to the source database during the full load and stores them on the replication instance. Once the full load is complete, the stored changes are applied to the destination database until it has been brought up to date with the source database. Additionally, any ongoing changes being made to the source database continue to be replicated to the destination database to keep them in sync. This migration method is very useful when you want to perform a database migration with very little downtime.
• Replicate data changes only — This method just reads changes from the recovery log file of the source database and applies these changes to the target database on an ongoing basis. If the target database is unavailable, these changes are buffered on the replication instance until the target becomes available.
When AWS DMS is performing a full load migration, the processing puts a load on the tables in the source database, which could affect the performance of applications that are using this database at the same time. If this is an issue and you cannot shut down your applications during the migration, consider the following approaches:
  o Running the migration at a time when the application load on the database is at its lowest point
  o Creating a read replica of your source database and then performing the AWS DMS migration from the read replica
Migration procedure
The general outline for using AWS DMS is as follows:
1. Create a target database.
2. Copy the schema.
3. Create an AWS DMS replication instance.
4. Define the database source and target endpoints.
5. Create and run a migration task.
Create target database
Create your target Amazon Aurora database cluster using the procedure outlined in Creating an Amazon Aurora DB Cluster. You should create the target database in the Region and with an instance type that matches your business requirements. Also, to improve the performance of the migration, verify that your target database does not have Multi-AZ deployment enabled; you can enable that once the load has finished.
Copy schema
Additionally, you should create the schema in this target database. AWS DMS supports basic schema migration, including the creation of tables and primary keys. However, AWS DMS doesn't automatically create secondary indexes, foreign keys, stored procedures, user accounts, and so on in the target database. For full schema migration details, see the Migrating the Database Schema section.
Create an AWS DMS replication instance
In order to use the AWS DMS service, you must create an AWS DMS replication instance, which runs in your VPC. This instance reads the data from the source database, performs the specified table mappings, and writes the data to the target database. In general, using a larger replication instance size speeds up the database migration (although the migration can also be gated by other factors, such as the capacity of the source and target databases, connection latency, and so on). Also, your replication instance can be stopped once your database migration is complete.
Figure 11 — AWS Database Migration Service
AWS DMS currently supports burstable, compute-optimized, and memory-optimized instance classes for replication instances. The burstable instance classes are low-cost standard instances designed to provide a baseline level of CPU performance with the ability to burst above the baseline. They are suitable for developing, configuring, and testing your database migration process, as well as for periodic data migration tasks that can benefit from the CPU burst capability. The compute-optimized instance classes are designed to deliver the highest level of processor performance and achieve significantly higher packet-per-second (PPS) performance, lower network jitter, and lower network latency. Use this instance class if you are performing large heterogeneous migrations and want to minimize the migration time. The memory-optimized instance classes are designed for migrations or replications of high-throughput transaction systems, which can consume large amounts of CPU and memory.
AWS DMS storage is primarily consumed by log files and cached transactions. Normally, doing a full load does not require a significant amount of instance storage on your AWS DMS replication instance. However, if you are doing replication along with your full load, then the changes to the source database are stored on the AWS DMS replication instance while the full load is taking place. If you are migrating a very large source database that is also receiving a lot of updates while the migration is in progress, then a significant amount of instance storage could be consumed. The instances come with 50 GB of instance storage, but this can be scaled up as appropriate. Normally, this amount of storage should be more than adequate for most migration scenarios. However, it's always a good idea to pay attention to storage-related metrics; make sure to scale up your storage if you find you are consuming more than the default allocation.
Also, in some extreme cases where very large databases with very high transaction rates are being migrated with replication enabled, it is possible that the AWS DMS replication may not be able to catch up in time. If you encounter this situation, it may be necessary to stop the changes to the source database for some number of minutes in order for the replication to catch up before you repoint your application to the target Aurora database.
Figure 12 — Create replication instance page in the AWS DMS console
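The same step can also be scripted with the AWS CLI, as in the minimal sketch below. The identifier, instance class, subnet group, and security group are placeholders, and the set of available replication instance classes changes over time, so check the current AWS DMS documentation.

# Create a DMS replication instance in your VPC
aws dms create-replication-instance \
    --replication-instance-identifier aurora-migration-ri \
    --replication-instance-class dms.c5.large \
    --allocated-storage 100 \
    --replication-subnet-group-identifier my-dms-subnet-group \
    --vpc-security-group-ids sg-0123456789abcdef0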
Define database source and target endpoints
A database endpoint is used by the replication instance to connect to a database. To perform a database migration, you must create both a source database endpoint and a target database endpoint. The specified database endpoints can be on premises, running on Amazon EC2, or running on Amazon RDS, but the source and target cannot both be on premises. We highly recommend that you test your database endpoint connection after you define it. The same page used to create a database endpoint can also be used to test it, as explained later in this paper.
Note: If you have foreign key constraints in your source schema, when creating your target endpoint you need to enter the following for Extra connection attributes in the Advanced section:
initstmt=SET FOREIGN_KEY_CHECKS=0
This disables the foreign key checks while the target tables are being loaded, which in turn prevents the load from being interrupted by failed foreign key checks on partially loaded tables.
Figure 13 — Create database endpoint page in the AWS DMS console
Create and run a migration task
Now that you have created and tested your source database endpoint and your target database endpoint, you can create a task to do the data migration. When you create a task, you specify the replication instance that you have created, the database migration method type (discussed earlier), the source database endpoint, and your target database endpoint for your Amazon Aurora database cluster.
Also, under Task Settings, if you have already created the full schema in the target database, you should change the Target table preparation mode to Do nothing rather than using the default value of Drop tables on target. The latter can cause you to lose aspects of your schema definition, such as foreign key constraints, when it drops and recreates tables.
When creating a task, you can create table mappings that specify the source schema along with the corresponding tables to be migrated to the target endpoint. The default mapping method migrates all source tables to target tables of the same name, if they exist. Otherwise, it creates the source table(s) on the target (depending on your task settings). Additionally, you can create custom mappings (using a JSON file) if you want to migrate only certain tables or if you want to have more control over the field and table mapping process. You can also choose to migrate only one schema or all schemas from your source endpoint.
Figure 14 — Create task page in the AWS DMS console
You can use the AWS Management Console to monitor the progress of your AWS Database Migration Service (AWS DMS) tasks. You can also monitor the resources and network connectivity used. The AWS DMS console shows basic statistics for each task, including the task status, percent complete, elapsed time, and table statistics, as the following image shows. Additionally, you can select a task and display performance metrics for that task, including throughput, records per second migrated, disk and memory use, and latency.
Figure 15 — Task status in AWS DMS console
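If you script the endpoint and task creation instead of using the console, the flow looks roughly like the following. All identifiers, ARNs, host names, and credentials are placeholders, and the table-mappings file referenced here is a standard DMS selection-rule JSON document that you would author for your own schemas.

# Source and target endpoints (note the extra connection attribute on the target)
aws dms create-endpoint \
    --endpoint-identifier source-mysql \
    --endpoint-type source \
    --engine-name mysql \
    --server-name source-db.example.com --port 3306 \
    --username admin --password 'source-password'
aws dms create-endpoint \
    --endpoint-identifier target-aurora \
    --endpoint-type target \
    --engine-name aurora \
    --server-name aurora-cluster-endpoint --port 3306 \
    --username admin --password 'target-password' \
    --extra-connection-attributes "initstmt=SET FOREIGN_KEY_CHECKS=0"
# Full load plus ongoing replication, driven by a table-mappings JSON file
aws dms create-replication-task \
    --replication-task-identifier aurora-migration-task \
    --source-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:SOURCEEXAMPLE \
    --target-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:TARGETEXAMPLE \
    --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:INSTANCEEXAMPLE \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json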
Testing and cutover
Once the schema and data have been successfully migrated from the source database to Amazon Aurora, you are ready to perform end-to-end testing of your migration process. The testing approach should be refined after each test migration, and the final migration plan should include a test plan that ensures adequate testing of the migrated database.
Migration testing
Table 2 — Migration testing
• Basic acceptance tests — These pre-cutover tests should be run automatically upon completion of the data migration process. Their primary purpose is to verify whether the data migration was successful. Some common outputs from these tests are the total number of items processed, total number of items imported, total number of items skipped, total number of warnings, and total number of errors. If any of these totals deviate from the expected values, the migration was not successful, and the issues need to be resolved before moving to the next step in the process or the next round of testing.
• Functional tests — These post-cutover tests exercise the functionality of the application(s) using Aurora for data storage. They include a combination of automated and manual tests. The primary purpose of the functional tests is to identify problems in the application caused by the migration of the data to Aurora.
• Nonfunctional tests — These post-cutover tests assess the nonfunctional characteristics of the application, such as performance under varying levels of load.
• User acceptance tests — These post-cutover tests should be run by the end users of the application once the final data migration and cutover are complete. The purpose of these tests is for the end users to decide whether the application is sufficiently usable to meet its primary function in the organization.
Cutover
Once you have completed the final migration and testing, it is time to point your application to the Amazon Aurora database. This phase of the migration is known as cutover. If the planning and testing phases have been run properly, cutover should not lead to unexpected issues.
Pre-cutover actions
• Choose a cutover window — Identify a block of time when you can accomplish cutover to the new database with minimum disruption to the business. Normally you would select a low-activity period for the database (typically nights and/or weekends).
• Make sure changes are caught up — If a near-zero downtime migration approach was used to replicate database changes from the source to the target database, make sure that all database changes are caught up and your target database is not significantly lagging behind the source database.
• Prepare scripts to make the application configuration changes — In order to accomplish the cutover, you need to modify database connection details in your application configuration files. Large and complex applications may require updates to connection details in multiple places. Make sure you have the necessary scripts ready to update the connection configuration quickly and reliably.
• Stop the application — Stop the application processes on the source database and put the source database in read-only mode so that no further writes can be made to the source database (a minimal sketch follows this list). If the source database changes aren't fully caught up with the target database, wait for some time while these changes are fully propagated to the target database.
• Run pre-cutover tests — Run automated pre-cutover tests to make sure that the data migration was successful.
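For the "Stop the application" step, a self-managed MySQL-compatible source can typically be placed in read-only mode with a statement like the one below. On Amazon RDS MySQL this is usually done through the read_only setting in the DB parameter group instead, so treat this as an illustrative sketch rather than a universal procedure; also note that users with elevated privileges can still write while read_only is set.

# Prevent further writes on a self-managed MySQL source before cutover
mysql -h source-db.example.com -u admin -p -e "SET GLOBAL read_only = ON;"
# Verify that replication or the DMS task has fully caught up before repointing the application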
Cutover
• Run cutover — If the pre-cutover checks were completed successfully, you can now point your application to Amazon Aurora. Run the scripts created in the pre-cutover phase to change the application configuration to point to the new Aurora database.
• Start your application — At this point, you may start your application. If you have the ability to stop users from accessing the application while it is running, exercise that option until you have run your post-cutover checks.
Post-cutover checks
• Run post-cutover tests — Run predefined automated or manual test cases to make sure your application works as expected with the new database. It's a good strategy to start by testing read-only functionality of the database before running tests that write to the database.
• Enable user access and closely monitor — If your test cases ran successfully, you may give users access to the application to complete the migration process. Both the application and the database should be closely monitored at this time.
Conclusion
Amazon Aurora is a high-performance, highly available, enterprise-grade database built for the cloud. Leveraging Amazon Aurora can result in better performance and greater availability than other open-source databases, and lower costs than most commercial-grade databases. This paper proposes strategies for identifying the best method to migrate databases to Amazon Aurora and details the procedures for planning and completing those migrations. In particular, AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool are the recommended tools for heterogeneous migration scenarios. These powerful tools can greatly reduce the cost and complexity of database migrations.
Contributors
Contributors to this document include:
• Puneet Agarwal, Solutions Architect, Amazon Web Services
• Chetan Nandikanti, Database Specialist Solutions Architect, Amazon Web Services
• Scott Williams, Solutions Architect, Amazon Web Services
• Jonathan Doe, Solutions Architect, Amazon Web Services
Further reading
For additional information, see:
• Amazon Aurora Product Details
• Amazon Aurora FAQs
• AWS Database Migration Service
• AWS Database Migration Service FAQs
Document history
July 28, 2021 — Reviewed for technical accuracy
June 10, 2016 — First publication
ArchivedModernize Your Microsoft Applications on Amazon Web Services How to Start Your Journey March 201 6 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 2 of 14 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 3 of 14 Contents Abstract 3 Why Modernize Applications? 4 Why Run Microsoft Applications on AWS? 5 AWS for Corporate Applications 5 AWS for LoB Applications and Databases 5 AWS for Developers 5 Which Microsoft Applications Can I Run on AWS? 6 How Do I Get Started? 6 Security and Access 7 Compute: Windows Server Running on EC2 Instances 9 Databases: SQL Server Running on Amazon RDS or EC2 10 Management Services: Amazon CloudWatch AWS CloudTrail Run Command 11 Complete the Solution with the AWS Marketplace 12 Licensing Considerations 13 Conclusion 14 Abstract The cloud is now the center of most enterprise IT strategies Many enterprises find that a well planned “lift and shift” move to the cloud results in an immediate business payoff This whitepaper is intended for IT pros and business decision makers in Microsoftcentric organizations who want to take a cloudbased approach to IT and must modernize existing businesscritical applications built on Microsoft Windows Server and Microsoft SQL Server This paper covers the benefits of modernizing applications on Amazon Web Services (AWS) and how to get started on the journey ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 4 of 14 Why Modernize Applications? 
For m any IT organizations application modernization is a major initiative for a few major reasons:  Move off legacy software To avoid the time cost and performance and reliability challenges of maintaining legacy software and unsupported versions (Windows Server 2003 SQL Server 2003 and SQL Server 2005)  DevOps Initiatives To take advantage of new DevOps and application lifecycle management methodologies By moving to new application delivery platforms companies can increase the speed of innovation  Mobility initiatives As users move to mobile devices the use of IT services can increase by one or more orders of magnitude This poses scalability challenges if an application is not prepared for that kind of growth  New product launches New product launches can cause rapid spikes in demand for IT The underlying applications including Microsoft SQL Server and Microsoft SharePoint must be ready with the scale required to support the launch  Mergers and acquisitions (M&A) activity In the case of mergers and acquisitions complexity builds up over time After multiple acquisitions a company may find itself in possession of several hundred SharePoint sites multiple Exchange instances and countless SQL Server databases Streamlining the management of disparate applications is often a huge undertaking ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 5 of 14 Why Run Microsoft Applications on AWS? In a recent survey1 International Data Corporation (IDC) reported that 50 percent of respondents were using AWS to support productivity applications like those from Microsoft Of that number 65 percent said they planned to increase their use of AWS either to move existing applications or to expand applications already running on AWS Clearly customers are already making the move to modernize their Microsoft applications AWS for Corporate Applications Customers can improve their security posture and application performance and reliability by running corporate applications built on Microsoft Windows Server in the AWS cloud For example customers can deploy a globally accessible SharePoint environment in any of the 33 AWS Availability Zones in a matter of hours To reduce complexity customers can use AWS tools that integrate with Microsoft management and access control applications like System Center and Active Directory Customers can also use AWS CloudFormation templates to perform application deployments reliably and repeatedly AWS for LOB Applications and Databases Line of business (LOB) owners are running applications in areas as diverse as oil and gas exploration retail point of sale (POS) finance health care insurance pharmaceuticals media and entertainment and more To accelerate and simplify the time to deployment customers can launch preconfigured Amazon Machine Image (AMI) templates with fully compliant Microsoft Windows Server and Microsoft SQL Server licenses included AWS for Developers Customers who develop on AWS have access to Microsoft development tools including Visual Studio PowerShell and the NET Developer Center When these tools are combin ed with scalability and agility of AWS CodeDeploy AWS Elastic 1 http://wwwidccom/getdocjsp?containerId=256654 ArchivedAmazon Web Services – Modernize Your Microsoft Applications on AWS March 2016 Page 6 of 14 Beanstalk (Elastic Beanstalk) and AWS OpsWorks customers can complete and deploy code on AWS much faster and with lower risk Which Microsoft Applications Can I Run on AWS? 
Which Microsoft Applications Can I Run on AWS?
Customers have successfully deployed virtually every Microsoft application to the AWS cloud, including:
• Microsoft Windows Server
• Microsoft SQL Server
• Microsoft Active Directory
• Microsoft Exchange Server
• Microsoft Dynamics CRM and Dynamics AX (Dynamics ERP)
• Microsoft SharePoint Server
• Microsoft System Center
• Skype for Business (formerly Microsoft Lync)
• Microsoft Project Server
• Microsoft Visual Studio Team Foundation Server
• Microsoft BizTalk Server
• Microsoft Remote Desktop Services

How Do I Get Started?
For enterprises, the first step is to determine which of the more than 50 AWS services will be used to support their application modernization initiative. The following figure shows how the typical functions of an enterprise IT organization map to AWS offerings. This paper discusses some of the key services in this map and how they fit into a Microsoft application modernization initiative.

Figure 1: A Conceptual Map of Enterprise IT with Amazon Web Services

Security and Access
"We worked with AWS to develop a security model that allows us to be more secure in AWS than we can be even in our own data centers." — Rob Alexander, CIO, Capital One

With the increasing concern and focus on security, most customers start here by choosing services that ensure compliance and manage risk. The same security isolations found in a traditional data center are used in the AWS cloud, including physical security, separation of the network, isolation of server hardware, and isolation of storage. AWS has achieved ISO 27001 certification and has been validated as a Level 1 service provider under the Payment Card Industry (PCI) Data Security Standard (DSS). AWS undergoes annual Service Organization Control (SOC) 1 audits, has been successfully evaluated at the Moderate level for federal government systems, and has been evaluated at Department of Defense Information Assurance Certification and Accreditation Process (DIACAP) Level 2 for Department of Defense (DoD) systems.

For many enterprises considering the right set of services for security and permissions, AWS virtual private networks, AWS Direct Connect, and AWS Directory Service are at the heart of the discussion. Amazon Virtual Private Cloud (Amazon VPC) lets customers launch AWS resources into a virtual network that they have defined. This virtual network closely resembles a traditional network in an on-premises data center, but with the benefits of the scalable infrastructure of AWS.

AWS Direct Connect links the organization's internal network to AWS over a private 1 gigabit or 10 gigabit Ethernet fiber-optic cable. One end of the cable is connected to the data center router, the other to an AWS Direct Connect router. With this dedicated connection in place, customers can create virtual interfaces directly to the AWS cloud (for example, to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3)) and to Amazon VPC, bypassing internet service providers in the network path.

AWS Directory Service is a managed service that makes it easy to connect AWS services to an existing on-premises Microsoft Active Directory (through the use of AD Connector) or to set up and operate a new directory in the AWS cloud (through the use of Simple AD and AWS Directory Service for Microsoft Active Directory).
Data encryption services are provided for data in flight (through SSL) and at rest, through options for both server-side and client-side encryption. AWS Certificate Manager (ACM), AWS Key Management Service (AWS KMS), and AWS CloudHSM can be used together to provide key and certificate management services that securely generate, store, and manage the cryptographic keys used for data encryption. Finally, AWS WAF provides web application firewall services to help protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

Compute: Windows Server Running on EC2 Instances
"We didn't have time to redesign applications. AWS could support our legacy 32-bit applications on Windows Server 2003, a variety of Microsoft SQL Server and Oracle databases, and a robust Citrix environment." — Jim McDonald, Lead Architect, Hess

After a security strategy is in place, it's time to look at the infrastructure that will support the applications to be modernized. Amazon EC2 is a web service that provides resizable computing capacity used to build and host software systems. When designing Windows applications to run on Amazon EC2, customers can plan for rapid deployment and rapid reduction of compute and storage resources based on changing needs.

When customers run Windows Server on an EC2 instance, they don't need to provision the exact system package of hardware, virtualization software, and storage the way they do with Windows Server on-premises. Instead, they can focus on using a variety of cloud resources to improve the scalability and overall performance of their Windows applications.

After an Amazon EC2 instance running Windows Server is launched, it behaves like a traditional server running Windows Server. For example, whether Windows Server is deployed on-premises or on an Amazon EC2 instance, it can run web applications, conduct batch processing, or manage applications requiring large-scale computations. Customers can connect directly to Windows Server instances using Remote Desktop Protocol for easy management. They can run PowerShell scripts against a single Windows Server instance or against an entire fleet using the Amazon EC2 Run Command.

Applications built for Amazon EC2 use the underlying computing infrastructure on an as-needed basis. They draw on resources (such as storage and computing) on demand in order to perform a job and relinquish the resources when done. In addition, they often terminate themselves after the job is done. While in operation, the application scales up and down elastically based on resource requirements.

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud. This enables customers to achieve greater fault tolerance in their applications by seamlessly providing the amount of load balancing capacity required to distribute application traffic.

Auto Scaling lets customers follow the demand curve for their applications very closely, reducing the need to manually provision capacity in advance. For example, customers can set a condition to add new Amazon EC2 instances to an Auto Scaling group in increments when the average utilization of the Amazon EC2 fleet is high; similarly, they can set a condition to remove instances in the same increments when CPU utilization is low.
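The utilization-based scaling condition described above can also be expressed as a single target tracking policy, which is one way (not the only way) to implement it. The following sketch in Python with boto3 keeps the average CPU utilization of a hypothetical Auto Scaling group near 60 percent; the group name and target value are illustrative assumptions, not recommendations from this paper.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking: Auto Scaling adds or removes instances to hold the
# group's average CPU utilization close to the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="windows-web-fleet",   # hypothetical group name
    PolicyName="keep-average-cpu-at-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,      # assumed target; tune for your workload
        "DisableScaleIn": False,  # allow the group to shrink when load drops
    },
)
```

With a policy of this kind, separate scale-out and scale-in alarms are not required; the service manages both sides of the demand curve.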
Databases: SQL Server Running on Amazon RDS or Amazon EC2
"Amazon Relational Database Service (Amazon RDS) allows our DBA team to focus less on the day-to-day maintenance and use their time to work on enhancements. And Elastic Load Balancing has allowed us to move away from expensive and complicated load balancers and retain the required functionality." — Chad Marino, Director of Technology Services, Kaplan

Another key building block in modernization planning is the choice of database services. Customers who want to manage, scale, and tune SQL Server deployments in the cloud can use Amazon RDS or run SQL Server on Amazon EC2.

Customers who prefer to let AWS handle the day-to-day management of SQL Server databases choose Amazon RDS because the service makes it easy to set up, operate, and scale a relational database in the cloud. Amazon RDS automates installation, disk provisioning and management, patching, minor version upgrades, failed instance replacement, and backup and recovery of SQL Server databases. Amazon RDS also offers automated synchronous replication across multiple Availability Zones (Multi-AZ) for a highly available and scalable environment fully managed by AWS. This allows customers to focus on higher-level tasks such as schema optimization, query tuning, and application development, and to eliminate the undifferentiated work that goes into maintenance and operation of the databases. Amazon RDS for SQL Server supports Windows Authentication, making it easier for customers to access and manage Amazon RDS for SQL Server instances.

Amazon RDS for SQL Server supports Microsoft SQL Server Express, Web, Standard, and Enterprise Editions. SQL Server Express is available at no additional licensing cost and is suitable for small workloads or proof-of-concept deployments. SQL Server Web Edition is best for public, internet-accessible web workloads. SQL Server Standard Edition is suitable for most SQL Server workloads and can be deployed in Multi-AZ mode. SQL Server Enterprise Edition is the most feature-rich edition of SQL Server and can also be deployed in Multi-AZ mode.

Management Services: Amazon CloudWatch, AWS CloudTrail, Run Command
"The way CSS automated launching instances reduced the time to launch a project by about 75 percent. What used to take four days now only takes one day. We're not rebuilding web and database servers from the ground up all the time. We can just clone and reuse images." — Nick Morgan, Enterprise Architect, Unilever

AWS provides a comprehensive set of management services for the enterprise:
• Amazon CloudWatch – Customers can use Amazon CloudWatch to monitor, in real time, AWS resources and applications running on AWS. CloudWatch alarms send notifications or, based on rules that customers define, make changes automatically to the monitored resources.
• AWS CloudTrail – With AWS CloudTrail, customers can monitor their AWS deployments in the cloud by getting a history of AWS API calls made in their account, including API calls made through the AWS Management Console, the AWS SDKs, command line tools, and higher-level AWS services. Customers can also identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address from which the calls were made, and when the calls occurred. CloudTrail can be integrated into applications using the API to automate trail creation for the organization, check the status of trails, and control how administrators turn CloudTrail logging on and off.
• Amazon EC2 Run Command – For automating common administrative tasks like patch management or configuration updates that apply across hundreds of virtual machines, customers can use the Amazon EC2 Run Command, which provides a simple method for running PowerShell scripts. Run Command is integrated with AWS Identity and Access Management (IAM), ensuring administrators have access to updates for only those machines they own. All updates are audited through AWS CloudTrail.
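Run Command invocations can also be issued programmatically. The following is a minimal sketch, in Python with boto3, that runs a PowerShell command against instances carrying a hypothetical tag; the tag key, tag value, and command are placeholder assumptions, not values from this paper.

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Send a PowerShell command to every instance tagged Role=web.
response = ssm.send_command(
    Targets=[{"Key": "tag:Role", "Values": ["web"]}],   # hypothetical tag
    DocumentName="AWS-RunPowerShellScript",
    Parameters={
        # List the five most recently installed Windows updates.
        "commands": ["Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object -First 5"]
    },
    Comment="Patch-level audit across the web fleet",
)

command_id = response["Command"]["CommandId"]
print("Submitted command:", command_id)
# Per-instance output can later be fetched with:
#   ssm.get_command_invocation(CommandId=command_id, InstanceId="i-0123456789abcdef0")
```

Because the invocation is just an API call, it can be wired into patching schedules or configuration pipelines, and every execution remains auditable through CloudTrail as described above.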
AWS add-ins for Microsoft System Center extend the functionality of existing System Center implementations for use with Microsoft System Center Operations Manager and Microsoft System Center Virtual Machine Manager. After installation, customers can use the familiar System Center interface to view and manage Amazon EC2 for Microsoft Windows Server resources in the AWS cloud as well as Windows Servers installed on-premises.

Complete the Solution with the AWS Marketplace
Customers often have a preferred ISV for specialized software solutions for enhanced security, business intelligence, storage, and more. AWS Marketplace is an online store that makes it easy for customers to discover, purchase, and deploy the software and services they need to build solutions and run their businesses. With more than 2,600 listings across more than 35 categories, the AWS Marketplace simplifies software licensing and procurement by enabling customers to accept user agreements, choose pricing options, and automate the deployment of software and associated AWS resources with just a few clicks. AWS Marketplace also simplifies billing for customers by delivering a single invoice detailing business software and AWS resource usage on a monthly basis. The AWS Marketplace includes offerings from SAP, Tableau, NetApp, Trend Micro, F5 Networks, and many more. Customers have access to Microsoft applications such as Microsoft Windows Server, Microsoft SQL Server, and Microsoft SharePoint custom AMIs through Marketplace partners.

Licensing Considerations
Customers have options for using new and existing Microsoft software licenses in the AWS cloud. For new applications, customers can purchase Amazon EC2 or Amazon RDS instances with a license included. With this approach, customers get new, fully compliant Windows Server and SQL Server licenses directly from AWS. Customers can use them on a "pay as you go" basis with no upfront costs or long-term investments. Customers can choose from AMIs with just Microsoft Windows Server, or with Windows Server and Microsoft SQL Server already installed. Client access licenses (CALs) are included.

Customers who have already purchased Microsoft software have a "bring your own license" (BYOL) option, which is allowed by Microsoft under the Microsoft License Mobility policy through Software Assurance. Microsoft's License Mobility program allows customers who already own Windows Server or Microsoft SQL Server licenses to run their deployment on Amazon EC2 and Amazon RDS. This benefit is available to Microsoft Volume Licensing (VL) customers with Windows Server and SQL Server licenses (currently including Standard and Enterprise Editions) covered by Microsoft Software Assurance contracts. In cases where the customer's license agreement requires control at the socket, core, or per-VM level, customers can use Amazon EC2 Dedicated Hosts, which provide dedicated physical servers that make it possible to track license consumption and compliance and report it to Microsoft or ISVs.
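Allocating a Dedicated Host and pinning an instance to it can be scripted. The following is a minimal sketch, in Python with boto3, under the assumption of a single host in one Availability Zone; the instance type, AMI ID, and Availability Zone are placeholders, not guidance from this paper.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate one Dedicated Host sized for the instance family you license.
host = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="r4.xlarge",     # hypothetical instance type
    Quantity=1,
    AutoPlacement="off",          # launch only instances explicitly targeted at this host
)
host_id = host["HostIds"][0]

# Launch a BYOL Windows instance onto that specific host.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID for your own image
    InstanceType="r4.xlarge",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
print("Allocated host:", host_id)
```

Keeping the host identifier alongside license records makes it easier to report socket- or core-level consumption back to the license owner.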
Conclusion
This paper describes the benefits of modernizing your applications on Amazon Web Services and how you can get started on the journey. It shows how you can benefit from running corporate applications, LOB and database applications, or developing new applications using the AWS platform for your modernization initiative, and it recommends the AWS services to consider first as you begin modernizing your applications on AWS.
General
Right_Sizing_Provisioning_Instances_to_Match_Workloads
Right Sizing Provisioning Instances to Match Workloads January 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Right Size Before Migrating 1 Right Sizing is an Ongoing Process 1 Overview of Amazon EC2 and Amazon RDS Instance Families 2 Identifying Opportunities to Right Size 4 Tools for Right Sizing 4 Tips for Developing Your Own Right Sizing Tools 5 Tips for Right Sizing 6 Right Size Using Performance Data 6 Right Size Based on Usage Needs 8 Right Size by Turning Off Idle Instances 8 Right Size by Selecting the Right Instance Family 9 Right Size Your Database Instances 10 Conclusion 10 Contributors 11 Document Revisions 11 Abstract This is the seventh in a series of whitepapers designed to support your cloud journey This paper seeks to empower you to maximize value from you r investments improve forecasting accuracy and cost predictability create a culture of ownership and cost transparency and continuously me asure your optimization status This paper discusses how to provision instances to match your workload performance and capacity requirements to optimize costs Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 1 Introduction Right sizing is the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost It’s also the process of looking at deployed instances and identifying opportunities to eliminate or downsize without compromising capacity or other requirem ents which result s in lower costs Right sizing is a key mechanism for optimizing AWS costs but it is often ignored by organizations when they first move to the AWS Cloud They lift and shift their environments and expect to right size later Speed and performance are often prioritized over cost which result s in oversized instances and a lot of wasted spend on un used resources Right Siz e Before Migrati ng One reason for the waste is the mindset to overprovision that many IT professionals bring with them when they build their cloud infrastructure Historically IT departments have had to provision for peak demand However cloud environments minimize costs because capacity is provisioned based on averag e usage rather than peak usage When you learn how to right size you can save up to 70 % percent on your monthly bill The key to right sizing is to understand precis ely your organization’s usage needs and patterns and know how to take advantage of the elasticity of the AWS Cloud to respond to those needs By right sizing before a migration you can significantly reduce your infrastructure costs If you skip right sizing to save time your migration speed might be faster but you will end up with higher cloud infrastructure spend for a potentially long time Right Sizing is a n Ongoing Process To achieve 
cost optimization righ t sizing must become an ongoing process within your organization It’s important to right size when you first consider moving to the cloud and calculate total cost of ownership but it’s equally Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 2 important to right size periodically once you’re in the cloud to ensure ongoing costperformance optimization Why is it necessary to right size continually? Even if you right size workloads initially performance and capacity requirements can change over time which can result in underused or idle resources Additi onally new projects and workloads require additional cloud resources and overprovisioning is the likely outcome if there is no process in place to support right sizing and other cost optimization efforts You should r ight siz e your workloads at least once a month to control costs You can make ri ght sizing a smooth process by: • Having each team set up a right sizing schedule and then re port the savings to management • Monitoring costs closely using AWS cost and reporting tools such as Cost Explorer budgets and detailed billing reports in the Billi ng and Cost Management console • Enforcing tagging for all instances so that you can quickly identify attributes such as the instance own er application and environment (deve lopment/testing or production) • Understanding how to right size We first describe the types of instances that AWS offers and then discuss key considerations for right sizing your instances Overview of Amazon EC2 and Amazon RDS Instance Families Picking an Amazon Elastic Compute Cloud (Amazon EC2) instance for a given workload means finding the instance family that most closely matches the CPU and m emory needs of your workload Amazon EC2 provides a wide selection of instances which gives you lots of flexibility to right size your compute resources to match capacity needs at the lowest cost There are five families of EC2 instances with different op tions for CPU memory and network resources: Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 3 • General purpose (includes T2 M3 and M4 instance types) – T2 instances are a very low cost option that provide a small amount of CPU resources that can be increased in short bursts when additional cycles are available They are well suited for lower throughput applications such as administrative applic ations or low traffic websites M3 and M4 instances provide a balance of CPU memory and network resources and are ideal for running small and midsize database s more memory intensive data processing tasks caching fleets and backend servers • Compute optimized (includes the C3 and C4 instance types ) – Have a higher ratio of virtual CPUs to memory than the other families and the lowest cost per virtual CPU of all the EC2 instance types Consider compute optimized instances first if you are running CPU bound scale out applications such as frontend fleets for high traffic websites on demand batch processing distributed analytics web servers video encoding a nd high performance scienc e and engineering applications • Memory optimized (includes the X1 R3 and R4 instance types ) – Designed for memory intensive applications these instances have the lowest cost per GiB of RAM of all EC2 instance types Use these instances if your application is memory bound • Storage optimized (includes the I3 and D2 instance types ) – Optimized to deliver tens of thousands of low latency random input/output ( I/O) operations 
per second (IOPS) to applications Storage optimize d instances are best for large deployments of NoSQL databases I3 instances are designed for I/O intensive workloads and equipped with super efficient NVMe SSD storage These instances can deliver up to 33 million IOPS in 4 KB blocks and up to 16 GB/secon d of sequential disk throughput D2 or dense storage instances are designed for workloads that require high sequential read and write access to very large data sets such as Hadoop distributed computing massively parallel processing data warehousing and logprocessing applications Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 4 • Accelerated computing (includes the P2 G3 and F1 instance types ) – Provide access to hardware based compute accelerators such as graphics processing units (GPUs) or field programmable gate arrays (FPGAs) Accelerated computin g instances enable more parallelism for higher throughput on compute intensive workloads Amazon Relational Database Service (Amazon RDS) database instances are similar to Amazon EC2 instances in that there are d ifferent families to suit different workloads These database instance families are optimized for memory performance or I/O: • Standard performance (includes the M3 and M4 instance types ) – Designed for general purpose database workloads that don’t run man y inmemory functions This family has the most options for provisioning increased IOPS • Burstable performance (includes T2 instance types ) – For workloads that require burstable performance capacity • Memory optimized (includes the R3 and R4 instance types ) – Optimized for in memory functions and big data analysis Identifying Opportunities to Right Size The first step in right sizing is to monitor and analyze your current use of services to gain insight into instance performance and usage patterns To gather sufficient data observe performance over at least a two week period (ideally over a onemonth period ) to capture the workload and business peak The most common metrics that define instance performance are vCPU utilization memory utilization network utilization and ephemeral disk use In rare cases where instances are selected for reasons other than these metrics it is important for the technical owner to review the right sizing effort Tools for Right Sizing You can use t he following tools to evaluate costs and monitor and analyze instance usage for right sizing : Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 5 • Amazon CloudWatch – Lets you observe CPU utilization network throughput and disk I/O and match the observed peak metrics to a new and cheaper instance type You can also regularly monitor Amazon EC2 Usage Reports which are updated several times a day and provide in depth usag e data for all your EC2 instances Typically this is feasible only for small environments given the time and effort required • AWS Cost Explorer – This free tool lets you dive de eper into your cost and usage data to identify trends pinpoint cost drivers and detect anomalies It includes Amazon EC2 Usage Reports which let you analyze the cost and usage of your EC2 ins tances over the last 13 months • AWS Trusted Advisor – Lets you inspect your AWS environment to identify idle and underutilized resources and provide s real time insight into service usage to help you improve system performance and reliability increase security and look for opportunities to save money • Third party monitoring tools such as CloudHealth Cloudability 
and CloudCheckr are also an option to automatically identify opportunities and suggest alternative instances. These tools have years of development effort and customer feedback built into them. They also provide additional cost management and optimization functionality.

Tips for Developing Your Own Right Sizing Tools
You can also develop your own tools for monitoring and analyzing performance. The following guidelines can help if you are considering this option:
• Focus on instances that have run for at least half the time period you're looking at.
• Focus on instances with lower Reserved Instance coverage.
• Exclude resources that have been switched off (reducing search effort).
• Avoid conversions to older-generation instances where possible.
• Apply a savings threshold below which right sizing is not worth considering.
• Make sure the following conditions are met before you switch to a new instance:
  o The vCPU count of the new instance is equal to that of the old instance, or the application's observed vCPU usage is less than 80% of the vCPU capacity of the new instance.
  o The memory of the new instance is equal to that of the old instance, or the application's observed memory peak is less than 80% of the memory capacity of the new instance. Note: You can capture memory utilization metrics by using monitoring scripts that report these metrics to Amazon CloudWatch. For more information, see Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances.
  o The network throughput of the new instance is equal to that of the old instance, or the application's network peak is less than the network capacity of the new instance. Note: Maximum NetworkIn and NetworkOut values are measured in bytes per minute. Use the following formula to convert these metrics to megabits per second: Maximum NetworkIn (or NetworkOut) × 8 (bytes to bits) / 1024 / 1024 / 60 = Mbps.
  o If the ephemeral storage disk I/O is less than 3,000 IOPS, you can use Amazon Elastic Block Store (Amazon EBS) storage. If not, use instance families that have ephemeral storage. For more information, see Amazon EBS Volume Types.
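The switch-over conditions and the unit conversion above can be captured in a small helper. The following is an illustrative sketch in Python; the 80% headroom threshold and the byte-per-minute conversion mirror this section's guidelines, while the function names, variable names, and example numbers are the editor's assumptions.

```python
def bytes_per_minute_to_mbps(value):
    """Convert a CloudWatch NetworkIn/NetworkOut datapoint (bytes/minute) to Mbps."""
    return value * 8 / 1024 / 1024 / 60


def fits_new_instance(observed, new_instance, headroom=0.80):
    """Return True if observed peaks leave enough headroom on the candidate instance.

    observed:     dict with 'vcpu_used', 'memory_gib_peak', 'network_mbps_peak'
    new_instance: dict with 'vcpus', 'memory_gib', 'network_mbps'
    """
    return (
        observed["vcpu_used"] <= headroom * new_instance["vcpus"]
        and observed["memory_gib_peak"] <= headroom * new_instance["memory_gib"]
        and observed["network_mbps_peak"] <= new_instance["network_mbps"]
    )


# Example: a peak NetworkIn of 52,428,800 bytes/minute is roughly 6.7 Mbps.
peak_mbps = bytes_per_minute_to_mbps(52_428_800)
candidate = {"vcpus": 2, "memory_gib": 8, "network_mbps": 750}
usage = {"vcpu_used": 1.2, "memory_gib_peak": 5.9, "network_mbps_peak": peak_mbps}
print(fits_new_instance(usage, candidate))  # True under these assumed numbers
```

A helper like this can be run against the peak metrics gathered over the observation window before committing to a smaller instance type.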
Tips for Right Sizing
This section offers tips to help you right size your EC2 instances and RDS DB instances.

Right Size Using Performance Data
Analyze performance data to right size your EC2 instances. Identify idle instances and ones that are underutilized. Key metrics to look for are CPU usage and memory usage. Identify instances with a maximum CPU usage and memory usage of less than 40% over a four-week period. These are the instances that you will want to right size to reduce costs.

For compute optimized instances, keep the following in mind:
• Focus on very recent instance data (old data may not be actionable).
• Focus on instances that have run for at least half the time period you're looking at.
• Ignore burstable instance families (T2 instance types), because these families are designed to run at low CPU percentages for significant periods of time.

For storage optimized instances (I2 and D2 instance types), where the key feature is high data IOPS, focus on IOPS to see whether instances are overprovisioned. Keep the following in mind for storage optimized instances:
• Different size instances have different IOPS ratings, so tailor your reports to each instance type. Start with your most commonly used storage optimized instance type.
• Peak NetworkIn and NetworkOut values are measured in bytes per minute. Use the following formula to convert these metrics to megabits per second: Maximum NetworkIn (or NetworkOut) × 8 (bytes to bits) / 1024 / 1024 / 60 = Mbps.
• Take note of how I/O and CPU percentage metrics change during the day and whether there are peaks that need to be accommodated.

Right size against memory if you find that maximum memory utilization over a four-week period is less than 40%. AWS provides sample scripts for monitoring memory and disk space utilization on your EC2 instances running Linux. You can configure the scripts to report the metrics to Amazon CloudWatch.

When analyzing performance data for Amazon RDS DB instances, focus on the following metrics to determine whether actual usage is lower than instance capacity:
• Average CPU utilization
• Maximum CPU utilization
• Minimum available RAM
• Average number of bytes read from disk per second
• Average number of bytes written to disk per second

Right Size Based on Usage Needs
As you monitor current performance, identify the following usage needs and patterns so that you can take advantage of potential right sizing options:
• Steady state – The load remains at a relatively constant level over time, and you can accurately forecast the likely compute load. For this usage pattern, you might consider Reserved Instances, which can provide significant savings.
• Variable but predictable – The load changes, but on a predictable schedule. Auto Scaling is well suited for applications that have stable demand patterns with hourly, daily, or weekly variability in usage. You can use this feature to scale Amazon EC2 capacity up or down when you experience spiky traffic or predictable fluctuations in traffic.
• Dev/test/production – Development, testing, and production environments are typically used only during business hours and can be turned off during evenings, weekends, and holidays. (You'll need to rely on tagging to identify dev/test/production instances.)
• Temporary – For temporary workloads that have flexible start times and can be interrupted, you can consider placing a bid for an Amazon EC2 Spot Instance instead of using an On-Demand Instance.

Right Size by Turning Off Idle Instances
The easiest way to reduce operational costs is to turn off instances that are no longer being used. If you find instances that have been idle for more than two weeks, it's safe to stop or even terminate them. Before terminating an instance that's been idle for two weeks or less, consider:
• Who owns the instance?
• What is the potential impact of terminating the instance?
• How hard will it be to re-create the instance if you need to restore it?
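One way to find candidates for shutdown or downsizing is to pull each instance's maximum CPU utilization over the analysis window directly from CloudWatch. The following is an illustrative sketch in Python with boto3; the 40% threshold and four-week window follow this section, while the region and the hourly sampling period are assumptions.

```python
from datetime import datetime, timedelta, timezone
import boto3

REGION = "us-east-1"                      # assumed region
ec2 = boto3.client("ec2", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)

end = datetime.now(timezone.utc)
start = end - timedelta(weeks=4)          # four-week observation window

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,                  # hourly datapoints keep the call under API limits
            Statistics=["Maximum"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        peak_cpu = max(point["Maximum"] for point in datapoints)
        if peak_cpu < 40.0:               # threshold from this section
            print(f"{instance_id}: peak CPU {peak_cpu:.1f}% -- right sizing candidate")
```

Combining this output with ownership tags answers the three questions above before any instance is stopped or terminated.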
Stopping an EC2 instance leaves any attached EBS volumes operational You will continue to be charged for these volumes until you delete them If you need the instance again you can easily turn it back on Terminating an instance Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 9 however automatically deletes attached EBS volumes and requires effort to re provision should the instance be needed again If you decide to delete an EBS volume consider storing a snapshot of the volume so that it can be restored later if needed Another simple way to reduce costs is to stop instances used in development and production during hours when these instances are not in use and then start them again when their capacity is needed Assuming a 50 hour work week you can save 70 % by automatically stopping dev/test/production instances during nonbusiness hours Many to ols are available to automate scheduling including Amazon EC2 Scheduler AWS Lambda and AWS Data Pipeline as well as thirdparty tools s uch as CloudHealth and Skeddly Right Siz e by Selecting the Right Instance Family You can right size an instance by migrating to a different model within the same instance family or by migrating to another instance family When migrating within the same instance family you only need to consider vCPU memory network throughput and ephemeral storage A good general rule for EC2 instances is that if your maximum CPU and memory usage is less than 40 % over a four week period you can safely cut the machine in half For example if you were using a c48xlarge EC2 you could move to a c44xlarge which would save $190 every 10 days When migrating to a different instance family make sure the current instance type and the new instance type are compatible in terms of virtualization type network and platform: • Virtualization type – The instances must have the same Linux AMI virtualization type (PV AMI versus HVM) and platform (EC2 Classic versus EC2 VPC) For more information see Linux AMI Virtualization Types • Network – Some instances are not supported in EC2 Classic and must be launched in a virtual private cloud (VPC) For more information see Instance Types A vailable Only in a VPC Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 10 • Platform – If your current instance type supports 32 bit AMIs make sure to select a new instance type that also supports 32 bit AMIs (not all EC2 instance types do) To check the platform of your instance go to the Instances scree n in the Amazon EC2 console and choose Show/Hide Columns Architecture When you resize an EC2 instance the resized instance usually has the same number of instance store volumes that you specified when you launched the original instance You cannot attac h instance store volumes to an instance after you’ve launched it so if you want to add instance store volumes you will need to migrate to a new instance type that contains the higher number of volumes Right Siz e Your Database Instances You can scale you r database instances by adjusting memory or compute power up or down as performance and capacity requirements change The following are some things to consider when scaling a database instance: • Storage and instance type are decoupled When you scale your database instance up or down your storage size remains the same and is not affected by the change • You can separately modify your Amazon RDS DB instance to increase the allocated storage space or improve the performance by changing the storage type (such a s 
General Purpose SSD to Provisioned IOPS SSD) • Before you scale make sure you have the correct licensing in place for commercial engines (SQL Server Oracle) especially if you Bring Your Own License (BYOL) • Determine when you want to apply the change Y ou have an option to apply it immediately or during the maintenance window specified for the instance Conclusion Right sizing is the most effective way to control cloud costs It involves continually analyzing instance performance and usage needs and patterns — and then turning off idle instances and right sizing instances that are either overprovisioned or poorly matc hed to the workload Because your resource needs are always changing right sizing must become an ongoing process to Amazon Web Services – Right Sizing : Provisioning Instances to Match Workloads Page 11 continually achieve cost optimization You can make right sizing a smooth process by establishing a right sizing schedule for each team en forcing tagging for all instances and taking full advantage of the powerful tools that AWS and others provide to simplify resource monitoring and analysis Contributors Contributors to this document include: • Amilcar Alfaro Sr Product Marketing Manager AWS • Erin Carlson Marketing Manager AWS • Keith Jarrett WW BD Lead – Cost Optimization AWS Business Development Document Revisions Date Description January 2020 Minor revisions March 2018 First publication
General
Running_Adobe_Experience_Manager_on_AWS
Running Adobe Experience Manager  on AWS First published July 2016 Updated November 25 202 0 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Why use AEM on AWS? 1 Adobe Experien ce Manager Overview 3 AEM Platform Overview 3 Repositories 4 AEM Implementation on AWS 6 Self or Partner Managed Deployment 6 AEM Managed Services 6 Architecture Options 7 Reference Architecture 7 Reference Ar chitecture Components 7 AEM OpenCloud 11 Security 15 Compliance and GovCloud 17 Digital Asset Management 18 Automate d Deployment 18 Automated Operations 19 Additional AWS Services 20 Conclusion 20 Contributors 20 Further Reading 21 Document Revisions 21 Abstract This whitepaper outlines the benefits and strategy for hosting for Adobe Experience Manager ( AEM ) on Amazon Web Services ( AWS ) It discusses various migration strategies architecture choices and deployment strategies including a reference architecture for self hosting on AWS It also provides guidance for disaster recovery DevO ps and high compliance workloads su ch as government finance and healthcare This whitepaper is for technical leaders and business leaders responsible for deploying and managing AEM on AWS Amazon Web Services Running Adobe Experience Manager on AWS 1 Introduction Delivering a fast secure and seamless experience is essential i n today’s digital marketing environment The need to reach a broader audience across all devices is essential and a shorter time to market can be a differentiator Companies are turning to cloud based solutions to boost business agility harness new oppor tunities and gain cost efficiencies Adobe Experience Manager (AEM) is a comprehensive content management solution for building websites mobile apps and forms AEM makes it easy to manage your marketing content and assets Adopting AWS for running AEM presents many benefits such as increased business agility added flexibility and reduced costs This whitepaper provides technical guidance for running AEM on AWS With any deployment on AWS there are many different considerations and options so your approach might be different from the approach we walk through in this paper Lastly th is whitepaper concludes by discussing security and compliance architectural components connectivity and a strategy you can employ for migration Why use AEM on AWS? 
Hosting AEM on AWS offers some key benefits such as global capacity security reliability fault tolerance programmability and usability This section discusses several ways in which deploying AEM on AWS is different from deploying it to an onpremises infrastructure Flexible Capacity One of the benefits of using the AWS Cloud is the ability to scale up and down as needed When using AEM you have full freedom to scale all of your environments quickly and cost effectively giving you opportu nities to establish new development quality assurance (QA) and performance testing environments AEM is frequently used in scenarios that have unknown or significant variations in traffic volume The on demand nature of the AWS platform allows you to sca le your workloads to support your unique traffic peaks during key events such as holiday shopping seasons major sporting events and large sale events Amazon Web Services Running Adobe Experience Manager on AWS 2 Flexible capacity also streamlines upgrades and deployments AWS makes it very easy to set up a paral lel environment so you can migrate and test your application and content in a production like environment Performing the actual production upgrade itself can then be as simple as the change of a domain name system (DNS) entry Broad Set of Capabilities As a leading web content management system solution AEM is often used by customers as the foundation of their digital marketing platform Running AEM on AWS provides customers with the benefits of easily integrating third party solutions for auxiliary expe riences such as blogs and provid ing additional tools for supporting mobile delivery analytics and big data management You can integrate the open and extensible APIs of both AWS and AEM to create powerful new combinations for your firm Also AEM can be used to augment or create headless commerce architectures seamlessly With services like Amazon Simple Notification Service ( Amazon SNS) Amazon Simple Queue Service ( Amazon SQS) and AWS Lambda AEM functionality can easily be integrated with other third party functionalit ies in a decoupled fashion AWS can also provide a clean manageable and auditable approach to decoupled integration with backend systems such as Customer Relationship Management (CRM) and commerce systems Benefits of Cloud and Global Availability Organizations considering a transition to the cloud are often driven by their need to become more agile and innovative The traditional capital expenditure (Capex) funding model makes it difficult to quickly test new ideas The AWS Cloud model gives you the agility to quickly spin up new instances on AWS and the ability to try out new services without investing in large and upfront sunk costs ( that is costs that have already been incurred and can’t be recovered) AWS helps to lower customer costs through its pay forwhat youuse pricing model Also as of writing AWS Global Infrastructure spans 24 geographic regions around the world enabling customers to deploy on a global footprint quickly and easily Security and High Compliance Workloads Using AWS you will gain the control and confiden ce you need to safely run your business with the most flexible and secure cloud computing environment available today With AWS you can improve your ability to meet core security and compliance requirements with a comprehensive set of services and feature s The AWS Compliance Amazon Web Services Running Adobe Experience Manager on AWS 3 Program s will help you understand the robust controls in place at AWS to maintain 
security and compliance in the cloud Compliance certifications and attestations are assesse d by a third party independent auditor Running AEM on AWS provides customers with the benefits of leveraging the compliance and security capabilities of AWS along with the ability to monitor and audit access to AEM using AWS Security Identity and Compliance services AWS also offers the GovCloud (US) Regions which are designed to host sensitive data regulate workloads and address the most str ingent US government security and compliance requirements Adobe Experience Manager Overview This section highlights some of the key technical elements for AEM and offers some best practice recommendations This whitepaper focuses on AEM 65 (released April 2019) AEM Platform Overview A standard AEM architecture consists of three environments: author publish and dispatcher Each of these environments consists of one or more instances Figure 1 – Sample AEM Architecture The author environment is used for crea ting and managing the content and layout of an AEM experience It provides functionality for reviewing and approving content updates and publishing approved versions of content to the publish environment Amazon Web Services Running Adobe Experience Manager on AWS 4 The publish environment delivers the experience to the intended audience It renders the actual pages with an ability to personalize the experience based on audience characteristics or targeted messaging The author and publish instances are Java web applications that have identical installed software T hey are differentiated by configuration only The dispatcher environment is a caching and/or load balancing tool that helps realize a fast and dynamic web authoring environment For caching the dispatcher works as part of an HTTP server such as Apache HTTP Server with the aim of storing (or caching) as much of the static website content as possible and accessing the website's publisher layout engine as infrequently as possible For cachin g the dispatcher module uses the web server's ability to serve static content The dispatcher places the cached documents in the document root of the web server Repositories Within AEM everything is content and stored in the underlying repository AEM’s repository is called CRX it imple ments the Content Repository API for Java ( JCR) and it is based on Apache Jackrabbit Oak Figure 2 – AEM Storage Options The Oak storage layer provides an abstraction layer for the actual storage of the content MicroKernels act as persistence managers in AEM There are two primary storage implementations available in AEM 6: Tar Storage and MongoDB Storage The Tar storage uses tar files It stores the content as various types of records within larger segments Journals are use d to track the latest state of the repository The MongoDB Amazon Web Services Running Adobe Experience Manager on AWS 5 storage leverages MongoDB for sharding and clustering The repository tree is kept in one MongoDB database where each node is a separate document At a high level Tar MicroKernel (TarMK) is used f or performance and MongoDB is used for scalability Publish instances are always TarMK Multiple publish instances with each instance running its own TarMK are referred to as TarMK farm This is the default deployment for publish environments Author instances can either use TarMK for a single author instance or MongoDB when horizontal scaling is required For TarMK author instance deployments a cold standby TarMK instance can be configured in another availability zone to 
provide backup in case the primary author instance fails although the failover is not automatic TarMK is the default persistence system in AEM for both author and publish configurations Although AEM can be configured to use a different persistence system (such as MongoDB ) TarMK is performance optimized for typical JCR use cases and is very fast TarMK uses an industry standard data format that can be quickly and easily backed up providing high performance and reliable data storage with minimal operational overhead and lower total cost of ownership (TCO) MongoDB is recommended for AEM author deployments when there are more than 1000 unique users per day 100 concurrent users or high volumes of page edits (For details r efer to When to use Mongo DB ) MongoDB provides high availability redundancy and automated failovers for author instances although performance can be lower than TarMK A minimum deployment with MongoDB typically involves a MongoDB replica consisting of one primary node and two secondary nodes with each node running in its separate availability zone In AEM binary data can be stored independently from the content nodes The binary data is stored in a data store whereas content nodes are stored in a node store You can use Amazon Simple Storage Service (Amazon S3) as a shared datasto re between publish and author instances to store binary files This approach makes the cluster high performant For details see How to configure S3 as a datastore Amazon Web Services Running Adobe Experience Manager on AWS 6 AEM Implementation on AWS This section outline s the following two deployment options and the key design elements to consider for deploying AEM on AWS • Self or partner managed deployment • AEM Managed Services by Adobe Self or Partner Managed Deployment In a self managed deployment the organization itself is responsible for the deployment and maintenance of AEM and the underlying AWS infrastructure In partner managed deployment the organizat ion engages with a partner from the AWS Partner Network (APN) for the deployment and maintenance of AEM and the underlying AWS infrastructure AEM customizations in both models can be done by the organizatio n or the partner For organizations who cannot manage their own deployment of AEM on AWS (either because they do not have the resources or because they are not comfortable) there are several APN partners that specialize in providing managed hosting deploy ments of AEM on AWS These companies take care of all aspects of deploying securing patching and maintaining AEM Some partners also provide design services and custom development for AEM You can use AWS Partner Finder to find and compare providers that specialize in Adobe products on AWS AEM Managed Services AEM Managed Services by Adobe enables customers to launch faster by deploying on the AWS cloud and also by leaning on best practices and support from Adobe Organizations and business users can engage customers in minimal time drive market share and focus on creating innovative marketing campaigns while reducing the burden on IT Cloud Manager part of the AEM Managed Services offering is a self service portal that further enables organizations to self manage AEM Manager in the cloud It includes a continuous integration and continuous delivery (CI/CD) pipeline that lets IT teams and implement ation partners speed up the delivery of customizations or updates without compromising performance or security Cloud Manager is only available for Adobe Managed Service customers Amazon Web Services 
Architecture Options
This section presents a reference architecture for running AEM on AWS, along with various architectural options to consider when planning an AEM on AWS deployment. Alternatively, you can also consider adopting AEM OpenCloud, an open-source framework for running AEM on AWS.

Reference Architecture
The following reference architecture is recommended for both self- and partner-managed deployment methods. For reference architecture details, see Hosting Adobe Experience Manager on AWS.

Figure 3 – AEM on AWS Reference Architecture

Reference Architecture Components
Architecture Sizing
For AEM, the right instance type depends on the usage scenario. For AEM author and publish instances in the most common publishing scenario, a solid mix of memory, CPU, and I/O performance is necessary. Therefore, the Amazon EC2 General Purpose M5 family of instances are good candidates for these environments, depending upon sizing. Amazon EC2 M5 instances are the next generation of the Amazon EC2 General Purpose compute instances. M5 instances offer a balance of compute, memory, and networking resources for a broad range of workloads. Additionally, M5d, M5dn, and M5ad instances have local storage, offering up to 3.6 TB of NVMe-based SSDs.

AEM Dispatcher is installed on a web server (Apache httpd on an Amazon EC2 instance), and it is a key caching layer. It provides caching, load balancing, and application security. Therefore, sizing memory and compute is important, but optimization for I/O is critical for this tier. Amazon Elastic Block Store (Amazon EBS) I/O-optimized volumes are recommended. Each dispatcher instance is mapped to a publish instance in a one-to-one fashion in each Availability Zone.

For all of these instances, Amazon EBS optimization is important. EBS volumes on which AEM is installed should use either General Purpose SSD (gp2) volumes or Provisioned IOPS SSD volumes. This configuration provides a specific level of performance and lower latency for operations.

Adobe recommends an Intel Xeon or AMD Opteron CPU with at least 4 cores and 16 GB of RAM for AEM environments. This translates to the Amazon EC2 m5.xlarge instance type. Typically, you can start with the m5.2xlarge instance type and then adjust based on your workload needs. For guidance on selecting the right instance, refer to the Adobe hardware sizing guide.

The specific sizing for the number of servers you need depends on your AEM use case (for example, experience management or digital asset management) and the level of caching that should be applied. At minimum, you need five servers in total for a high availability configuration utilizing two Availability Zones. This architecture places a dispatcher-publisher pair in each of the two Availability Zones and a single author node in one Availability Zone (fronting each of the publish instances with a dispatcher instance). For guidelines on calculating the number of servers required, refer to the Adobe support site.

Load Balancing
In an AEM setup, Elastic Load Balancing is configured to balance traffic to the dispatchers. By default, a load balancer distributes incoming requests evenly across its enabled Availability Zones (AZs). To ensure that a load balancer distributes incoming requests evenly across all backend instances, regardless of the Availability Zone that they are in, enable cross-zone load balancing.
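Cross-zone load balancing can be switched on through the API as well as the console. The following is a minimal sketch in Python with boto3; it assumes a Classic Load Balancer with a purely illustrative name, and it is one possible implementation rather than the reference architecture's prescribed setup. Application Load Balancers distribute requests across zones by default.

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Enable cross-zone load balancing so requests are spread evenly across
# dispatcher instances in both Availability Zones.
elb.modify_load_balancer_attributes(
    LoadBalancerName="aem-dispatcher-elb",   # hypothetical load balancer name
    LoadBalancerAttributes={
        "CrossZoneLoadBalancing": {"Enabled": True}
    },
)
```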
For authenticated AEM experiences, authentication is maintained by a login token. When a user logs in, the token information is stored under the tokens node of the corresponding user node in the repository. The value of the token (that is, the session ID) is also stored in the browser as a cookie named login-token. In this case, the load balancer should be configured for sticky sessions, routing requests with the login-token cookie to the same instance. AEM can be configured to recognize the authentication cookie across all publish instances. However, this also requires that all relevant user session information (for example, a shopping cart) is available across all publish instances.

Elastic Load Balancing can be used in front of the dispatchers to provide a single CNAME URL for the application. The load balancer, in conjunction with AWS Certificate Manager, can be used to provide HTTPS access and to offload SSL. By using the load balancer, you can further secure your website deployment by moving the publisher instances into a private subnet, allowing access only from the load balancer. The load balancer can also translate the port access from port 80 to the default publish port 4503.

High Availability
For a highly available AEM architecture, the environment should be set up to leverage AWS strengths. Configure each instance in the AEM cluster for Amazon EC2 auto recovery. Additionally, when the cluster is built in conjunction with a load balancer, you can use AWS Auto Scaling to automatically provision nodes across multiple Availability Zones. We recommend that you provision nodes across multiple Availability Zones for high availability and use multiple AWS Regions to address global deployment considerations as needed. In a multi-Region deployment, you can set up Amazon Route 53 to perform DNS failover based on health checks.

Scaling
A simple way to accomplish scaling is to create separate Amazon Machine Images (AMIs) for the publish instance, the dispatcher instance (mapped to publish), and the dispatcher instance (mapped to author, if in use). Three separate launch configurations can be created using these AMIs and included in separate Auto Scaling groups. Newly launched dispatcher instances require a corresponding publish instance and need to be registered with it to receive future invalidation calls. AWS Lambda can provide scaling logic in response to scale-up/down events from Auto Scaling groups. The scaling logic consists of pairing/unpairing the newly launched dispatcher instance to an available publish instance (or the other way around), updating the replication agent (reverse replication, if applicable) between the newly launched publish instance and the author instance, and updating AEM content health check alarms. Each dispatcher instance is mapped to a publish instance in a one-to-one fashion in separate Availability Zones.

For faster startup and synchronization, you can place the AEM installation on a separate Amazon EBS volume. By taking frequent snapshots of the volume and attaching those snapshots to newly launched instances, the amount of data that must be replicated from the author can be cut down. In the startup process, the publish instance can then trigger author–publish replication to fully ensure the latest content.
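The EC2 auto recovery setting mentioned under High Availability is implemented as a CloudWatch alarm whose action recovers the instance onto healthy hardware when the system status check fails. The following is an illustrative sketch in Python with boto3; the instance ID, region, and evaluation settings are placeholder assumptions rather than values from this paper.

```python
import boto3

REGION = "us-east-1"                          # assumed region
INSTANCE_ID = "i-0123456789abcdef0"           # placeholder AEM author instance

cloudwatch = boto3.client("cloudwatch", region_name=REGION)

# Recover the instance if the system status check fails for two consecutive minutes.
cloudwatch.put_metric_alarm(
    AlarmName=f"auto-recover-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[f"arn:aws:automate:{REGION}:ec2:recover"],
)
```

Recovery preserves the instance ID, private IP addresses, and attached EBS volumes, so the dispatcher-publish pairing described above does not need to be rebuilt after a hardware failure.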
is updated Explicit configuration regarding how long particular resources are held in the CloudFront cache along with expiration and cache control headers sent by dispatcher can help in controlling the CDN cache Cache control headers can be controlled by using the mod_expires Apache Module For API based invalidation associated with content replication o ne approach is to build a custom invalidation workflow and set up an AEM Replication Agent that will use your own ContentBuilder and TransportHandler to invalidate the Amazon CloudFront cache using API For more details r efer to Using Dispatcher with a CDN Dynamic Content The dispatcher is the caching layer with the AEM product It allows for defining caching rules at the web server layer To realize the full benefit of the dispatcher pages should be fully cacheable Any element that isn’t cacheable will “break” the cache functionality To incorporate dynamic elements in a static page the recommended approach is to use client side JavaScript Edge Side Includes (ESI s) or web server level Server Side Includes (SSI s) Within an AWS environment ESIs can be configured using a solution such as Varnish replacing the dispatcher However using such configuration may not be supported by Adobe Amazon Web Services Running Adobe Experience Manager on AWS 11 Amazon S3 Data Store Binary data can be stored independently from the content nodes in AEM When deploying on AWS the binary data store can be Amazon S3 simplifying management and backups Also the binary data store can then be shared across author instances and even betwee n author and publish instances reducing overall storage and data transfer requirements Refer to Amazon S3 Dat a Store documentation by Adobe to learn how to configure S3 for AEM AEM OpenCloud AEM OpenCloud is an open source platform for running AEM on AWS It provides an outofthebox solution for provisioning a highly available AEM architecture which implements auto scaling auto recovery chaos testing CDN multi level backup blue green deployment repository upgrade security and monitoring capab ilities by leveraging a multitude of AWS services AEM OpenCloud code base is open source and available on GitHub with an Apache 2 license The code base is maintained by Shine Solutions Group an APN Partner You are free to use AEM OpenCloud on your own or engage with the Shine Solution s Group for custom use cases and implementation support AEM OpenCloud supports multiple AEM versions from 62 to 65 using Amazon Linux 2 or RHEL7 operating system with two architecture options: fullset and consolidated This platform can also be built and run in multiple AWS Regions It is highly configurable and provides a number of customization points where users can provision various other software into their AEM environment provisioning automation AEM OpenCloud is available through the AEM OpenCloud on AWS Quick Start an architecture based on AWS best practices you easily launch in a few clicks AEM OpenCloud FullSet Architecture A fullset architecture is a full featured environment suitable for production and staging environments It includes AEM Publish Author Dispatcher and Publish Dispatcher EC2 instances within Auto Scaling groups which (combined with an Orche strator application ) provide the capability to manage AEM capacity as the instances scale out and scale in corresponding to the load on the Dispatcher instances Orchestrator application manages AEM replication and flush agents as instances are created and terminated This architecture also includes chaos 
testing capability by using Netflix Chaos Monkey which can be configured to randomly terminate either one of those instances within the Amazon Web Services Running Adobe Experience Manager on AWS 12 autoscaling groups or allow the architecture to live in production continuously verifying that AEM OpenCloud can automatically recover from failure AEM Author Primary and Author Standby are managed separately where a failure on Author Primary instance can be mitiga ted by promoting an Author Standby to become the new Author Primary as soon as possible while a new environment is being built in parallel and will take over as the new environment replacing the one which lost its Author Primary Fullset architecture us es Amazon CloudFront as the CDN sitting in front of AEM Publish Dispatcher load balancer providing global distribution of AEM cached content Fullset offers three types of content backup mechanisms: AEM package backup live AEM repository EBS snapshots (taken when all AEM instances are up and running ) and offline AEM repository EBS snapshots (taken when AEM Author and Publish are stopped ) You can u se any of these backups for blue green deployment providing the capability to replicate a complete environment or to restore an environment from any point of time Figure 4 – AEM OpenCloud Full Set Architecture Amazon Web Services Running Adobe Experience Manager on AWS 13 On the security front this architecture provides a minimal attack surface with one public entry point to either Amazon CloudFront distribution or an AEM Publish Dispatcher load balancer whereas the other entry point is for AEM Author Dispatcher load balancer AEM OpenCl oud supports encryption using AWS Key Management Service (AWS KMS ) keys across its AWS resources The f ullset architecture also includes a n Amazon CloudWatch Monitoring Dashboard which visualizes the capacity of AEM Author Dispatcher Author Primary Author Standby Publish and Publish Dispatcher along with their CPU memory and disk consumptions Amazon CloudWatch Alarms are also configured across the most important AWS resources allow ing notification mechanism via an SNS topic Consolidated Architecture A consolidated architecture is a cut down environment where an AEM Author Primary an AEM Publish and an AEM Dispatcher are all running on a single Amazon EC2 instance This architecture is a low cost alternative suitable for development and testing environments This architecture also offers those three types of backup just like fullset architecture where the backup AEM package and EBS snapshots are interchangeable between consolidated and fullset environments This option is useful for example when you want to restore production backup from a fullset environment to multiple development environments running consolidated architecture Another example is if you want ed to upgrade an AEM repository to a newer version in a development environment which is then pushed through to testing staging and eventua lly production Amazon Web Services Running Adobe Experience Manager on AWS 14 Figure 5 – AEM OpenCloud Consolidated Architecture Environment Management To manage multiple environments with a mixture of fullset and consolidated architectures AEM OpenCloud has a Stack Manager that handles the command executions within AEM instances via AWS Systems Manager These commands include taking backups checking environment readiness running the AEM security checklist enabling and disabling CRX DE and SAML deploying multiple AEM packages configured in a descriptor flushing AEM 
Dispatcher cache and promoting the AEM Author Standby instance to Primary Other than the Stack Manager there is also AEM OpenCloud Manager which currently provides Jenkins pipelines for creating and terminating AEM fullset and consolidated architectures baking AEM Amazon Machine Images (AMIs) executing operational tasks via Stack Manager and upgrading an AEM repository between versions (for example from AEM 62 to 64 or from AEM 64 to 65 ) Amazon Web Services Running Adobe Experience Manager on AWS 15 Figure 6 – AEM OpenCloud Stack Manager Security The security of the A EM hosting environment can be broken down into two areas: application security and infrastructure security A crucial first step for application security is to follow the Security Checklist for AEM and the Dispatcher Security Checklist These checklists cov er various parts of security considerations from running AEM in production mode to using mod_rewrite and mod_security modules from Apache to prevent Distributed Denial of Service ( DDoS) attacks and cross site scripting ( XSS) attacks From an infrastructure level AWS provides several security services to secure your environment These services are grouped into five main categories – network security; data protection; access control; d etection audit monitoring and logging ; and incident response Networ k Security One of the core components of network security is Amazon V irtual Private Cloud (Amazon VPC) This service provides multiple layers of network security for your application such as public and private subnets security groups and network access Amazon Web Services Running Adobe Experience Manager on AWS 16 control lists for subnet s Also VPC endpoints for S3 enable you to privately connect your VPC to Amazon S3 Amazon CloudFront can offload direct access to your backend infrastructure and using the Web Application Firewall (WAF) provided by the AWS WAF service you can apply rules to prevent the application from getting compromised by scripted attacks The same r ules that are encoded in Apache mod_security on the dispatcher can be moved or replicated in AWS WAF Since AWS WAF integrates with Amazon CloudFront CDN this enables earlier detection minimizing overall traffic and impact AWS WAF provides centralized c ontrol automated administration and real time metrics Additionally AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS AWS Shield pr ovides always on detection and automatic inline mitigations that minimize application downtime and latency so there is no need to engage AWS Support to benefit from DDoS protection There are two tiers of AWS Shield : Standard and Advanced All AWS custome rs benefit from the automatic protections of AWS Shield Standard at no additional charge Data Protection Organizations should encrypt data at rest and in transit AEM provides SSL wizard to easily configure SSL certificates AWS data protection services provide encryption and key management and threat detection that continuously monitors and protects your AWS infrastructure For exam ple AWS Certificate Manager can p rovision manage and deploy public and private SSL/TLS certificates ; AWS KMS can help with Key storage and management ; and Amazon Macie can d iscover and protect your sensitive data at scale Access Control AWS Identity & Access Management (IAM) helps securely manage access to AWS services and resources In addition AWS provides identity services to connect your on prem directory service or use AWS 
Directory Service as a managed Microsoft Active Directory to provide access to AEM infrastructure as needed within your organization Detection Audit Monitoring and Logging Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads With AWS Security Hub you have a single place that aggregates organizes and prioritizes your security alerts or findings from multiple AWS services Amazon Web Services Running Adobe Experience Manager on AWS 17 such as Amazon GuardDuty Amazon Inspector and Amazon Macie as well as f rom APN Partner solutions AWS also provides audit tools such as AWS Trusted Advisor which inspects your AWS environment and makes recommendations for cost saving improving system performance and reliability and security Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices After performing an assessment Amazon Insp ector produces a detailed report with prioritized steps for remediation This can support system management and gives security professionals the necessary visibility into vulnerabilities that need to be fixed In addition to Amazon Inspector you can use other third party products such as Burp Suite or Qualys SSL Test (for certificate validation Finally havi ng an audit log of all API actions and configuration changes can be useful in determining what changed and who changed it AWS CloudTrail and AWS Config provide you with the capability to capture extensive audit logs We recommend that you enable these services in your hosting environment Incident Response AWS provides services such as AWS Lambda and AWS Config Rules which can evaluate whether your AWS resources comply with your desir ed settings and set them back into compliance or notify you Amazon Detective is another service that simplifies the process of investigating security findings and identifying the root cause Amazon Detecti ve analyzes events from multiple data sources such as VPC Flow Logs AWS CloudTrail logs and Amazon GuardDuty findings and automatically creates a graph model that provides you with a unified interactive view of your resources users and the interactions between them over time Compliance and GovCloud The AWS GovCloud (US) gives government customers and their partners the flexibility to architect secure cloud solutions that comply with many compliance programs (FedRAMP High FISMA DoD SRG ITAR and CJIS to name a few) AWS GovCloud (USEast) and (US West) Regions are operated by employees who are US citizens on US soil AWS GovCloud (US) is only accessible to US entities and root account holders who pass a screening process Service mapping t o compliance programs is detailed on the AWS Services in Scope by Compliance Program page Amazon Web Services Running Adobe Experience Manager on AWS 18 Digital Asset Management AEM includes a Digital Asset Management (DAM) solution called AEM Asset s AEM assets enables your enterprise users to manage and distribute digital assets such as images videos documents audio clips 3D files and rich media When planning for your AWS architecture you should evaluate the potential use of the AEM Assets solution as part of your planning With AEM Assets the number of large files usually increases and often involves resource intensive processes such as image transformations and renditions Various architecture best practices should be considered depending on the scenario and they are described in detail in Best Practices 
for Assets Automated Deployment AWS provides API access to all AWS servi ces and Adobe does this for AEM as well Many of the various commands to deploy code or content or to create backups can be invoked through an HTTP service interface This allows for a very clean organization of the continuous integration and deployment process with the use of Jenkins as a central hub invoking AEM functionality through CURL or similar commands Jenkins can support manual scheduled and triggered deployments and can be the central point for your AEM on AWS deployment If necessary you can enable additional automation using Jenkins with AWS CodeBuild and AWS CodeDeploy enabling the creation of a complete environment from the Jenkins console Refer to Set up a Jenkins Build Server on AWS to set up Jenkins Amazon Web Services Running Adobe Experience Manager on AWS 19 Figure 7 – Example CI Setup for an AEM Jenkins Architecture Automated Operations One of the key benefits of running AEM on AWS is the str eamlined AEM Operations process To provision instances AWS CloudFormation or AWS OpsWorks can be leveraged to fully automate the deployment process fro m setting up the architecture to provisioning the necessary instances Using the AWS CloudFormation embedded stacks functionality scripts can be organized to support the different architectures outlined in the earlier sections Also AEM OpenCloud manager provides automated operations functionality out of the box with little effort When using AEM’s Tar Storage repository content is stored on the file system To create an AEM backup you must create a file system snapshot You can make a file system snapshot on AWS through Amazon Data Lifecycle Manager Alternately you can create a centralized b ack up plan using AWS Backup You should use Amazon Data Lifecycle Manager when you want to automate the creation retention and deletion of EBS snapsh ots You should use AWS Backup to manage and monitor backups across the AWS services you use including EBS volumes from a single place Lastly review the best practices and checks (such as log file monitoring AEM performance monitoring and Replication Agent monitoring ) outlined in the Monitoring and Maintaining AEM guide to ensure smooth operations of your AEM environment Amazon Web Services Running Adobe Experience Manager on AWS 20 Additional AWS Services You can use additional services and capabilities from both AWS and the AEM platform to add further value to your AEM deployment on AWS With AEM you can integrate with a variety of thirdparty services outofthe box as well as Amazon SNS for mobile notifications relating to changes to the AEM environment AEM offers tools to manage targeting within experiences delivered through the solution Adobe also has complementary products (which integrate well with AEM ) that further personalize and target the experience for customers Combined with AWS services such as Amazon Personalize Amazon Kinesis and AWS Lambda you can create a powerful targeting engine to deliver onetoone personalization Conclusion This paper presented the business and technology drivers for running AEM on AWS along with the strategies and considerations Running AEM on AWS provides a secure and scalable foundation for delivering great digital ex periences for customers As you prepare for your AEM migration to AWS we recommend that you consider the guidance outlined in this document Contributors Contributors to this document include : • Anuj Ratra Sr Solutions Architect Amazon Web Services • Cliffano Subagio Principal 
Engineer, Shine Solutions Group
• Michael Bloch, Senior DevOps Engineer, Shine Solutions Group
• Matthew Holloway, Manager, Solutions Architects, Amazon Web Services
• Pawan Agnihotri, Sr. Manager, Solution Architecture, Amazon Web Services
• Martin Jacobs, GVP Technology, Razorfish

Further Reading

For additional information, see:
• Hosting Adobe Experience Manager on AWS Reference Architecture

Document Revisions

November 2020: Updated the reference architecture for AEM 6.5; added the AEM OpenCloud framework as an alternative option.
July 2016: First publication.
Streaming Data Solutions on AWS with Amazon Kinesis
Streaming Data Solutions on AWS First Published September 13 2017 Updated September 1 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Real time and near realtime application scenarios 1 Difference between batch and stream processing 2 Stream processing challenges 2 Streaming data solutions: examples 2 Scenario 1: Internet offering based on location 3 Processing streams of data with AWS Lambda 5 Summary 6 Scenar io 2: Near realtime data for security teams 6 Amazon Kinesis Data Firehose 7 Summary 12 Scenario 3: Preparing clic kstream data for data insights processes 13 AWS Glue and AWS Glue streaming 14 Amazon DynamoDB 15 Amazon SageMaker and Amazon SageMaker service endpoints 16 Inferring data insights in real time 16 Summary 17 Scenario 4: Device sensors realtime anomaly detection and notifications 17 Amazon Kinesis Data Analytics 19 Summary 21 Scenario 5: Real time tele metry data monitoring with Apache Kafka 22 Amazon Managed Streaming for Apache Kafka (Amazon MSK) 23 Amazon EMR with Spark Streaming 25 Summary 27 Conclusion 28 Contributors 28 Document versions 28 Abstract Data engineers data analysts and big data developers are looking to process and analyze their data in realtime so their companies can learn about what their customers applications and products are doing right now and react promptly This whitepaper describes how services such as Amazon Kinesis Data St reams Amazon Kinesis Data Firehose Amazon EMR Amazon Kinesis Data Analytics Amazon Managed Streaming for Apache Kafka (Amazon MSK) and other services can be used to implement real time applications and provides common design patterns using these services Amazon Web Services Streaming Data Solutions on AWS 1 Introduction Businesses today receive data at massive scale and speed due to the explosive growth of data sources that continuously generate streams of data Whether it is log data from application servers clickstream data from websites and mobile apps o r telemetry data from Internet of Things (IoT) devices it all contains information that can help you learn about what your customers applications and products are doing right now Having the ability to process and analyze this data in real time is esse ntial to do things such as continuously monitor your applications to ensure high service uptime and personalize promotional offers and product recommendations Real time and near real time processing can also make other common use cases such as website an alytics and machine learning more accurate and actionable by making data available to these applications in seconds or m inutes instead of hours or days Real time and nearrealtime application scenarios You can use streaming data services for real time and near realtime applications such as application monitoring fraud detection and live 
leaderboards Realtime use cases require millisecond end toend latencies – from ingestion to processing all the way to emitting the results to target data stores a nd other systems For example Netflix uses Amazon Kinesis Data Streams to monitor the communications between all its applications so it can detect and fix issues quickly ensuring high service u ptime and availability to its customers While the most commonly applicable use case is application performance monitoring there are an increasing number of real time applications in ad tech gaming and IoT that fall under this category Common nearrealtime use cases include analytics on data stores for data science and machine learning (ML) You can use streaming data solutions to continuously load real time data into your data lakes You can then update ML models more frequently as new data becomes av ailable ensuring accuracy and reliability of the outputs For example Zillow uses Kinesis Data Streams to collect public record data and multiple listing service ( MLS) listings and then provide home buyers and sellers with the most up to date home value estimates in near realtime ZipRecruiter uses Amazon MSK for their event logging pipelines which are critical infrastructu re components that collect store and continually process over six billion events per day from the ZipRecruiter employment marketplace Amazon Web Services Streaming Data Solutions on AWS 2 Difference between batch and stream processing You need a different set of tools to collect prepare and process real time streaming data than those tools that you have traditionally used for batch analytics With traditional analytics you gather the data load it periodically into a database and an alyze it hours days or weeks later Analyzing real time data requires a different approach Stream processing applications process data continuously in real time even before it is stored Streaming data can come in at a blistering pace and data volumes can vary up and down at any time Stream data processing platforms have to be able to handle the speed and variability of incoming data and process it as it arrives often millions to hundreds of millions of events per hour Stream processing challenges Processing real time data as it arrives can enable you to make decisions much faster than is possible with traditional data analytics technologies However building and operating your own custom streaming data pipelines is complicated and resource intensiv e: • You have to build a system that can cost effectively collect prepare and transmit data coming simultaneously from thousands of data sources • You need to fine tune the storage and compute resources so that data is batched and transmitted efficiently for maximum throughput and low latency • You have to deploy and manage a fleet of servers to scale the system so you can handle the varying speeds of data you are going to throw at it Version upgrade is a complex and costly process After you have built this platform you have to monitor the system and recover from any server or network failures by catching up on data processing from the appropriate point in the stream without creating duplicate data You also need a dedicated team for infrastructure man agement All of this takes valuable time and money and at the end of the day most companies just never get there and must settle for the status quo and operate their business with information that is hours or days old Streaming data solutions : examples To better understand how organizations are doing real time 
data processing using AWS services, this whitepaper uses four examples. Each example reviews a scenario and discusses in detail how AWS real-time data streaming services are used to solve the problem.

Scenario 1: Internet offering based on location

Company InternetProvider provides internet services with a variety of bandwidth options to users across the world. When a user signs up for internet, company InternetProvider provides the user with different bandwidth options based on the user's geographic location. Given these requirements, company InternetProvider implemented an Amazon Kinesis data stream to consume user details and location. The user details and location are enriched with different bandwidth options prior to publishing back to the application. AWS Lambda enables this real-time enrichment.

Processing streams of data with AWS Lambda

Amazon Kinesis Data Streams

Amazon Kinesis Data Streams enables you to build custom real-time applications using popular stream processing frameworks and load streaming data into many different data stores. A Kinesis stream can be configured to continuously receive events from hundreds of thousands of data producers, delivered from sources such as website clickstreams, IoT sensors, social media feeds, and application logs. Within milliseconds, data is available to be read and processed by your application.

When implementing a solution with Kinesis Data Streams, you create custom data processing applications known as Kinesis Data Streams applications. A typical Kinesis Data Streams application reads data from a Kinesis stream as data records. Data put into Kinesis Data Streams is highly available and elastic, and is available in milliseconds. You can continuously add various types of data, such as clickstreams, application logs, and social media, to a Kinesis stream from hundreds of thousands of sources. Within seconds, the data will be available for your Kinesis applications to read and process from the stream.

Amazon Kinesis Data Streams is a fully managed streaming data service. It manages the infrastructure, storage, networking, and configuration needed to stream your data at the level of your data throughput.

Sending data to Amazon Kinesis Data Streams

There are several ways to send data to Kinesis Data Streams, providing flexibility in the designs of your solutions:

• You can write code utilizing one of the AWS SDKs, which are available for multiple popular languages.
• You can use the Amazon Kinesis Agent, a tool for sending data to Kinesis Data Streams.

The Amazon Kinesis Producer Library (KPL) simplifies producer application development by enabling developers to achieve high write throughput to one or more Kinesis data streams. The KPL is an easy-to-use, highly configurable library that you install on your hosts. It acts as an intermediary between your producer application code and the Kinesis Streams API actions. For more information about the KPL and its ability to produce events synchronously and asynchronously, with code examples, see Writing to your Kinesis Data Stream Using the KPL.

There are two different operations in the Kinesis Streams API that add data to a stream: PutRecords and PutRecord. The PutRecords operation sends multiple records to your stream per HTTP request, while PutRecord submits one record per HTTP request. To achieve higher throughput, most applications should use PutRecords. For more information about these APIs, see Adding Data to a Stream.
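To make the PutRecords path concrete, the following sketch uses the AWS SDK for Python (Boto3) to batch user sign-up events onto a stream, in the spirit of the InternetProvider scenario above. The stream name, event fields, and the single resend of failed entries are illustrative placeholders; production code would typically retry with backoff until FailedRecordCount reaches zero.

```python
import json
import uuid

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")


def put_signup_events(events, stream_name="user-signup-stream"):
    """Write a batch of sign-up events to a Kinesis data stream with PutRecords."""
    entries = [
        {
            # Each record payload is an opaque blob; JSON is a common choice.
            "Data": json.dumps(event).encode("utf-8"),
            # Records with the same partition key are routed to the same shard.
            "PartitionKey": str(event.get("user_id", uuid.uuid4())),
        }
        for event in events
    ]
    response = kinesis.put_records(StreamName=stream_name, Records=entries)

    # PutRecords is not all-or-nothing: resend only the entries that failed.
    if response["FailedRecordCount"] > 0:
        failed = [
            entry
            for entry, result in zip(entries, response["Records"])
            if "ErrorCode" in result
        ]
        kinesis.put_records(StreamName=stream_name, Records=failed)


if __name__ == "__main__":
    put_signup_events(
        [{"user_id": "u-1001", "location": "Seattle, WA", "event": "signup"}]
    )
```

Using the user ID as the partition key keeps a given user's events ordered within a shard, which is a common design choice when downstream consumers enrich per-user data.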
The details for each API operation can be found in the Amazon Kinesis Streams API Reference.

Processing data in Amazon Kinesis Data Streams

To read and process data from Kinesis streams, you need to create a consumer application. There are varied ways to create consumers for Kinesis Data Streams. Some of these approaches include using Amazon Kinesis Data Analytics to analyze streaming data, using the KCL, using AWS Lambda, using AWS Glue streaming ETL jobs, and using the Kinesis Data Streams API directly.

Consumer applications for Kinesis streams can be developed using the KCL, which helps you consume and process data from Kinesis streams. The KCL takes care of many of the complex tasks associated with distributed computing, such as load balancing across multiple instances, responding to instance failures, checkpointing processed records, and reacting to resharding. The KCL enables you to focus on writing record processing logic. For more information on how to build your own KCL application, see Using the Kinesis Client Library.

You can subscribe Lambda functions to automatically read batches of records off your Kinesis stream and process them if records are detected on the stream. AWS Lambda periodically polls the stream (once per second) for new records, and when it detects new records, it invokes the Lambda function, passing the new records as parameters. The Lambda function is only run when new records are detected.

You can map a Lambda function to a shared throughput consumer (standard iterator). You can also build a consumer that uses a feature called enhanced fan-out when you require dedicated throughput that you do not want to contend with other consumers that are receiving data from the stream. This feature enables consumers to receive records from a stream with throughput of up to two MB of data per second per shard.

For most cases, Kinesis Data Analytics, the KCL, AWS Glue, or AWS Lambda should be used to process data from a stream. However, if you prefer, you can create a consumer application from scratch using the Kinesis Data Streams API. The Kinesis Data Streams API provides the GetShardIterator and GetRecords methods to retrieve data from a stream. In this pull model, your code extracts data directly from the shards of the stream. For more information about writing your own consumer application using the API, see Developing Custom Consumers with Shared Throughput Using the AWS SDK for Java. Details about the API can be found in the Amazon Kinesis Streams API Reference.

Processing streams of data with AWS Lambda

AWS Lambda enables you to run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services, or call it directly from any web or mobile app.

AWS Lambda integrates natively with Amazon Kinesis Data Streams. The polling, checkpointing, and error handling complexities are abstracted when you use this native integration. This allows the Lambda function code to focus on business logic processing. You can map a Lambda function to a shared throughput consumer (standard iterator) or to a dedicated throughput consumer with enhanced fan-out. With a standard iterator, Lambda polls each shard in your Kinesis stream for records using the HTTP protocol.
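As an illustration of this native integration, a minimal Lambda handler for a Kinesis event source mapping might look like the following. The enrichment step and the lookup_bandwidth_options helper are hypothetical stand-ins for the Scenario 1 business logic; the event shape (base64-encoded payloads under each record's kinesis element) is what Lambda passes to the function.

```python
import base64
import json


def lookup_bandwidth_options(location):
    # Hypothetical stand-in for enrichment data bundled with the function.
    return ["100 Mbps", "300 Mbps"] if location != "unknown" else ["100 Mbps"]


def handler(event, context):
    """Process a batch of Kinesis records delivered by the event source mapping."""
    enriched = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded under record["kinesis"]["data"].
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Business logic: attach bandwidth options based on the user's location.
        payload["bandwidth_options"] = lookup_bandwidth_options(
            payload.get("location", "unknown")
        )
        enriched.append(payload)

    # In the scenario, the enriched records would be published back to the
    # application here (for example, via an API call or another stream).
    print(f"Enriched {len(enriched)} records")
```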
To minimize latency and maximize read throughput, you can create a data stream consumer with enhanced fan-out. Stream consumers in this architecture get a dedicated connection to each shard without competing with other applications reading from the same stream, and Amazon Kinesis Data Streams pushes records to Lambda over HTTP/2. By default, AWS Lambda invokes your function as soon as records are available in the stream. To buffer the records for batch scenarios, you can implement a batch window of up to five minutes at the event source. If your function returns an error, Lambda retries the batch until processing succeeds or the data expires.

Summary

Company InternetProvider leveraged Amazon Kinesis Data Streams to stream user details and location. The stream of records was consumed by AWS Lambda to enrich the data with bandwidth options stored in the function's library. After the enrichment, AWS Lambda published the bandwidth options back to the application. Amazon Kinesis Data Streams and AWS Lambda handled provisioning and management of servers, enabling Company InternetProvider to focus more on business application development.

Scenario 2: Near real-time data for security teams

Company ABC2Badge provides sensors and badges for corporate or large-scale events such as AWS re:Invent. Users sign up for the event and receive unique badges that the sensors pick up across the campus. As users pass by a sensor, their anonymized information is recorded into a relational database.

For an upcoming event, due to the high volume of attendees, ABC2Badge has been asked by the event security team to gather data for the most concentrated areas of the campus every 15 minutes. This will give the security team enough time to react and disperse security personnel proportionally to the concentrated areas. Given this new requirement from the security team, and their inexperience in building a streaming solution to process data in near real-time, ABC2Badge is looking for a simple yet scalable and reliable solution.

Their current data warehouse solution is Amazon Redshift. While reviewing the features of the Amazon Kinesis services, they recognized that Amazon Kinesis Data Firehose can receive a stream of data records, batch the records based on buffer size and/or time interval, and insert them into Amazon Redshift. They created a Kinesis Data Firehose delivery stream and configured it so it would copy data to their Amazon Redshift tables every five minutes. As part of this new solution, they used the Amazon Kinesis Agent on their servers. Every five minutes, Kinesis Data Firehose loads data into Amazon Redshift, where the business intelligence (BI) team can perform its analysis and send the data to the security team every 15 minutes.

New solution using Amazon Kinesis Data Firehose

Amazon Kinesis Data Firehose

Amazon Kinesis Data Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon Kinesis Data Analytics, Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), and Splunk. Additionally, Kinesis Data Firehose can load streaming data into any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers. Kinesis Data Firehose enables near real-time analytics with the existing business intelligence tools and dashboards that you're already using today. It's a fully managed, serverless service that automatically scales to match the throughput of your data and requires no ongoing administration.
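A delivery stream like the one ABC2Badge describes could be created with Boto3 roughly as follows. All names, ARNs, the JDBC URL, table name, and credentials are placeholders, and the 300-second buffering interval reflects the five-minute loads mentioned above; this is a sketch of the CreateDeliveryStream call rather than a complete, hardened configuration (in practice, the password would come from a secrets store and the IAM roles would need the appropriate policies).

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="badge-sensor-stream",   # placeholder name
    DeliveryStreamType="DirectPut",             # producers call the Firehose API
    RedshiftDestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
        "ClusterJDBCURL": (
            "jdbc:redshift://badge-analytics.example.us-east-1"
            ".redshift.amazonaws.com:5439/events"
        ),
        "CopyCommand": {
            "DataTableName": "sensor_readings",
            "CopyOptions": "json 'auto' gzip",
        },
        "Username": "firehose_user",
        "Password": "REPLACE_WITH_SECRET",
        "RetryOptions": {"DurationInSeconds": 3600},
        # Firehose stages records in S3 first, then issues the COPY command.
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::badge-sensor-staging",
            "Prefix": "staging/",
            "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
            "CompressionFormat": "GZIP",
        },
    },
)
```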
Kinesis Data Firehose can batch, compress, and encrypt the data before loading, minimizing the amount of storage used at the destination and increasing security. It can also transform the source data using AWS Lambda and deliver the transformed data to destinations. You configure your data producers to send data to Kinesis Data Firehose, which automatically delivers the data to the destination that you specify.

Sending data to a Firehose delivery stream

To send data to your delivery stream, there are several options. AWS offers SDKs for many popular programming languages, each of which provides APIs for Amazon Kinesis Data Firehose. AWS also has a utility to help send data to your delivery stream, and Kinesis Data Firehose is integrated with other AWS services so that data can be sent directly from those services into your delivery stream.

Using the Amazon Kinesis agent

The Amazon Kinesis agent is a standalone software application that continuously monitors a set of log files for new data to be sent to the delivery stream. The agent automatically handles file rotation, checkpointing, and retries upon failure, and it emits Amazon CloudWatch metrics for monitoring and troubleshooting of the delivery stream. Additional configurations, such as data pre-processing, monitoring multiple file directories, and writing to multiple delivery streams, can be applied to the agent. The agent can be installed on Linux or Windows-based servers, such as web servers, log servers, and database servers. Once the agent is installed, you simply specify the log files it will monitor and the delivery stream it will send to. The agent will durably and reliably send new data to the delivery stream.

Using the API with the AWS SDK, and AWS services as a source

The Kinesis Data Firehose API offers two operations for sending data to your delivery stream: PutRecord sends one data record within one call, while PutRecordBatch can send multiple data records within one call and can achieve higher throughput per producer. In each method, you must specify the name of the delivery stream and the data record, or array of data records. For more information and sample code for the Kinesis Data Firehose API operations, see Writing to a Firehose Delivery Stream Using the AWS SDK.

Kinesis Data Firehose also works with Kinesis Data Streams, CloudWatch Logs, CloudWatch Events, Amazon Simple Notification Service (Amazon SNS), Amazon API Gateway, and AWS IoT. You can scalably and reliably send your streams of data, logs, events, and IoT data directly into a Kinesis Data Firehose destination.

Processing data before delivery to the destination

In some scenarios, you might want to transform or enhance your streaming data before it is delivered to its destination. For example, data producers might send unstructured text in each data record, and you need to transform it to JSON before delivering it to Amazon ES. Or, you might want to convert the JSON data into a columnar file format such as Apache Parquet or Apache ORC before storing the data in Amazon S3. Kinesis Data Firehose has a built-in data format conversion capability. With this, you can easily convert your streams of JSON data into Apache Parquet or Apache ORC file formats.

Data transformation flow

To enable streaming data transformations, Kinesis Data Firehose uses a Lambda function that you create to transform your data. Kinesis Data Firehose buffers incoming data up to a specified buffer size for the function, and then invokes the specified Lambda function asynchronously.
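A transformation function follows a simple record-in, record-out contract: each incoming record carries a recordId and base64-encoded data, and each returned record must echo the recordId along with a result status and the re-encoded transformed data. The sketch below, which assumes the incoming records are plain text to be wrapped as JSON, illustrates that contract.

```python
import base64
import json


def handler(event, context):
    """Transform a batch of Firehose records and return them to the delivery stream."""
    output = []
    for record in event["records"]:
        try:
            text = base64.b64decode(record["data"]).decode("utf-8")
            # Illustrative transformation: wrap raw text as JSON, one record per line.
            transformed = json.dumps({"message": text.strip()}) + "\n"
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
            })
        except Exception:
            # Records marked as failed are handled by Firehose's error handling.
            output.append({
                "recordId": record["recordId"],
                "result": "ProcessingFailed",
                "data": record["data"],
            })
    return {"records": output}
```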
The transformed data is sent from Lambda back to Kinesis Data Firehose, and Kinesis Data Firehose delivers the data to the destination.

Data format conversion

You can also enable Kinesis Data Firehose data format conversion, which will convert your stream of JSON data to Apache Parquet or Apache ORC. This feature can only convert JSON to Apache Parquet or Apache ORC. If you have data that is in CSV, you can transform that data via a Lambda function to JSON, and then apply the data format conversion.

Data delivery

As a near real-time delivery stream, Kinesis Data Firehose buffers incoming data. After your delivery stream's buffering thresholds have been reached, your data is delivered to the destination you've configured. There are some differences in how Kinesis Data Firehose delivers data to each destination, which this paper reviews in the following sections.

Amazon S3

Amazon S3 is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. It's designed to deliver 99.999999999% durability and scale past trillions of objects worldwide.

Data delivery to Amazon S3

For data delivery to S3, Kinesis Data Firehose concatenates multiple incoming records based on the buffering configuration of your delivery stream, and then delivers them to Amazon S3 as an S3 object. The frequency of data delivery to S3 is determined by the S3 buffer size (1 MB to 128 MB) or buffer interval (60 seconds to 900 seconds), whichever comes first.

Data delivery to your S3 bucket might fail for various reasons. For example, the bucket might not exist anymore, or the AWS Identity and Access Management (IAM) role that Kinesis Data Firehose assumes might not have access to the bucket. Under these conditions, Kinesis Data Firehose keeps retrying for up to 24 hours until the delivery succeeds. The maximum data storage time of Kinesis Data Firehose is 24 hours; if data delivery fails for more than 24 hours, your data is lost.

Amazon Redshift

Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing BI tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution.

Data delivery to Amazon Redshift

For data delivery to Amazon Redshift, Kinesis Data Firehose first delivers incoming data to your S3 bucket in the format described earlier. Kinesis Data Firehose then issues an Amazon Redshift COPY command to load the data from your S3 bucket to your Amazon Redshift cluster. The frequency of data COPY operations from S3 to Amazon Redshift is determined by how fast your Amazon Redshift cluster can finish the COPY command. For an Amazon Redshift destination, you can specify a retry duration (0–7200 seconds) when creating a delivery stream to handle data delivery failures. Kinesis Data Firehose retries for the specified time duration and skips that particular batch of S3 objects if unsuccessful. The skipped objects' information is delivered to your S3 bucket as a manifest file in the errors/ folder, which you can use for manual backfill. Following is an architecture diagram of the Kinesis Data Firehose to Amazon Redshift data flow. Although this data flow is unique to Amazon Redshift, Kinesis Data Firehose
follows similar patterns for the other destination targets Data flow from Kinesis Data Firehose to Amazon Redshift Amazon E lasticsearch Service (Amazon ES) Amazon ES is a fully managed service that delivers the Elasticsearch easy touse APIs and real time capabilities along with the availability scalability and security required by production workloads Amazon ES makes it easy to deploy operate and scale Elasticsea rch for log analytics full text search and application monitoring Data delivery to Amazon E S For data delivery to Amazon E S Kinesis Data Firehose buffers incoming records based on the buffering configuration of your delivery stream and then generates an Elasticsearch bulk request to index multiple records to your Elasticsearch cluster The frequency of data delivery to Amazon E S is determined by the Elasticsearch buffer size (1 MB to 100 MB) and buffer interval (60 seconds to 900 seconds) values whic hever comes first For the Amazon E S destination you can specify a retry duration (0 –7200 seconds) when creating a delivery stream Kinesis Data Firehose retries for the specified time duration and then skips that particular index request The skipped d ocuments are delivered to your S3 bucket in the elasticsearch_failed/ folder which you can use for manual backfill Amazon Kinesis Data Firehose can rotate your Amazon ES index based on a time duration Depending on the rotation option you choose (NoRotation OneHour Amazon Web Services Streaming Da ta Solutions on AWS 12 OneDay OneWeek or OneMonth ) Kinesis Data Firehose appends a portion of the Coordinated Universal Time ( UTC) arrival timestamp to your specified index name Custom HTTP endpoint or supported thirdparty service provider Kinesis Data Firehose can send data either to Custom HTTP endpoints or supported thirdparty providers such as Datadog Dynatrace LogicMonitor MongoDB New Relic Splunk and Sumo Logic Data delivery to custom HTTP endpoints For K inesis Data Firehose to successfully deliver data to custom HTTP endpoints these endpoints must accept requests and send responses using certain K inesis Data Firehose request and response formats When delivering data to an HTTP endpoint owned by a supported third party ser vice provider you can use the integrated AWS Lambda service to create a function to transform the incoming record(s) to the format that matches the format the service provider's integration is expecting For data delivery frequency each service provider has a recommended buffer size Work with your service provider for more information on their recommended buffer size For data delivery failure handling Kinesis Data Firehose establishes a connection with the HTTP endpoint first by waiting for a response from the destination Kinesis Data Firehose continues to establish connection until the retry duration expires After that Kinesis Data Firehose considers it a data delivery failure and backs up the data to your S3 bucket Summary Kinesis Data Firehose can persist ently deliver your streaming data to a supported destination It’s a fully managed solution requiring little or no development For Company ABC2Badge using K inesis Data Firehose was a natural choice They were already using Amazon Redshift as their data warehouse solution Because their data sources continuously wr ote to transaction logs they were able to leverage the Amazon Kinesis Agent to stream that data without writing any additional code Now that company ABC2Badge has created a stream of sensor records and are receiving these records via K inesis Data 
Firehose they can use this as the basis for the security team use case Amazon Web Services Streaming Data Solutions on AWS 13 Scenario 3: Preparing clickstream data for data insights processes Fast Sneakers is a fashion boutique with a focus on trendy sneakers The price of any given pair of shoes can go up or down depending on inventory and trends such as what celebrity or sports star was spotted wearing brand name sneakers on TV last night It is importan t for Fast Sneakers to track and analyze those trends to maximize their revenue Fast Sneakers does not want to introduce additional overhead into the project with new infrastructure to maintain They want to be able to split the development to the appropr iate parties where the data engineers can focus on data transformation and their data scientists can work on their ML functionality independently To react quickly and automatically adjust prices according to demand Fast Sneakers streams significant eve nts (like click interest and purchasing data) transforming and augmenting the event data and feeding it to a ML model Their ML model is able to determine if a price adjustment is required This allows Fast Sneakers to automatically modify their pricing t o maximize profit on their products Fast Sneakers realtime price adjustments This architecture diagram shows the real time streaming solution Fast Sneakers created utilizing Kinesis Data Streams AWS Glue and DynamoDB Streams By taking advantage of these services they have a solution that is elastic and reliable without Amazon Web Services Streaming Data Solutions on AWS 14 needing to spend time on setting up and maintaining the supporting infrastructure They can spend their time on what brings value to their company by focusing on a streaming extract transform load (ETL) job and their machine learning model To better understand the architecture and technologies that are used in their workload the following are some details of the services used AWS Glue and AWS Glue streaming AWS Glue is a fully managed ETL service that you can use to catalog your data clean it enrich it and move it reliably between data stores With AWS Glue you can significantly reduce the cost complexity and t ime spent creating ETL jobs AWS Glue is serverless so there is no infrastructure to set up or manage You pay only for the resources consumed while your jobs are running Utilizing AWS Glue you can create a consumer application with a n AWS Glue streaming ETL job This enables you to utilize Apache Spark and other Spark based modules writing to consume and process your event data The next section of this document goes into more depth about this scenario AWS Glue Data Catalog The AWS Glue Data Catalog contains references to data that is used as sources and targets of your ETL jobs in AWS G lue The AWS Glue Data Catalog is an index to the location schema and runtime metrics of your data You can use information in the Data Catalog to create and monitor your ETL jobs Information in the Data Catalog is stored as metadata tables where each table specifies a single data store By setting up a crawler you can automatically assess numerous types of data stores including DynamoDB S3 and Java Database Connectivity ( JDBC ) connected stores extract metadata and schemas and then create table de finitions in the AWS Glue Data Catalog To work with Amazon Kinesis Data Streams in AWS Glue streaming ETL jobs it is best practice to define you r stream in a table in a n AWS Glue Data Catalog database You define a stream sourced table with 
the Kinesis s tream one of the many formats supported (CSV JSON ORC Parquet Avro or a customer format with Grok) You can manually enter a schema or you can leave this step to your AWS Glue job to determine during runtime of the job Amazon Web Services Streaming Data Solutions on AWS 15 AWS Glue streaming ETL job AWS Glue runs your ETL jobs in an Apache Spark serverless environment AWS Glue runs these jobs on virtual resources that it provisions and manages in its own service account In addition to being able to run Apache Spark based jobs AWS Glue provides an additiona l level of functionality on top of Spark with DynamicFrames DynamicFrames are distributed tables that support nested data such as structu res and arrays Each record is self describing designed for schema flexibility with semi structured data A record in a DynamicFrame contains both data and the schema describing the data Both Apache Spark DataFrames and DynamicFrames are supported in you r ETL scripts and you can convert them back and forth DynamicFrames provide a set of advanced transformations for data cleaning and ETL By using Spark Streaming in your AWS Glue Job you can create streaming ETL jobs that run continuously and consume d ata from streaming sources like Amazon Kinesis Data Streams Apache Kafka and Amazon MSK The jobs can clean merge and transform the data then load the results into stores including Amazon S3 Amazon DynamoDB or JDBC data stores AWS Glue processes an d writes out data in 100 second windows by default This allows data to be processed efficiently and permits aggregations to be performed on data arriving later than expected You can configure the window size by adjusting it to accommodate the speed in response vs the accuracy of your aggregation AWS Glue streaming jobs use checkpoints to track the data that has been read from the Kinesis Data Stream For a walkthrough on creating a streaming ETL job in AWS Glue you can refer to Adding Streaming ETL Jobs in AWS Glue Amazon DynamoDB Amazon DynamoDB is a key value and document database that delivers single digit millisecond pe rformance at any scale It's a fully managed multi Region multi active durable database with built in security backup and restore and in memory caching for internet scale applications DynamoDB can handle more than ten trillion requests per day and c an support peaks of more than 20 million requests per second Change data capture for DynamoDB streams A DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table When you enable a stream on a table DynamoDB captures information abo ut every modification to data items in the table DynamoDB runs on Amazon Web Services Streaming Data Solut ions on AWS 16 AWS Lambda so that you can create triggers —pieces of code that automatically respond to events in DynamoDB streams With triggers you can build applications that react to data modification s in DynamoDB tables When a stream is enabled on a table you can associate the stream Amazon Resource Name (ARN) with a Lambda function that you write Immediately after an item in the table is modified a new record appears in the table's stream AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records Amazon SageMaker and Amazon SageMaker service endpoints Amazon SageMaker is a fully managed platform that enables developers and data scientists with the ability to build train and deploy ML models quickly and at any scale SageMaker includes modules that can be u sed 
together or independently to build train and deploy your ML models With Amazon SageMaker service e ndpoints you can create managed hosted endpoint for real time inference with a deployed model that you developed within or outside of Amazon SageMaker By utilizing the AWS SDK you can invoke a SageMaker endpoint passing content type information along with content and then receive real time predictions based on the data passed Th is enables you to keep the design and development of your ML models separated from your code that performs actions on the inferred results This enables your data scientists to focus on ML and the developers who are using the ML model to focus on how the y use it in their code For more information on how to invoke an endpoint in SageMaker see InvokeEnpoint in the Amazon SageMaker API Reference Infer ring data insights in real time The previous architecture diagram shows that Fast Sneakers’ existing web application added a Kinesis Data Stream containing click stream events which provides traffic and event data from the website The product catalog which contains information such as categorization product attributes and pricing and the order table which has data such as items ordered billing shipping and so on are s eparate DynamoDB tables The data stream source and the appropriate DynamoDB tables have their metadata and schemas defined in the AWS Glue Data Catalog to be used by the AWS Glue streaming ETL job Amazon Web Services Streaming Data Solutions on AWS 17 By utilizing Apache Spark Spark streaming and DynamicFr ames in their AWS Glue streaming ETL job Fast Sneakers is able to extract data from either data stream and transform it merging data from the product and order tables With the hydrated data from the transformation the datasets to get inference results from are submitted to a DynamoDB table The DynamoDB Stream for the table triggers a Lambda function for each new record written The Lambda function submits the previously transformed records to a SageMaker Endpoint with the AWS SDK to infer what if any price adjustments are necessary for a product If the ML model identifies an adjustment to the price is required the Lambda function write s the price change to the product in the catalog DynamoDB table Summary Amazon Kinesis Data Streams makes it easy to collect process and analyze real time streaming data so you can get timely insights and react quickly to new information Combined with the AWS Glue serverless data integration service you can create real time event streaming application s that prepare and combine data for ML Because both Kinesis Data Streams and AWS Glue services are fully managed AWS takes away the undifferentiated heavy lifting of managing infrastructure for your big data platform lettin g you focus on generating data insights based on your data Fast Sneakers can utilize real time event processing and ML to enable their website to make fully automated real time price adjustments to maximize their product stock This brings the most valu e to their business while avoiding the need to create and maintain a big data platform Scenario 4: Device sensors realtime anomaly detection and notifications Company ABC4Logistics transports highly flammable petroleum products such as gasoline liquid propane ( LPG) and naphtha from the port to various cities There are hundreds of vehicles which have multiple sensors installed on them for monitoring things such as location engine temperature temperature inside the container driving speed parking location road 
conditions and so on One of the requirements ABC4Logistics has is to monitor the temperatures of the engine and the container in realtime and alert the driver and the fleet monitoring team in case of any anomaly To Amazon Web Services Streaming Data Solutions on AWS 18 detect such conditions and generate alerts in real time ABC4Logistics implemented the following architecture on AWS ABC4Logistics ’s device sensors real time anomaly detection and notifications architectu re Data from device sensors is ingested by AWS IoT Gateway where the AWS IoT rules engine will make the streaming data available in Amazon Kinesis Data Streams Using Amazon Kinesis Data Analytics ABC4Logistics can perform the real time analytics on streaming data in Kinesis Data Streams Using Kinesis Data Analytics ABC4Logistics can detect if temperature readings from the sensors deviate from the normal readings over a period of ten seconds and ingest the record onto another Kinesis Data Streams instance identifying the anomalous records Amazon Kinesis Data Streams then invokes AWS Lambda functions which can send the alerts to the driver and the fleet monitoring team through Amazon SNS Data in Kinesis Data Stream s is also pushed down to Amazon Kinesis Data Firehose Amazon Kinesis Data Firehose persist s this data in Amazon S3 allowing ABC4Logistics to perform batch or near real time analytics on senso r data ABC4Logistics uses Amazon Athena to query data in S3 and Amazon QuickSight for visualizations For longterm data retention the S3 Lifecycle policy is used to archive data to Amazon S3 Glacier Important components of this architecture are detail ed next Amazon Web Services Streaming Data Solutions on AWS 19 Amazon Kinesis Data Analytics Amazon Kinesis Data Analytics enables you to transform and analyze streaming data and respond to anomalies in real time It is a se rverless service on AWS which means Kinesis Data Analytics takes care of provisioning and elastically sca les the infrastructure to handle any data throughput T his takes away all the undifferentiated heavy lifting of setting up and managing the streaming infrastructure and enables you to spend more time on writing steaming applications With Amazon Kinesis Data Analytics you can interactively query streaming da ta using multiple options including S tandard SQL Apache Flink applications in Java Python and Scala and build Apache Beam applications using Java to analyze data streams These options provide you with flexibility of using a specific approach depending on the complexity level of streaming application and source/target support The following section discuss es Kinesis Data Analytics for Flink Applications option Amazon Kinesis Data Analytics for Apache Flink applications Apache Flink is a popular open source framework and distributed processing engine for stateful computations over unbounded and bounded da ta streams Apache Flink is designed to perform computations at in memory speed and at scale with support for exactly one semantics Apache Flink based applications help achieve low latency with high throughput in a fault tolerant manner With Amazon Kinesis Data Analytics for Apache Flink you can author and run code against streaming sources to perform time series analytics feed real time dashboards and create real time metrics without managing the complex distributed Apache Flink environment You can use the high level Flink programming features in the same way that you use them when hosting the Flink infrastructure yourself Kinesis Data Analytics for 
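The alerting step of the ABC4Logistics pipeline described above can be a short function. The sketch below assumes the anomaly records are JSON documents on the second Kinesis data stream and that an SNS topic (the ARN shown is a placeholder) is subscribed to by the driver and the fleet monitoring team.

```python
import base64
import json
import os

import boto3

sns = boto3.client("sns")
# Topic subscribed to by the driver and the fleet-monitoring team (ARN is hypothetical).
ALERT_TOPIC_ARN = os.environ.get(
    "ALERT_TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:temperature-alerts"
)

def handler(event, context):
    """Invoked by Lambda's Kinesis event source mapping with a batch of anomaly records."""
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        message = (
            f"Anomalous temperature on vehicle {payload.get('vehicle_id')}: "
            f"engine={payload.get('engine_temp')}, container={payload.get('container_temp')}"
        )
        sns.publish(
            TopicArn=ALERT_TOPIC_ARN,
            Subject="ABC4Logistics temperature anomaly",
            Message=message,
        )
```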
Apache Flink enables you to create applications in Java Scala Python or SQL to process and analy ze streaming data A typical Flink application reads the data from the input stream or data location or source transform s/filter s or joins data using operator s or function s and store s the data on output stream or data location or sink The following architecture diagram shows some of the supported sources and sinks for the Kinesis Data Analytics Flink application In addition to the pre bundled connectors for source/sink you can also bring in custom connectors to a variety of other source/sinks for Flink Applications on Kinesis Data Analytics Amazon Web Services Streaming Data Solutions on AWS 20 Apache Flink application on Kinesis Data Analytics for real time stream processing Developers can use their preferred IDE to develop Flink applications and deploy them on Kinesis Data Analytics from AWS Management Console or DevOps tools Amazon Kinesis Data Analytics Studio As part of Kinesis Data An alytics service Kinesis Data Analytics Studio is available for customers to interactively query data streams in real time and easily build and run stream processing applications using SQL Python and Scala Studio notebooks are powered by Apache Zeppelin Using Studio notebook you have the ability to develop your Flink Application code in a notebook environment view results of your code in real time and visualize it within your notebook You can create a Studio Notebook powered by Apache Zeppelin and Apache Flink with a single click from Kinesis Data Streams and Amazon MSK console or launch it from Kinesis Data Analytics Console Once you develop the code iteratively as p art of the Kinesis Data Analytics Studio y ou can deploy a notebook as a Kinesis data analytics application to run in streaming mode continuously reading data from your sources writing to your destinations maintaining longrunning application state an d scaling automatically based on the throughput of your source streams Earlier customers used Kinesis Data Analytics for SQL Applications for such interactive analytics of real time streaming data on AWS Amazon Web Services Streaming Data Solutions on AWS 21 Kinesis Data Analytics for SQL applications is still available but for new projects AWS recommend s that you use the new Kinesis Data Analytics Studio Kinesis Data Analytics Studio combines ease of use with advanced analytical capabilities which makes it possible to build sophisticated stream processing applic ations in minutes For making the Kinesis Data Analytics Flink application faulttolerant you can make use of checkpointing and snapshots as described in the Implemen ting Fault Tolerance in Kinesis Data Analytics for Apache Flink Kinesis Data Analytics Flink application s are useful for writing complex streaming analytics applications such as applications with exactly one semantics of data processing checkpoint ing capabilities and processing data from data sources such as Kinesis Data Streams Kinesis Data Firehose Amazon MSK Rabbit MQ and Apache Cassandra including Custom Connectors After processing streaming data in the Flink application you can persist data to various sinks or destinations such as Amazon Kinesis Data Streams Amazon Kinesis Data Firehose Amazon DynamoDB Amazon Elasticsearch Service Amazon Timestream Amazon S3 and so on The Kinesis Data Analytics Flink application also provide s sub second performance guarantees Apache Beam applications for Kinesis Data Analytics Apache Beam is a programming model for processing 
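As an illustration of the kind of windowed query such an application runs, the following PyFlink sketch declares a Kinesis source and sink and flags ten-second windows whose average engine temperature exceeds a threshold; the same SQL could be run interactively in a Kinesis Data Analytics Studio notebook. The stream names, field names, region, and threshold are assumptions, and the connector options follow the Kinesis table connector bundled with Kinesis Data Analytics.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source: raw sensor readings from a Kinesis data stream (names and region are assumed).
t_env.execute_sql("""
    CREATE TABLE sensor_readings (
        vehicle_id   VARCHAR,
        engine_temp  DOUBLE,
        event_time   TIMESTAMP(3),
        WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kinesis',
        'stream' = 'vehicle-telemetry',
        'aws.region' = 'us-east-1',
        'scan.stream.initpos' = 'LATEST',
        'format' = 'json'
    )
""")

# Sink: anomalous aggregates written to a second Kinesis data stream.
t_env.execute_sql("""
    CREATE TABLE temperature_anomalies (
        vehicle_id      VARCHAR,
        window_end      TIMESTAMP(3),
        avg_engine_temp DOUBLE
    ) WITH (
        'connector' = 'kinesis',
        'stream' = 'temperature-anomalies',
        'aws.region' = 'us-east-1',
        'format' = 'json'
    )
""")

# Flag ten-second windows whose average engine temperature exceeds an assumed threshold.
t_env.execute_sql("""
    INSERT INTO temperature_anomalies
    SELECT vehicle_id,
           TUMBLE_END(event_time, INTERVAL '10' SECOND) AS window_end,
           AVG(engine_temp) AS avg_engine_temp
    FROM sensor_readings
    GROUP BY vehicle_id, TUMBLE(event_time, INTERVAL '10' SECOND)
    HAVING AVG(engine_temp) > 240
""")
```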
streaming data Apache Beam provides a portable API layer for building sophistica ted data parallel processing pipelines that may be run across a diversity of engines or runners such as Flink Spark Streaming Apache Samza and so on You can use the Apache Beam framework with your Kinesis data analytics application to process streaming data Kinesis data analytics applications that use Apache Beam use Apache Flink runner to run Beam pipelines Summary By making use of the AWS st reaming service s Amazon Kinesis Data Streams Amazon Kinesis Data Analytics and Amazon Kinesis Data Firehose ABC4Logistics : can detect anomalous patterns in temperature readings and notify the driver and the fleet management team in real time preventing major accidents such as complete vehicle breakdown or fire Amazon Web Services Streaming Data Solutions on AWS 22 Scenario 5: Real time telemetry data monitoring with Apache Kafka ABC1Cabs is an online cab booking services company All the cabs have IoT devices that gather telemetry data from the vehicles C urrently ABC1Cabs is running Apache Kafka clusters that are designed for real time event consumption gathering system health metrics activity tracking and feeding the data into Apache Spark Streaming platform b uilt on a Hadoop cluster on premises ABC1Cabs use Kibana dashboards for business metrics debugging alerting and creat ing other dashboards They are interested in Amazon MSK Amazon EMR with Spark Streaming and Amazon ES with Kibana dashboards Their requ irement is to reduce admin overhead of maintaining Apache Kafka and Hadoop clusters while using familiar open source software and APIs to orchestrate their data pipeline The following architecture diagram shows their solution on AWS Realtime processi ng with Amazon MSK and Stream processing using Apache Spark Streaming on EMR and Amazon Elasticsearch Service with Kibana for dashboards The cab IoT devices collect telemetry data and send to a source hub The source hub is configured to send data in real time to Amazon MSK Using the Apache Kafka producer library APIs Amazon MSK is configured to stream the data into an Amazon EMR cluster The Amazon EMR cluster has a Kafka client and Spark Streaming installed to be able to consume and process the streams of data Spark Streaming has sink connectors which can write data directly to defined indexes of Elasticsearch Elasticsearch cluster s with Kibana can be used for metrics and dashboards Amazon MSK Amazon EMR with Spark Streaming and Amazon ES with Kibana dashboards are all managed services where AWS manages the undifferentiated heavy lifting of infrastructure management of different clusters which enabl es you to build your application using familiar open source soft ware with few clicks The next secti on takes a closer look at these services Amazon Web Services Streaming Data Solutions on AWS 23 Amazon Managed Streaming for Apache Kafka (Amazon MSK) Apache Kafka is an open source platform that enables customers to capture streaming data like click stream events transactions IoT events and application and machine logs With this information you can develop applications that perform real time analyt ics run continuous transformations and distribute this data to data lakes and databases in real time You can use Kafka as a streaming data store to decouple applications from producer and consumers and enable reliable data transfer between the two comp onents While Kafka is a popular enterprise data streaming and messaging platform it can be difficult to set up scale and manage 
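On the producer side of this scenario, the source hub could publish telemetry to the cluster with a standard Apache Kafka client library. The following sketch uses the kafka-python package over TLS; the broker addresses are placeholders for the bootstrap string of the Amazon MSK cluster, and the topic name and event fields are assumptions.

```python
import json
import time

from kafka import KafkaProducer  # pip install kafka-python

# Placeholder bootstrap brokers; use the TLS bootstrap string returned for your MSK cluster.
producer = KafkaProducer(
    bootstrap_servers=[
        "b-1.example.kafka.us-east-1.amazonaws.com:9094",
        "b-2.example.kafka.us-east-1.amazonaws.com:9094",
    ],
    security_protocol="SSL",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    key_serializer=lambda k: k.encode("utf-8"),
)

def publish_reading(cab_id, engine_temp, speed_kmh):
    """Send one telemetry event; keying by cab keeps each vehicle's events in order."""
    event = {
        "cab_id": cab_id,
        "engine_temp": engine_temp,
        "speed_kmh": speed_kmh,
        "ts": int(time.time() * 1000),
    }
    producer.send("vehicle-telemetry", key=cab_id, value=event)

publish_reading("cab-1042", 96.5, 42.0)
producer.flush()
```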
in production. Amazon MSK takes care of these management tasks and makes it easy to set up, configure, and run Kafka, along with Apache ZooKeeper, in an environment that follows best practices for high availability and security. You can still use Kafka's control-plane and data-plane operations to manage producing and consuming data. Because Amazon MSK runs and manages open-source Apache Kafka, customers can migrate and run existing Apache Kafka applications on AWS without changing their application code.

Scaling

Amazon MSK offers scaling operations so that you can scale a cluster while it is running. When creating an Amazon MSK cluster, you specify the instance type of the brokers at cluster launch. You can start with a few brokers within an Amazon MSK cluster and then, using the AWS Management Console or AWS CLI, scale out to hundreds of brokers per cluster. Alternatively, you can scale your clusters by changing the size or family of your Apache Kafka brokers, which gives you the flexibility to adjust the cluster's compute capacity as your workloads change. Use the Amazon MSK Sizing and Pricing spreadsheet (file download) to determine the correct number of brokers for your Amazon MSK cluster; the spreadsheet estimates the size of an Amazon MSK cluster and compares its cost to a similar self-managed, EC2-based Apache Kafka cluster.

After creating the MSK cluster, you can increase the amount of EBS storage per broker, but you cannot decrease it. Storage volumes remain available during this scaling-up operation. Amazon MSK offers two types of storage scaling: automatic and manual. Amazon MSK supports automatic expansion of your cluster's storage in response to increased usage using Application Auto Scaling policies. Your automatic scaling policy sets the target disk utilization and the maximum scaling capacity, and the storage utilization threshold is what triggers an automatic scaling operation. To increase storage manually, wait for the cluster to be in the ACTIVE state. Storage scaling has a cooldown period of at least six hours between events. Although the operation makes additional storage available right away, the service performs optimizations on your cluster that can take 24 hours or more; the duration of these optimizations is proportional to your storage size. Amazon MSK also offers multi-Availability Zone replication within an AWS Region to provide high availability.

Configuration

Amazon MSK provides a default configuration for brokers, topics, and Apache ZooKeeper nodes. You can also create custom configurations and use them to create new MSK clusters or update existing clusters. When you create an MSK cluster without specifying a custom MSK configuration, Amazon MSK creates and uses a default configuration. For a list of default values, see Apache Kafka Configuration.

For monitoring purposes, Amazon MSK gathers Apache Kafka metrics and sends them to Amazon CloudWatch, where you can view them. The metrics that you configure for your MSK cluster are automatically collected and pushed to CloudWatch. Monitoring consumer lag enables you to identify slow or stuck consumers that aren't keeping up with the latest data available in a topic; when necessary, you can take remedial actions such as scaling or rebooting those consumers.

Migrating to Amazon MSK

Migrating
from on premise s to Amazon MSK can be achieved by one of the following methods • MirrorMaker20 — MirrorMaker20 (MM2) MM2 is a multi cluster data replication engine based on Apache Kafka Connect framework MM2 is a combination of an Apache Kafka source connector and a sink connector You can use a single MM2 cluste r to migrate data between multiple clusters MM2 automatically detects new topics and partitions while also ensuring the topic Amazon Web Services Streaming Data Solutions on AWS 25 configurations are synced between clusters MM2 supports migrations ACLs topics config urations and offset translation For mor e details related to migration see Migrating Clusters Using Apache Kafka's MirrorMaker MM2 is used for use cases related to replication of topics config urations and offs et translation automatically • Apache Flink — MM2 supports at least once semantics Records can be duplicated to the destination and the consumers are expected to be idempotent to handle duplicate records In exactly once scenarios semantics are required customers can use Apache Flink It provides an alternative to achieve exactly once semantics Apache Flink can also be used for scenarios where data requires mapping or transformation actions before submission to the destination cluster Apache Flink provi des connectors for Apache Kafka with sources and sinks that can read data from one Apache Kafka cluster and write to another Apache Flink can be run on AWS by launching an Amazon EMR cluster or by running Apache Flink as an application using Amazon Kinesis Data Analytics • AWS Lambda — With support for Apache Kafka as an event source for AWS Lambda customer s can now consume messages from a topic via a Lambda function The AWS Lambda service internally polls for new records or messages from the event source and then synchronously invokes the target Lambda function to consume these messages Lambda reads the messages in batches and provides the message batches to your function in the event payload for processing Consumed messages can then be transformed and/or written directly to your destination Amazon MSK cluster Amazon EMR with Spark Streaming Amazon EMR is a managed cluster platform that simplifies running big data frameworks such as Apache Hadoop and Apache Spark on AWS to process and analyze vast amounts of data Amazon EMR provides the capabilities of Spark and can be used to st art Spark streaming to consume data from Kafka Spark Streaming is an extension of the core Spark API that enables scalable high throughput fault tolerant stream processing of live data streams You c an create an Amazon EMR cluster using the AWS Command Line Interface (AWS CLI) or on the AWS Management C onsole and s elect Spark and Zeppelin in advanced Amazon Web Services Streaming Data Solutions on AWS 26 configurations while creating the cluster As shown in the following architecture diagram data can be ingested from many sources such as Apache Kafka and Kinesis Data Streams and can be processed using complex algorithms expressed with high level functio ns such as map reduce join and window For more information see Transformations on DStreams Processed data can be pushed out to file systems databases and live dashboards Realtime streaming flow from Apache Kafka to Hadoop ecosystem By default Apache Spark Streaming has a micro batch run model However since Spark 23 came out Apache has introduced a new low latency processing mode called Continuous Processing which can achieve end toend latencies as low as one millisecond with at least 
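One possible shape of the consumer on the Amazon EMR side is sketched below. It uses the Structured Streaming API (an alternative to the DStream-based API referenced in this scenario) to read the telemetry topic, parse the JSON payload, and land the result on Amazon S3; swapping in an Elasticsearch sink would follow the same pattern. The broker string, topic, schema, and paths are assumptions, and the job requires the Spark Kafka integration package to be available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

spark = SparkSession.builder.appName("cab-telemetry-stream").getOrCreate()

# Assumed shape of the JSON telemetry events.
schema = (
    StructType()
    .add("cab_id", StringType())
    .add("engine_temp", DoubleType())
    .add("speed_kmh", DoubleType())
)

raw = (
    spark.readStream.format("kafka")  # needs the spark-sql-kafka package on the cluster
    .option("kafka.bootstrap.servers", "b-1.example.kafka.us-east-1.amazonaws.com:9092")
    .option("subscribe", "vehicle-telemetry")
    .option("startingOffsets", "latest")
    .load()
)

events = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

# Land parsed events on S3; an Elasticsearch sink could be configured here instead.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/telemetry/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/telemetry/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```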
once guarantees Without changing the Dataset/DataFrames operations in your queries you can choose the mode based on your application requi rements Some of the benefits of Spark Streaming are : • It brings Apache Spark's language integrated API to stream processing letting you write streaming jobs the same way you write batch jobs • It supports Java Scala and Python • It can recover both lost work and operator state ( such as sliding windows) out of the box without any extra code on your part • By running on Spark Spark Streaming lets you reuse the same code for batch processing join streams against historical data or run ad hoc queries on the stream state and build powerf ul interactive applications not just analytics Amazon Web Services Streaming Data Solutions on AWS 27 • After the data stream is processed with Spark Streaming Elasticsearch Sink Connector can be used to write data to the Amazon ES cluster and in turn Amazon ES with Kibana dashboards can be used as consump tion layer Amazon Elasticsearch Service with Kibana Amazon ES is a managed service that makes it easy to deploy operate and scale Elasticsearch clusters in the AWS Cloud Elasticsearch is a popular open source search and analytics engine for use cases such as log analytics real time application monitoring and clickstream analysis Kibana is an open source data visualization and exploration tool used for log and time series analytics application monitoring and operational intelligence use cases It offers powerful and easy touse features such as histograms line graphs pie charts heat maps and built in geospatial support Kibana provides tight integration with Elasticsearch a popular analytics and search engine which makes Kibana the default choice for visualizing data stored in Elasticsearch Amazon ES provides an installation of Kibana with every Amazon ES domain You can find a link to Kibana on your domain dashboard on the Amazon ES console Summary With Apache Kafka o ffered as a managed service on AWS you can focus on consumption rather than on managing the coordination between the brokers which usually requires a detailed understanding of Apache Kafka Features such as h igh availability broker scalability and granular access control are managed by the Amazon MSK platform ABC1Cabs utilize d these services to build production application without needing infrastructure management expertise They could focus on the processing layer to consume data from Amazon MSK and further propagate to visualization layer Spark Streaming on Amazon EMR can help realtime analytics of streaming data and publish ing on Kibana on Amazon Elasticsearch Service for the visualization layer Amazon Web Services Streaming Data Solutions on AWS 28 Conclusion This document reviewed several scenarios for streaming workflow s In these scenarios streaming data processing provided the example companies with the ability to add new features and functional ity By analyzing data as it gets created you will gain insights into what your business is doing right now AWS streaming services enable you to focus on your application to make time sensitive business decisions rather than deploying and managing the infrastructure Contributors The following individuals and organizations contributed to this document: • Amalia Rabinovitch Sr Solutions Architect AWS • Priyanka Chaudhary Data Lake Data Architect AWS • Zohair Nasimi Solutions Architect AWS • Rob Kuhr Solutions Architect AWS • Ejaz Sayyed Sr Partner Solutions Architect AWS • Allan MacInnis Solutions Architect AWS • 
Chander Matrubhutam, Product Marketing Manager, AWS

Document versions

September 01, 2021: Updated for technical accuracy
September 07, 2017: First publication
Best Practices for Migrating from RDBMS to Amazon DynamoDB
This paper has been archived For the latest technical content refer t o the HTML version: : https://docsawsamazoncom/whitepapers/latest/best practicesformigratingfromrdbmstodynamodb/ welcomehtml Best Practices for Migrating from RDBMS to Amazon DynamoDB Leverage the Power of NoSQL for Suitable Workloads Nathaniel Slater March 2015 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 2 of 24 Contents Contents 2 Abstract 2 Introduction 2 Overview of Amazon DynamoDB 4 Suitable Workloads 6 Unsuitable Workloads 7 Key Concepts 8 Migrating to DynamoDB from RDBMS 13 Planning Phase 13 Data Analysis Phase 15 Data Modeling Phase 17 Testing Phase 21 Data Migration Phase 22 Conclusion 23 Cheat Sheet 23 Further Reading 23 Abstract Today software architects and developers have an array of choices for data storage and persistence These include not only traditional relational database management systems ( RDBMS) but also NoSQL databases such as Amazon DynamoDB Certain workloads will scale better and be more cost effective to run using a NoSQL solution This paper will highlight the best practices for migrating these workloads from an RDBMS to DynamoDB We will disc uss how NoSQL databases like DynamoDB differ from a traditional RDBMS and propose a framework for analysis data modeling and migration of data from an RDBMS into DynamoDB Introduction For decades the RDBMS was the de facto choice for data storage and persistence Any data driven application be it an e commerce website or an expense reporting This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 3 of 24 system was almost certain to use a relational database to retrieve and store the data required by the application T he reasons for this are numerous and include the following: • RDBMS is a mature and stable technology • The query language SQ L is feature rich and versatile • The servers that run an RDBMS engine are typically some of the most stable and powerful in the IT infrastructure • All major programming languages contain support for the drivers used to communicate with an RDBMS as well as a rich set of tools for simplifying the development of database driven applications These factors and many others have supported this incumbency of the RDBMS For architects and software developers there simply wasn’t a reasonable alternative for data storage and persistence – until now The growth of “internet scale” web applications such as e commerce and social media the explosion of connected devices like smart phones and tablets and the rise of big data have resulted in new workloads that t raditional relational database s are not well suited to handle As a system designed for transaction processing the fundamental properties that all RDBMS must support are defined by the acronym ACID: Atomicity Consistency Isolation and Durability Atom icity means “all or nothing” – a transaction executes completely or not at all Consistency means that the execution of a transaction causes a valid state transition Once the transaction has been committed the state of the resulting data must conform to the constraints imposed by the database schema Isolation requires that concurrent transactions execute separately from 
one another The isolation property guarantees that if concurrent transactions were executed in serial the end state of the data would be the same Durability requires that the state of the data once a transaction executes be preserved In the event of power or system failure the database should be able to recover to the last known state These ACID properties are all desirable but support for all four requires an architecture that poses some challenges for today’s data intensive workloads For example consistency requires a well defined schema and that all data stored in a database conform to that schema This is great for ad hoc queries and read heavy workloads For a workload consisting almost entirely of writes such a s the saving of a player ’s state in a gaming application this enforcement of schema is expensive from a storage and compute standpoint The game developer benef its little by forcing this data into rows and tables that relate to one another thr ough a welldefined set of keys This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 4 of 24 Consistency also requires locking some portion of the data until the transaction modifying it completes and then making the change immediately visible For a bank transaction which debits one account and credits another this is required This type of transaction is called “strongly consistent” For a social media application on the other hand there really is no requirement that all users see an update to a data feed at precisely the same time In this latter case the transaction is “eventually consistent” It is far more important that the social media application scale to handle potentially millions of simultaneous users even if those users see changes to the data at different times Scaling an RDBMS to handle this level of concurrency while maintaining strong consistency requires upgrading to more powerful (and often proprietary) hardware This is called “scaling up” or “vertical scaling” and it usually carries an extremely high cost The more cost effective way to scale a database to support this level of concurrency is to add server instances running on commodity hardware This is called “scaling out” or “horizontal scaling” an d it is typically far more cost effective than vertical scaling NoSQL databases like Amazon DynamoDB ad dress the scaling and performance challenges found with RDBMS The term “NoSQL” simply means that the database doesn’t follow the relational model e spoused by EF Codd in his 1970 paper A Relational Model of Data for Large Shared Data Banks 1 which would become the basis for all modern RDBMS As a result NoSQL databases vary much more widely in features and functionality than a traditional RDBMS T here is no common query language analogous to SQL and query flexibility is generally replaced by high I/O performance and horizontal scalability NoSQL databases don’t enforce the notion of schema in the same way as an RDBMS Some may store semi structured data like JSON Others may store r elated values as column sets Still others may simply store key/value pairs The net result i s that NoSQL databases trade some of the query capabilities and ACID properties of an RDBMS for a much more flexible dat a model that scales horizontally These characteristics make NoSQL databases an excellent choice in situations where use of an RDBMS for non relational workloads 
(like the aforementioned game state example) is resulting in some combination of performance bottlenecks operational complexity and rising cos ts DynamoDB offers solutions to all these problems and is an excellent platform for migrating these workloads off of an RDBMS Overview of Amazon DynamoDB Amazon DynamoDB is a fully managed NoSQL database service running in the AWS cloud The complexity of running a massively scalable distributed NoSQL database is managed by the service itself allowing software developers to focus on building applications rather than managing infrastructure NoSQL databases are designed for scale but their architectures are sophisticated and there can be significant operational 1 http://wwwseasupennedu/~zives/03f/cis550/coddpdf This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 5 of 24 overhead in running a large NoSQL cluster Instead of having to become experts in advanced distributed computing concepts the developer need only to learn DynamoDB’s straightforward API using the SDK for the programming language of choice In addition to being easy to use DynamoDB is also cost effective With D ynamoDB you pay for the storage you are consuming and the IO throughput y ou have provisioned It is designed to scale elastically When the storage and throughput requirements of an application are low only a small amount of capacity needs to be provisioned in the DynamoDB service As the number of users of an application g rows and the required IO throughput increases additional capacity can be provisioned on the fly This enables an application to seamlessly grow to support millions of users making thousands of concurrent requests to the database every second Tables are the fundamental construct for organizing and storing data in DynamoDB A table consists of items An item is composed of a primary key that uniquely identifies it and key/val ue pairs called attributes While an item is similar to a row in an RDBMS table all the items in the same DynamoDB table need not share th e same set of attributes in the way that all rows in a relational table share the same columns Figure 1 shows the structure of a DynamoDB table and the items it contains There is no concept of a column in a DynamoDB table Each item in the table can be expressed as a tuple containing an arbitrary number of elements up to a maximum size of 400 K This data model is well suited for storing data in the formats commonly used for object serializ ation and messaging in distributed systems As we will see in the next section workloads that involve this type of data are good candidates to migrate to DynamoDB Figure 1: DynamoDB Table Structure table items Attributes (name/value pairs) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 6 of 24 Tables and items are created updated and deleted through the DynamoDB API There is no conc ept of a standard DML language like there is in the relational database world Manipulation of data in DynamoDB is done programmatically through object oriented code It is possible to query data in a DynamoDB table but this too is done programmatically through the API Because there is no generic query language like SQL it’s important to 
unders tand your application’s data access patterns well in order to make the most effective use of DynamoDB Suitable Workloads DynamoDB is a NoSQL database which means that it will perform best for workloads involving non relational data Some of the more common use cases for non relational workloads are: • AdTech o Capturing browser cookie state • Mobile applications o Storing application data and session state • Gaming applications o Storing user preferences and application state o Storing player s’ game state • Consumer “voting” applications o Reality TV contests Superbowl commercials • Large Scale Websites o Session state o User data used for personalization o Access control • Application monitoring o Storing application log and event data o JSON data • Internet of Things o Sensor data and log ingestion This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 7 of 24 All of t hese use cases benefit from some combination of the features that make NoSQL databases so powerful Ad Tech applications typically require extremely low latency which is well suited for DynamoDB’s low single digit millisecond re ad and write performance Mobile applications and consumer voting applications often have millions of users and need to handle thousands of requests per second DynamoDB can scale horizontally to meet this load Finally application monitoring solutions typically ingest hundreds of thousands of data points per minute and DynamoDB’s sche maless data model high IO performance and support for a native JSON data type is a great fit for these types of applications Another important characteristic to consi der when determining if a workload is suitable for a NoSQL database like DynamoDB is whether it requires horizontal scaling A mobile application may have millions of users but each installation of the applicati on will only read and write session data fo r a single user This means the user session data in the DynamoDB table can be distributed across multiple storage partitions A read or write of data for a given user will be confined to a single partition This allows the DynamoDB table to scale horizontally —as more users are added more partitions are created As long as requests to read and write this data are uniformly d istributed across partitions DynamoDB will be able to handle a very large amount of concurrent data access This type of horizontal scaling is difficult to achieve with an RDBMS without the use of “sharding” which can add significant complexity to an a pplication’s data access layer When data in an RDBMS is “sharded” it is split across different database instances This requires maintaining an index of the instances and the range of data they contain In order to read and write data a client applic ation needs to know which shard contains the range of data to be read or written Sharding also adds administrative overhead and cost – instead of a single database instance you are now responsible for keeping several up and running It’s also important to evaluate the data consistency requirement of an application when determining if a workload would be suitable for DynamoDB There are actually two consistency models supported in DynamoDB: strong and eventual consistency with the former requiring more p rovisioned IO than the latter This flexibility allows the developer to get the best possible performance from the 
database while being able to support the consistency requirements of the application If an application does not require “strongly consisten t” reads meaning that updates made by one client do not need to be immediately visible to others then use of an RDBMS that will force strong consistency can result in a tax on performance with no net benefit to the application The reason is that strong consistency usually involves having to lock some portion of the data which can cause performance bottlenecks Unsuitable Workloads This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 8 of 24 Not all workloads are suitable for a NoSQL database like DynamoDB While in theory one could implement a classic entity relationship model using DynamoDB tables and items in practice this would be extremely cumbersome to work with Transactional systems that require well defined relationships between entities are still best implemented using a traditional RDBMS Some o ther unsuitable workloads include: • Adhoc queries • OLAP • BLOB storage Because DynamoDB does not support a standard query language like SQL and because there is no concept of a table join constructing ad hoc queries is not as efficient as it is with RDBMS Running such queries with DynamoDB is possible but requires the use of Amazon EMR and Hive Likewise OLAP applications are difficult to deliver as well because the dimensional data model used for analytical processing requires joining fact tables to d imension tables Finally due to the size limitation of a DynamoDB item storing BLOBs is often not practical DynamoDB does support a binary data type but this is not suited for storing large binary objects like images or documents However storing a pointer in the DynamoDB table to a large BLOB stored in Amazon S3 easily supports this last use case Key Concepts As described in the previous section Dynam oDB organizes data into tables consisting of items Each item in a DynamoDB table can define a n arbitrary set of attributes but all items in the table must define a primary key that uniquely identifies the item This key must contain an attribute known as the “hash key” a nd optionally an attribute called the “range ke y” Figure 2 shows the structure of a DynamoDB table defining both a hash and range key This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 9 of 24 Figure 2: DynamoDB Table with Hash and Range Keys If an item can be uniquely identified by a single attribute value then this attribute can function as the hash key In other cases an item may be uniquely identified by two values In this case the primary key will be defined as a composite of the has h key and the range key Figure 3 demonstrates this concept An RDBMS table relating media files with the codec used to trans code them can be modeled as a single table in DynamoDB using a primary key con sisting of a hash and range key Note how the data is de normalized in the DynamoDB table This is a common practice when migrating data from an RDBMS to a NoSQL database and will be discussed in more detail later in this paper Hash key Range key (DynamoDB maintains a sorted index) This paper has been archived For the latest technical content refer t o 
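The media-file example above could be created with a composite primary key using boto3, as in the following sketch. The table name, attribute names, and provisioned throughput values are illustrative only.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Composite primary key: media_file as the hash key, codec as the range key (names assumed).
dynamodb.create_table(
    TableName="media_transcoding",
    AttributeDefinitions=[
        {"AttributeName": "media_file", "AttributeType": "S"},
        {"AttributeName": "codec", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "media_file", "KeyType": "HASH"},
        {"AttributeName": "codec", "KeyType": "RANGE"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
)
dynamodb.get_waiter("table_exists").wait(TableName="media_transcoding")

# Items are denormalized: non-key attributes can differ from one item to the next.
boto3.resource("dynamodb").Table("media_transcoding").put_item(
    Item={"media_file": "intro.mp4", "codec": "h264", "bitrate_kbps": 4500}
)
```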
the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 10 of 24 Figure 3: Example of Hash and Range Keys The ideal hash key will contain a large number of distinct values uniformly distributed across the items in the table A user ID is a good example of an attribute that tends to be uniformly distributed across items in a table Attributes that would be modeled as lookup values o r enumerations in an RDBMS tend to make poor hash keys The reason is that certain values may occur much more frequently than others These concepts are shown in Figure 4 Notice how the counts of user_id are uniform whereas the counts of status_code a re not If the status_code is used as a hash key in a DynamoDB table the value that occurs most frequently will end up being stored on the same partition and this means that most reads and writes will be hitting that single partition This is called a “hot partition” and this will negatively impact performance select user_id count(*) as total from user_preferences group by user_id user_id total 8a9642f7 51554138bb63870cd45d7e19 1 31667c72 86c54afb82a1a988bfe34d49 1 693f8265 b0d240f1add0bbe2e8650c08 1 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 11 of 24 select status_code count(*) as total from status_code sc log l where lstatus_code_id = scstatus_code_id status_code total 400 125000 403 250 500 10000 505 2 Figure 4: Uniform and NonUniform Distribution of Potential Key Values Items can be fetched from a table using the primary key Often it is useful to be able to fetch items using a different set of values than the hash and the range keys DynamoDB supports these operations t hrough local and global secondary indexes A local secondary index uses the same hash key as defined on the table but a different attribute as the range key Figure 5 shows how a local secondary index is defined on a table A global secondary index can use any scalar attribute as the hash or range key Fetching items using secondary indexes is done using the query interface defined in the DynamoDB API Figure 5: A Local Secondary Index Because there are limits to the number of local and global secondary indexes that can exist per table it is important to fully understand the data access requirements of any application that uses DynamoDB for persistent storage In addition global secondary indexes require that attribute values be projected into the index What this means is that when an index is created a subset of attributes from the parent table need to be selected for inclusion into the index When an item is queried using a globa l secondary index the only attributes that will be populated in the returned item are those that have Range key LSI key Hash key This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 12 of 24 been projected into the index Figure 6 demonstrates this concept Note how the original hash and range key attributes are automatically promoted in the global secondary index Reads on global secondary indexes are always eventually consistent whereas local secondary indexes support eventual or strong consistency Finally both 
local and global secondary indexes use provisioned IO (discussed in detail below) for reads and writes to the index This means that each time an item is inserted or updated in the main table any secondary indexes will consume IO to update the index Figure 6: Create a global secondary index on a table Whenever an item is read from or written to a DynamoDB table or index the amount of data required to perform the read or write operation is expressed as a “read unit” or “write unit” A read unit consists of 4K of data and a write unit is 1K This means that fetching an item of 8K in size will consume 2 read units of data Inserting the item would consume 8 write units of data The number of read and write units allowed per second is known as the “provisioned IO” of the table If your application requires that 1000 4K items be written per second then the provisioned write capacity of the table would need to be a minimum of 4000 write units per second When an insufficient amount of read or write capacity is provisi oned on a table the DynamoDB service will “throttle” the read and write operations This can result in poor performance and in some cases throttling exceptions in the client application For this reason it is important to understand an application ’s IO requirements when designing the tables However both read and write capacity can be altered on an existing table and if an application suddenly experiences a spike in usage that results in throttling the provisioned IO can be increased to handle the n ew workload Similarly if load decreases for some reason the provisioned IO can be reduced This ability to dynamically alter the IO characteristics of a table is a key differentiator between DynamoDB and a traditional RDBMS in which IO throughput is going to be fixed based on the underlying hardware the database engine is running on Choose which attributes to promote (if any) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 13 of 24 Migrating to DynamoDB from RDBMS In the previous section we discussed some of the key features of DynamoDB as well as some of the key differences between DynamoDB and a traditional RDBMS In this section we will propose a strategy for migrating from an RDBMS to DynamoDB that takes into account these key features and differences Because database migrations tend to be complex and risky we advocate taking a phased ite rative approach As is the case with the adoption of any new technology it’s also good to focus on the easiest use cases first It’s also important to remember as we propose in this section that migration to DynamoDB doesn’t need to be an “all or not hing” process For certain migrations it may be feasible to run the workload on both DynamoDB and the RDBMS in parallel and switch over to DynamoDB only when it’s clear that the migration has succeeded and the application is working properly The follow ing state diagram expresses our proposed migration strategy: Figure 7: Migration Phases It is important to note that this process is iterative The outcome of certain states can result in a return to a previous state Oversights in the data analysis an d data modeling phase may not become apparent until testing In most cases it will be necessary to iterate over these phases multiple times before reaching the final data migration state Each phase will be discussed in detail in the 
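The capacity arithmetic described above, and the ability to change provisioned IO on a live table, can be exercised with boto3 as in the following sketch; the table name and capacity figures are assumptions.

```python
import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "user_preferences"  # assumed table name

# 1,000 writes per second of 4K items: a write unit covers 1K, so each item costs
# 4 write units and the table needs at least 4,000 provisioned write units per second.
required_write_units = 1000 * 4

dynamodb.update_table(
    TableName=TABLE,
    ProvisionedThroughput={
        "ReadCapacityUnits": 1000,  # illustrative read capacity
        "WriteCapacityUnits": required_write_units,
    },
)
# The table_exists waiter polls until the table returns to ACTIVE after the update.
dynamodb.get_waiter("table_exists").wait(TableName=TABLE)
```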
sections that follo w Planning Phase The first part of the planning phase is to identify the goals of the data migration These often include (but are not limited to): • Increasing application performance • Lowering costs • Reducing the load on an RDBMS In many cases the goals of a migration may be a combination of all of the above Once these goals have been defined they can be used to inform the identification of the This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 14 of 24 RDMBS tables to migrate to DynamoDB As we mentioned previously tables being used by workloads involving non relational data make excellent choices for migration to DynamoDB Migration of such tables to DynamoDB can result in significantly improved application performance as well as lower costs and lower loads on the RDBMS Some good candidates for migration are: • Entity Attribute Value tables • Application session state tables • User preference tables • Logging tables Once the tables have been identified any characteristics of the source tables that may make migration challenging should b e documented This information will be essential for choosing a sound migration strategy Let’s take a look at some of the more common challenges that tend to impact the migration strategy : Challenge Impact on Migration Strategy Writes to the RDBMS sour ce table cannot be acquiesced before or during the migration Synchronization of the data in the target DynamoDB table with the source will be difficult Consider a migration strategy that involves writing data to both the source and target tables in parallel The amount of data in the source table is in excess of what can reasonably be transferred with the existing network bandwidth Consider exporting the data from the source table to removable disks and using the AWS Import/Export service to import the data to a bucket in S3 Import this data into DynamoDB directly from S3 Alternatively reduce the amount of data that needs to be migrated by exporting only those records that were created after a recent point in time All data older than that point will remain in the legacy table in the RDBMS The data in the source table needs to be transformed before it can be imported into Export the data from the source table and transfer it to S3 Consider using EMR to perform the This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 15 of 24 Challenge Impact on Migration Strategy DynamoDB data transforma tion and import the transformed data into DynamoDB The primary key structure of the source table is not portable to DynamoDB Identify column(s) that will make suitable hash and range keys for the imported items Alternatively consider adding a surrog ate key (such as a UUID) to the source table that will act as a suitable hash key The data in the source table is encrypted If the encryption is being managed by the RDBMS then the data will need to be decrypted when exported and re encrypted upon import using an encryption scheme enforced by the application not the underlying database engine The cryptographic keys will need to be managed outside of DynamoDB Table 1: Challenges that Impact Migration Strategy Finally and perhaps most importantly the 
backup and recovery process should be defined and documented in the planning phase If the migration strategy requires a full cutover from the RDBMS to DynamoDB defining a process for restoring functionality using the RDBMS in the event the migration fails is essential To mitigate risk consider running the workload on DynamoDB and the RDBMS in parallel for some length of time In this scenario the legacy RDBMS based application can be disabled only once the workload has been sufficiently tested in production using DynamoDB Data Analysis Phase The purpose of the data analysis phase is to understand the composition of the source data and to identify the data access patterns used by the application This information is required input into the data modeling phase It is also essential for understanding the cost and performance of running a workload on DynamoDB The analysis of the source data should include: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 16 of 24 • An estimate of the number of items to be imported into DynamoDB • A distribution of the item sizes • The multiplicity of values to be used as hash or range keys DynamoDB pricing contains two main components – storage and provisioned IO By estimating the number of items that will be imported into a DynamoDB table and the approximate size of each item the storage and the provisioned IO requirements for the table can be calculated Common SQL data types will map to one of 3 scalar types in DynamoDB: string number and binary The length of the number data type is variable and strings are encoded using binary UTF 8 Focus should be placed on the attributes with the largest values when estimating item size as provisioned I OPS are given in integral 1K increments —there is no concept of a fractional IO in DynamoDB If an item is estimated to be 33K in size it will require 4 1K write IO units and 1 4K read IO unit to write and read a single item respectively Since the siz e will be rounded to the nearest kilobyte the exact size of the numeric types is unimportant In most cases even for large numbers with high precision the data will be stored using a small number of bytes Because each item in a table may contain a var iable number of attributes it is useful to compute a distribution of item sizes and use a percentile value to estimate item size For example one may choose an item size representing the 95th percentile and use this for estimating the storage and provisioned IO costs In the event that there are too many rows in the source table to inspect individually take samples of the source data and use these for computing the item size distribution At a minimum a table should have enough provisioned read and write units to read and write a single item per second For example if 4 write units are required to write an item with a size equal to or less than the 95 th percentile than the table should have a minimum provisioned IO of 4 write units per second Anything less than this and the write of a single item will cause throttling and degrade performance In practice the number of provisioned read and write units will be much larger than the required minimum An application using DynamoDB for data storage will typically need to issue read and writes concurrently Correctly estimating the provisioned IO is key to both guaranteeing the required application performance as 
well as understanding cost Understanding the distribution frequency of RDBMS colu mn values that could be hash or range keys is essential for obtaining maximum performance as well Columns containing values that are not uniformly distributed (ie some values occur in much larger numbers than others) are not good hash or range keys because accessing items with keys occurring in high frequency will hit the same DynamoDB partitions and this has negative performance implications The second purpose of the data analysis phase is to categorize the data access patterns of the application Because DynamoDB does not support a generic query language like SQL it is essential to document that ways in which data will be written to and read from This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 17 of 24 the tables This information is critical for the data modeling phase in which the tables the key structure and the indexes will be defined Some com mon patterns for data access are: • Write Only – Items are written to a table and never read by the application • Fetches by distinct value – Items are fetched in dividually by a value that uniquely identifies the item in the table • Queries across a range of values – This is seen frequently with temporal data As we will see in the next section documentation of an application’s data access patterns using categories such as those described above will drive much of the data modeling decisions Data Modeling Phase In this phase the tables hash and range keys and secondary indexes w ill be defined The data model produced in this phase must support the data access patterns described in the data analysis phase The first step in data modeling is to determine the hash and range keys for a table The primary key whether consisting only of the hash key or a composite of the hash and range key must be unique for all items in the table When migrating data from an RDBMS it is tempting to use the primary key of the source table as the hash key But in reality this key is often semantically meaningless to the application For example a User table in an RDBMS may define a numeric primary key but an application responsible for logging in a user will ask for an email address not the numeric user ID In this case the email address is the “natural key” and would be better suited as the hash key in the DynamoDB table as items can easily be fetched by their hash key values Modeling the hash key in this way is appropriate for data access patterns that fetch items by distinct value For other data access patterns like “write only” using a randomly generated numeric ID will work well for the hash key In this case the items will never be fetched from the table by the application and as such the key will only be used to uniquely identify the items not a means of fetching data RDBMS tables that contain a unique index on two key values are good candidates for defining a primary key using both a hash key and a range key Intersection tables used to define many tomany relationships in an RDBMS are typically modeled using a unique index on the key values of both sides of the relationship Because fetching data i n a many tomany relationship requires a series of table joins migrating such a table to DynamoDB would also involve de normalizing the data (discussed in more detail below) Date values are also often used as range 
keys A table counting the number of t imes a URL was visited on any given day could define the URL as the hash key and the date as the range key As with primary keys consisting solely of a hash key fetching items with a composite primary key requires the application to specify both the hash and range key This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 18 of 24 values This needs to be considered when evaluating whether a surrogate key or a natural key would make the better choice for the hash and or range key Because non key attributes can be added to an item arbitrarily the only attributes th at must be specified in a DynamoDB table definition are the hash key and (optionally) the range key However if secondary indexes are going to be defined on any non key attributes then these must be included in the table definition Inclusion of non key attributes in the table definition does not impose any sort of schema on all the items in the table Aside from the primary key each item in the table can have an arbitrary list of attributes The support for SQL in an RDBMS means that records can be f etched using any of the column values in the table These queries may not always be efficient – if no index exists on the column used to fetch the data a full table scan may be required to locate the matching rows The query interface exposed by the Dyn amoDB API does not support fetching items from a table in this way It is possible to do a full table scan but this is inefficient and will consume substantial read units if the table is large Instead items can be fetched from a DynamoDB table by the primary key of the table or the key of a local or global secondary index defined on the table Because an index on a non key column of an RDBMS table suggests that the application commonly queries for data on this value these attributes make good candidates for local or global secondary indexes in a DynamoDB table There are limits to the number of secondary indexes allowed on a DynamoDB table 2 so it is important to choose define keys for these indexes using attribute values that the application will use most frequently for fetching data DynamoDB does not support the concept of a table join so migrating data from an RDBMS table will often re quire denormalizing the data To those used to working with an RDBMS this will be a foreign and perhaps uncomfortable concept Since the workloads most suitable for migrating to DynamoDB from an RDMBS tend to involve nonrelational data denormalizatio n rarely poses the same issues as it would in a relational data model For example if a relational database contains a User and a UserAddress table related through the UserID one would combine the User attributes with the Address attributes into a sing le DynamoDB table In the relational database normalizing the User Address information allows for multiple addresses to be specified for a given user This is a requirement for a contact management or CRM system But in DynamoDB a User table would likely serve a different purpose —perhaps keeping track of a mobile application’s registered users In this use case the multiplicity of Users to Addresses is less important than scalability and fast retrieval of user records 2 http://docsawsamazoncom/amazondynamodb/latest/developerguide/Limitshtml This paper has been archived For the latest technical content refer t 
o the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Data Modeling Example

Let's walk through an example that combines the concepts described in this section and the previous one. The example demonstrates how to use secondary indexes for efficient data access, and how to estimate both item size and the amount of provisioned I/O required for a DynamoDB table. Figure 8 contains an ER diagram for a schema used to track events when processing orders placed online through an e-commerce portal. Both the RDBMS and DynamoDB table structures are shown.

Figure 8: RDBMS and DynamoDB schema for tracking events

The number of rows to be migrated is far too large for the 95th percentile of item size to be computed iteratively. Instead, we perform simple random sampling with replacement, which gives adequate precision for the purpose of estimating item size. To do this, we construct a SQL view that contains the fields that will be inserted into the DynamoDB table. Our sampling routine then randomly selects rows from this view and computes the size representing the 95th percentile. This statistical sampling yields a 95th percentile item size of 6.6 KB, most of which is consumed by the "Data" attribute (which can be as large as 6 KB).

The minimum number of write units required to write a single item is:

ceiling(6.6 KB per item / 1 KB per write unit) = 7 write units per item

The minimum number of read units required to read a single item is computed similarly:

ceiling(6.6 KB per item / 4 KB per read unit) = 2 read units per item

This particular workload is write heavy, and we need enough I/O to write 1,000 events for each of 500 orders per day. This is computed as follows:

500 orders per day × 1,000 events per order = 5 × 10^5 events per day
5 × 10^5 events per day / 86,400 seconds per day = 5.78 events per second
ceiling(5.78 events per second × 7 write units per item) = 41 write units per second

Reads on the table happen only once per hour, when the previous hour's data is imported into an Amazon Elastic MapReduce cluster for ETL. This operation uses a query that selects items from a given date range, which is why the EventDate attribute is both a range key and a global secondary index. The number of read units required to retrieve the results of a query (provisioned on the global secondary index) is based on the size of the results returned by the query:

5.78 events per second × 3,600 seconds per hour = 20,808 events per hour
20,808 events per hour × 6.6 KB per item / 1,024 KB = 134.11 MB per hour

The maximum amount of data returned by a single query operation is 1 MB, so pagination is required: each hourly read query must read roughly 135 pages of data. For strongly consistent reads, 256 read units are required to read a full page at a time (half as many for eventually consistent reads). So, to support this particular workload, 256 read units and 41 write units are required. From a practical standpoint, the write units would likely be expressed as an even number such as 48.
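This arithmetic is easy to capture in a short script so that the provisioning estimate can be re-run whenever the sampled item size or the write volume changes. The following is a minimal Python sketch using the sample values from this example; the item size and order volumes are illustrative inputs, while the 1 KB write unit, 4 KB strongly consistent read unit, and 1 MB query page size are the DynamoDB constants referenced above. The resulting numbers are what would be supplied as provisioned throughput for the table and its EventDate global secondary index.

```python
import math

# Sample inputs from the example above (not universal values).
ITEM_SIZE_KB = 6.6        # 95th percentile item size from the sampling routine
ORDERS_PER_DAY = 500
EVENTS_PER_ORDER = 1000

WRITE_UNIT_KB = 1.0       # 1 KB per write capacity unit
READ_UNIT_KB = 4.0        # 4 KB per strongly consistent read capacity unit
QUERY_PAGE_KB = 1024.0    # a single Query call returns at most 1 MB

writes_per_second = (ORDERS_PER_DAY * EVENTS_PER_ORDER) / 86_400   # ~5.79

write_units_per_item = math.ceil(ITEM_SIZE_KB / WRITE_UNIT_KB)     # 7
read_units_per_item = math.ceil(ITEM_SIZE_KB / READ_UNIT_KB)       # 2
provisioned_write_units = math.ceil(writes_per_second * write_units_per_item)  # 41

# Hourly ETL read: one date-range query against the EventDate global secondary index.
events_per_hour = writes_per_second * 3600
mb_per_hour = events_per_hour * ITEM_SIZE_KB / 1024                # ~134 MB
pages_per_hour = math.ceil(mb_per_hour)                            # ~135 one-MB pages
read_units_per_page = math.ceil(QUERY_PAGE_KB / READ_UNIT_KB)      # 256 strongly consistent

print(f"write units/item: {write_units_per_item}, read units/item: {read_units_per_item}")
print(f"provisioned writes/sec: {provisioned_write_units}")
print(f"hourly read: {mb_per_hour:.1f} MB over {pages_per_hour} pages, "
      f"{read_units_per_page} read units per page")
```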
We now have all the data we need to estimate the DynamoDB cost for this workload:

1. Number of items to be migrated
2. Item size (7 KB)
3. Write units (48)
4. Read units (256)

These can be run through the Amazon Simple Monthly Calculator (http://calculator.s3.amazonaws.com/index.html) to derive a cost estimate.

Testing Phase

The testing phase is the most important part of the migration strategy. It is during this phase that the entire migration process is tested end to end. A comprehensive test plan should contain at least the following:

• Basic Acceptance Tests – These tests should be executed automatically upon completion of the data migration routines. Their primary purpose is to verify whether the data migration was successful. Common outputs include the total number of items processed, imported, and skipped, and the total number of warnings and errors. If any of these totals deviates from the expected values, the migration was not successful and the issues must be resolved before moving to the next step in the process or the next round of testing (a minimal verification sketch appears at the end of this section).

• Functional Tests – These tests exercise the functionality of the application(s) that use DynamoDB for data storage, and include a combination of automated and manual tests. Their primary purpose is to identify problems in the application caused by the migration of the RDBMS data to DynamoDB. It is during this round of testing that gaps in the data model are often revealed.

• Non-Functional Tests – These tests assess the non-functional characteristics of the application, such as performance under varying levels of load and resiliency to failure of any portion of the application stack. They can also include boundary or edge cases that are low probability but could negatively impact the application (for example, a large number of clients trying to update the same record at exactly the same time). The backup and recovery process defined in the planning phase should also be included in non-functional testing.

• User Acceptance Tests – These tests should be executed by the end users of the application(s) once the final data migration has completed. Their purpose is for the end users to decide whether the application is sufficiently usable to meet its primary function in the organization.

Table 2: Data Migration Test Plan

Because the migration strategy is iterative, these tests will be executed numerous times. For maximum efficiency, consider testing the data migration routines against a sample of the production data if the total amount of data to migrate is large. The outcome of the testing phase will often require revisiting a previous phase in the process. The overall migration strategy becomes more refined with each iteration, and once all the tests have executed successfully, it is a good indication that it is time for the next and final phase: data migration.
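As a concrete illustration of a basic acceptance test, the Python (boto3) sketch below compares the number of items that actually landed in DynamoDB with the count reported by the source system. The table name and expected count are hypothetical placeholders; a full test harness would also reconcile skipped items, warnings, and errors from the migration logs rather than relying on a single total.

```python
import boto3

# Hypothetical placeholders -- substitute the migrated table name and the
# row count reported by the migration routine or the source database.
TABLE_NAME = "OrderEvents"
EXPECTED_ITEM_COUNT = 1_000_000   # e.g. SELECT COUNT(*) from the source table

dynamodb = boto3.client("dynamodb")

# Paginated COUNT scan: cheaper than returning the items themselves, but it still
# consumes read capacity, so run it during a maintenance window for large tables.
migrated = 0
for page in dynamodb.get_paginator("scan").paginate(TableName=TABLE_NAME, Select="COUNT"):
    migrated += page["Count"]

if migrated != EXPECTED_ITEM_COUNT:
    raise SystemExit(f"Acceptance check failed: expected {EXPECTED_ITEM_COUNT}, found {migrated}")
print(f"Acceptance check passed: {migrated} items migrated")
```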
Data Migration Phase

In the data migration phase, the full set of production data from the source RDBMS tables is migrated into DynamoDB. By the time this phase is reached, the end-to-end data migration process will have been tested and vetted thoroughly. All the steps of the process will have been carefully documented, so running it on the production data set should be as simple as following a procedure that has been executed numerous times before.

In preparation for this final phase, a notification should be sent to the application users alerting them that the application will be undergoing maintenance and, if required, downtime. Once the data migration has completed, the user acceptance tests defined in the previous phase should be executed one final time to ensure that the application is in a usable state. If the migration fails for any reason, the backup and recovery procedure, which will also have been thoroughly tested and vetted at this point, can be executed. When the system is back to a stable state, a root cause analysis of the failure should be conducted and the data migration rescheduled once the root cause has been resolved. If all goes well, the application should be closely monitored over the next several days until there is sufficient data indicating that it is functioning normally.

Conclusion

Leveraging DynamoDB for suitable workloads can result in lower costs, a reduction in operational overhead, and an increase in performance, availability, and reliability when compared to a traditional RDBMS. In this paper we proposed a strategy for identifying and migrating suitable workloads from an RDBMS to DynamoDB. While implementing such a strategy requires careful planning and engineering effort, we are confident that the ROI of migrating to a fully managed NoSQL solution like DynamoDB will greatly exceed the upfront cost associated with the effort.

Cheat Sheet

The following "cheat sheet" summarizes some of the key concepts discussed in this paper and the sections where those concepts are detailed:

• Determining suitable workloads – Suitable Workloads
• Choosing the right key structure – Key Concepts
• Table indexing – Data Modeling Phase
• Provisioning read and write throughput – Data Modeling Example
• Choosing a migration strategy – Planning Phase

Further Reading

For additional help, please consult the following sources:

• DynamoDB Developer Guide: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStartedDynamoDB.html
• DynamoDB website: http://aws.amazon.com/dynamodb
General
Oracle_WebLogic_Server_12c_on_AWS
ArchivedOracle WebLogic Server 12c on AWS December 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 2 © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or service s each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 3 Contents Introduction 5 Oracle WebLogic on AWS 6 Oracle WebLogic Components 6 Oracle WebLogic Architecture on AWS 8 Auto Scaling your Oracle WebLogic Cluster 15 Monitoring your Infrastructure 19 AWS Security and Compliance 20 Oracle WebLogic on AWS Use Cases 23 Conclusion 24 Contributors 25 Document Revisions 25 ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 4 Abstract This whitepaper provides guidance on how to deploy Oracle WebLogic Server 12cbased applications on AWS This paper provides a reference architecture and information about best practices for high availability security scalability and performance when yo u deploy Oracle WebLogic Server 12cbased applications on AWS Also included is information about cost optimization using AWS A uto Scaling The target audience of this whitepaper is Solution Architects Systems Architects and System Administrators with a basic understanding of cloud computing AWS and Oracle WebLogic 12c ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 5 Introduction Many enterprises today rely on J2EE application servers for deploying their mission critical applications Oracle Web Logic Server is a popular Java application server for deploying such applications You can use various AWS services to deploy Oracle WebLogic Server 12cbased applications on AWS in a secure highly available and cost effective manner With auto scaling you can dynamically scale the compute resou rces required for your application thereby keeping your costs low and using Amazon Elastic File System (EFS) for shared storage This whitepaper assumes that you have a basic understanding of Amazon Web Services For an overview of AWS Services see Overview of Amazon Web Services ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 6 Oracle WebLogic on AWS It is important to have a good understanding of the architecture of Oracle WebLogic Server 12c ( Oracle WebLogic ) and the major WebLogic components to successfully deploy and configure it on AWS Oracle WebLogic Components This diagram shows the major components of Oracle WebLogic Application Server Each WebLogic deployment has a WebLogic Domain which typically contains multiple WebLogic Server instances A WebLogic domain is the basic unit of administration for WebLogic Server instances : it is a group of logically related WebLogic Server resources For 
example you can have one WebLogic domain for each application There are two types of WebLogic Server instances in a domain : a single Administration Server and one or more Managed S ervers Each WebLogic Server instance runs its own Java Virtual Machine (JVM) and can be configured individually You deploy and run your web applications EJBs and other resources on the Managed S erver instances T he Administration S erver is used ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 7 to configur e manage and monitor the resources in the domain including the Managed Server instances WebLogic Server instances referred to as WebLogic Server Machines can run on physical or virtual servers ( such as Amazon EC2) or in conta iners The Node Manager is a utility used to start stop or restart the Administration server or Managed Server instances You can create a group of multiple WebLogic Managed Servers which is known as a WebLogic cluster WebLogic clusters support load ba lancing and failover and are required for high a vailability and scalability of your production deployments You should deploy your WebLogic cluster across multiple WebLogic Machines so that the loss of a single WebLogic Machine does not impact the availabi lity of your application ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 8 Oracle WebLogic Architecture on AWS This reference architecture diagram shows how you can deploy a web application on Oracle WebLogic on AWS This is a basic combined tier architecture with static HTTP pages servlets and EJBs that are deployed together in a single WebLogic cluster You can also deploy the static HTTP pages and servlets to a separate WebLogic cluster and the EJBs to another WebLogic cluster For more information about WebLogic architectural patterns see the Oracle WebLogic Server documentation This reference architecture includes a WebLogic domain with one Administrative Server and multiple Managed Servers These Managed Servers are part of a WebLogic cluster and are deployed on EC2 instances (WebLogic Machines) across two Availability Zones for high availability The application is deployed to the Managed Servers in the cluster that spans the two Availability Zones Amazon EFS is used for shared storage ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 9 AWS Availability Zones The AWS Cloud infrastructure is built around AWS Regions and Availability Zones AWS Regions provide multiple physically se parated and isolated Availability Zones which are connected with low latency high throughput and highly redundant networking Availability Zones consist of one or more discrete data centers each with redundant power networking and connectivity and housed in separate facilities as shown in the following diagram These Availability Zones enable you to operate production applications and databases that are more highly available fault tolerant and scalable than is possible from a single data center You can deploy your application on EC2 instances across multiple zones In the unlikely event of failure of one Availability Zone user requests are routed to your application instances in the second zone This ensures that your application continues to rem ain available at all times Traffic Distribution and Load Balancing Amazon Route 53 DNS is used to direct users to your application deployed on Oracle WebLogic on AWS Elastic Load Balancing (ELB) is used to distribute incoming requests across the WebLogic Managed Servers deployed on Amazon EC2 instances in multiple Availability Zones 
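Where the load balancer is an Application Load Balancer, registering the Managed Server instances with its target group can be done programmatically, which is also the hook that automation (for example, scaling scripts) would use. The following boto3 sketch is illustrative only: the target group ARN, instance IDs, and the assumption that the Managed Servers listen on port 7001 are placeholders, not values prescribed by this architecture.

```python
import boto3

# Hypothetical identifiers -- substitute your own target group ARN and instance IDs.
TARGET_GROUP_ARN = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                    "targetgroup/weblogic-managed/abc1234567890def")
MANAGED_SERVER_INSTANCE_IDS = ["i-0aaa1111bbbb22222", "i-0ccc3333dddd44444"]
MANAGED_SERVER_PORT = 7001   # assumed Managed Server listen port

elbv2 = boto3.client("elbv2")

# Register the EC2 instances hosting WebLogic Managed Servers with the load
# balancer's target group; target group health checks then decide which
# instances receive traffic.
elbv2.register_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": i, "Port": MANAGED_SERVER_PORT} for i in MANAGED_SERVER_INSTANCE_IDS],
)
```

The same call (and its counterpart, deregister_targets) can be issued from instance startup and shutdown scripts so that registration tracks the lifecycle of each WebLogic Machine.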
The load balancer serves as a single point of contact for client requests which enables you to increase the availability of your application You can add and remove WebLogic Managed Server instances from your load balancer as your needs change either manually or with Auto Scaling without disrupting the overall flow of information ELB ensures that only healthy ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 10 instances receive traffic by detecting unhealthy instances and rerouting traffic across the re maining healthy instances If an instance fails ELB automatically reroutes the traffic to the remai ning running instances If a fai led instance is restored ELB restores the traffic to that instance Use Multiple Availability Zones for High Availability Each Availability Zone is isolated from other Availability Zones and runs on its own physically distinct independent infrastructure The likelihood of two Availability Zones experiencing a failure at the same time is relatively small To ensure high availability of your application you can deploy your WebLogic Managed Server instances across multiple Availability Zones You then deploy your application on the Managed Servers in the WebLogic cluster which spans two Availability Zones In the unlikely event of an Availability Zone failure user requests to the zone with the failure are routed by Elastic Load Balancing to t he Managed Servers deployed in the second Availability Zone This ensures that your application continues to remain available regardless of a zone failure You can configure WebLogic to replicate the HTTP session state in memory to another Managed Server in the WebLogic cluster WebLogic tracks the location of the Managed Server s hosting the primary and the replica of the session state using a cookie If the Managed Server hosting the primary copy of the session state fails WebLogic can retrieve th e HTTP session state from the replica For more information about HTTP session state replication see the Oracle WebLogic documentation For shared storage you can use Amazon EFS which is designed to be highly available and durable Your data in Amazon EFS is redundantly stored across multiple Availability Zones which means that your data is available if there is an Availability Zone failure For information a bout how to use Amazon EFS for shared storage see the Shared Storage section Administration Server High Availability The Administration Server is used to configure manage and monitor the resources in the domain including the Managed Server instances Because the failure of the Administration Server does not affect the functioning of the Managed Servers in the domain the Managed Servers continue to run and you r ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 11 application is still available However if the Administration Server fails the WebLogic administration console is unavailable and you cannot make changes to the domain configuration If the underlying host for the Administration Server experiences a failure you can use the Amazon EC2 Auto Recovery feature to recover the failed server instances When using Amazon EC2 Auto Recovery several system status checks monitor the instance and the other components that need to be running for your instance to function as expected Among other th ings the system status checks look for loss of network connectivity loss of system power software issues on the physical host and hardware issues on the physical host If a system status check of the underlying hardware fails the 
instance will be rebo oted (on new hardware if necessary) but will retain its instance ID IP address Elastic IP addresses EBS volume attachments and other configuration details Another option is to put the Administration Server instances in an Auto Scaling group that spans multiple Availability Zones and set the minimum and maximum size of the group to one Auto Scaling ensures that an instance of the Administration Server is running in the selected Availability Zones This solution ensures high availability of the Adminis tration Server if a zone failure occurs Storage If you use file based persistence you must have storage for the WebLogic product binaries common files and scripts the domain configuration files logs and persistence stores for JMS and JTA You can either use shared storage or Amazon EBS volumes to store these files Shared Storage To store the shared files related to your WebLogic deployment you can use Amazon EFS which supports NFSv4 and will be mounted by all the instances that are part of the WebL ogic cluster In the reference architecture we use Amazon EFS for shared storage The WebLogic product binaries common files and scripts the domain configuration files and logs are stored in Amazon EFS which includes the commons domains middleware and logs file systems This table describes each of these file systems ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 12 File System Description commons For common files such as installation files response files and scripts domains For WebLogic Domain files such as configuration runtime and temporary files middleware For binaries such as Java VM and Oracle WebLogic i nstallation logs For log files Amazon EFS has two throughput modes for your file system : Bursting Throughput and Provisioned Throughput With Bursting Throughput mode throughput on Amazon EFS scales as your file system grows With Provisioned Throughput mode you can instantly provision the throughput of your file system in MiB/s independent of the amount of data stored For better performance we recommend you select Provisioned Throughput mode while using Amazon EFS With Provisioned Throughput mode you can provision up to 1024 MiB/s of throughput for your file system You can change the file system throughput in Provisioned Throughput mode at any time after you create the file system If you are deploying your application in a region where Amazon EFS is not yet available t here are several third party products by vendors such a s NetApp and SoftNAS available on the AWS Marketplace that offer a shared storage solution on AWS Amazon EBS Volumes In this reference architecture we use Am azon EFS for shared storage You can also deploy Oracle WebLogic on AWS without using shared storage Instead you can use Amazon EBS volumes attached to your Amazon EC2 instances for storage Make sure to select the General Purpose (gp2) volume type for s toring the WebLogic product binaries common files and scripts the domain configuration files and logs GP2 volumes a re backed by solid state drives (SSDs) designed to offer single digit millisecond latencies and are suitable for use with Oracle WebLogic ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 13 Scalability When you use AWS you can scale your application easily because of the elastic nature of the cloud You can scale your application vertically and horizontally Vertical Scaling You can vertically scale or scale up your application simply by changing the EC2 instance type on which your WebLogic Managed Servers are 
deployed to a larger instance type and then increasing the WebLogic JVM heap size You can modify the Java heap size with the Xms (initial heap size ) and Xmx (maximum heap size ) parameters Ideally you should set both the initial heap size ( Xms) and the maximum heap size ( Xmx) to the same value to minimize garbage collections and optimize performance For example you can start with an r4large instance with 2 vCPUs and 15 GiB RAM and scale up all the way to an x1e32xlarge instance with 128 vCPUs and 3904 GiB RAM For the most updated list of Amazon EC2 instance types see the Amazon EC2 Instance Ty pes page on the AWS website After you select a new instance type you simply restart the instance for the changes to take effect Typically the resizing operation is completed in a few minutes the Amazon EBS volumes remain attached to the instances and no data migration is required Horizontal Scaling You can horizontally scale or scale out your application by adding more Managed Servers to your WebLogic cluster depending on the user traffic or on a particular schedule You l aunch new EC2 instance s to deploy and configure additional Managed Servers add them to the WebLogic cluster and register your instance s with the ELB You can automate this process with AWS Auto Scaling and WebLogic scripting For more information see the Auto Scaling your Oracle WebLogic Cluster section AWS Auto Scaling for scaling out your WebLogic cluster also requires scripting which can be an additional technical investment While we recommend that you use AWS Au to Scaling sometimes you might not have the time or the technical resources to implement it while migrating your WebLogic application to AWS A simpler alternative might be to use standby instances ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 14 Standby Instances To meet extra capacity requirements a dditional instances of the WebLogic Managed Servers are preinstalled and configured on EC2 instances These standby instances can be shut down until the extra capacity is required You do not incur compute charges when instances are shut down you incur only Amazon Elastic Block Store (Amazon EBS) storage charges These preinstalled standby instances provide you the flexibility to meet additional capacity when you need it ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 15 Auto Scaling your Oracle WebLogic Cluster You can use AWS Auto Scaling to horizontally scale your applications based on demand This helps you to maintain steady predictable performance at the lowest possible cost For example you can configure AWS Auto Scaling to automatically create and add more Managed Servers to your WebLogic cluster as the traffic increases and to stop and remove Managed Servers from the WebLogic cluster as the traffic decreases For more information about Auto Scaling see the Amazon EC2 Auto Scaling documentation This diagram shows how AWS Auto Scaling works with Oracle WebLogic In this example we use Amazon EFS for shared storage ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 16 To Auto Scale your WebLogic cluster on AWS you must complete these major steps 1 Install and c onfigure WebLogic – The first step is to configure Amazon EFS for shared storage install Oracle WebLogic and configure the WebLogic Domain and the WebLogic clus ter Amazon EFS is used to store the WebLogic product binaries common files and scripts the domain configuration files and logs 2 Configure AWS Auto Scaling – Next you have to configure AWS Auto Scaling to launch and 
terminate EC2 instances (WebLogic Machines) based on the application workload.

3. Configure WebLogic scaling scripts – Finally, you create WebLogic Scripting Tool (WLST) scripts. These scripts create and add, or remove, Managed Servers from the WebLogic cluster when AWS Auto Scaling launches or terminates EC2 instances in the Auto Scaling group.

Configure Oracle WebLogic

To configure Oracle WebLogic and set up shared storage, complete these high-level steps:

1. Create the commons, domains, middleware, and logs file systems on Amazon EFS, as described in the Shared Storage section.
2. Create an EC2 instance for deploying the WebLogic Administration Server and mount the EFS file systems. In the reference architecture, we created a directory structure on these file systems to store the WebLogic binaries, domain configurations, common scripts, and logs.
3. Install Oracle WebLogic. The ORACLE_HOME directory should be located on a shared folder (/middleware) on EFS.
4. Create the WebLogic domain. You can use the Basic WebLogic Server Domain Template (wls.jar in the /templates/wls directory) to create the domain.
5. Create a WebLogic cluster in the domain and set the cluster messaging mode to Unicast.

Configure AWS Auto Scaling

To configure AWS Auto Scaling to launch and terminate EC2 instances (WebLogic Machines) based on the application load, complete the following high-level steps. For more details on Auto Scaling, see the Amazon EC2 Auto Scaling documentation on the AWS website.

1. Create a launch configuration and an Auto Scaling group.
2. Create the scale-in and scale-out policies. For example, you can create a scaling policy to add an instance when CPU utilization is above 80% and to remove an instance when CPU utilization is below 60%.
3. If you are using in-memory session persistence, Oracle WebLogic replicates the session data to another Managed Server in the cluster. Ensure that the scale-in process terminates only one Managed Server at a time, so that you do not destroy the primary copy and the replica of a session at the same time.

For detailed step-by-step instructions on how to configure Auto Scaling, see the Amazon EC2 Auto Scaling documentation on the AWS website.

Configure WebLogic Scaling Scripts

Based on the traffic to your application, Auto Scaling can create and add new EC2 instances (scaling out) or remove existing EC2 instances (scaling in) from your Auto Scaling group. You must create the following scripts to automate the configuration of WebLogic in an auto scaled environment:

• EC2 configuration scripts – These scripts mount the EFS file systems, invoke the WLST scripts to configure and start the WebLogic Managed Server when the EC2 instance starts, and invoke the WLST scripts to stop the WebLogic Managed Server when the instance shuts down. You can pass this script with the EC2 user data. For detailed information, see the Amazon EC2 documentation on the AWS website.

• WebLogic Scripting Tool (WLST) scripts – WLST is a command-line scripting interface used to manage WebLogic Server instances and domains. These scripts create and add the Managed Server to your WebLogic cluster when Auto Scaling adds a new EC2 instance to the Auto Scaling group, and stop and remove the Managed Server from your WebLogic cluster when Auto Scaling removes an EC2 instance from the group. For more information, see the Oracle WLST documentation.
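As one illustration of what such a WLST script might look like, the following sketch (WLST scripts use Jython, so the syntax is Python) creates a Managed Server for a newly launched instance and joins it to an existing cluster. The Administration Server URL, cluster name, listen port, and inline credentials are hypothetical placeholders chosen for brevity; a production script would read credentials from a secure store, derive the server name and listen address from instance metadata, and include error handling.

```python
# WLST (Jython) sketch, invoked from the EC2 user-data/startup script, for example:
#   wlst.sh add_managed_server.py <serverName> <listenAddress>
import sys

server_name = sys.argv[1]        # e.g. 'ManagedServer-i-0abc1234' (derived from the instance ID)
listen_addr = sys.argv[2]        # private IP of the newly launched instance
admin_url   = 't3://admin.example.internal:7001'   # placeholder Administration Server URL

connect('weblogic', 'welcome1', admin_url)   # placeholder credentials; use a secure store in practice
edit()
startEdit()

cd('/')
create(server_name, 'Server')                            # create the Managed Server
cd('/Servers/' + server_name)
cmo.setListenAddress(listen_addr)
cmo.setListenPort(7003)                                   # assumed Managed Server listen port
cmo.setCluster(getMBean('/Clusters/WebLogicCluster'))     # join the existing (placeholder) cluster

save()
activate(block='true')
disconnect()
```

A companion script, run on instance shutdown, would do the reverse: shut the Managed Server down, remove it from the cluster, and delete its configuration so the domain does not accumulate stale server definitions.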
Monitoring your Infrastructure

After you migrate your Oracle WebLogic applications to AWS, you can continue to use the monitoring tools you are familiar with to monitor your Oracle WebLogic environment and the applications deployed on WebLogic. You can use Fusion Middleware Control, the Oracle WebLogic Server Administration Console, or the command line (using the WLST state command) to monitor your Oracle WebLogic infrastructure components, including WebLogic domains, Managed Servers, and clusters. You can also monitor the deployed Java applications and get information such as the state of your application, the number of active sessions, and response times. For more information about how to monitor Oracle WebLogic, see the Oracle WebLogic documentation.

You can also use Amazon CloudWatch to monitor AWS Cloud resources and the applications you run on AWS. Amazon CloudWatch enables you to monitor your AWS resources in near real time, including Amazon EC2 instances, Amazon EBS volumes, Amazon EFS, ELB load balancers, and Amazon RDS DB instances. Metrics such as CPU utilization, latency, and request counts are provided automatically for these AWS resources. You can also supply your own logs or custom application and system metrics, such as memory usage, transaction volumes, or error rates, which Amazon CloudWatch will also monitor. With Amazon CloudWatch alarms, you can set a threshold on a metric and trigger an action when that threshold is exceeded. For example, you can create an alarm that is triggered when the CPU utilization on an EC2 instance crosses a threshold, and configure a notification of the event to be sent through SMS or email. Real-time alarms for metrics and events enable you to minimize downtime and potential business impact.

If your application uses a database deployed on Amazon RDS, you can use the Enhanced Monitoring feature of Amazon RDS to monitor your database. Enhanced Monitoring gives you access to over 50 metrics, including CPU, memory, file system, and disk I/O. You can also view the processes running on the DB instance and their related metrics, including the percentage of CPU usage and memory usage.

AWS Security and Compliance

The AWS Cloud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today. Security on AWS is very similar to security in your on-premises data center, but without the costs and complexities involved in protecting facilities and hardware. AWS provides a secure global infrastructure, plus a range of features that you can use to help secure your systems and data in the cloud. To learn more about AWS security, see the AWS Security Center.

AWS Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud. AWS engages with external certifying bodies and independent auditors to provide customers with extensive information regarding the policies, processes, and controls established and operated by AWS. To learn more about AWS Compliance, see the AWS Compliance Center.

The AWS Security Model

The AWS infrastructure has been architected to provide an extremely scalable, highly reliable platform that enables you to deploy applications and data quickly and securely. Security in the cloud is different from security in your on-premises data centers. When you move computer systems and data to the cloud, security responsibilities become shared between you and your cloud service provider. In the AWS model, AWS is responsible for securing the underlying infrastructure that supports the cloud, and you are responsible for securing the workloads that you deploy in AWS. This shared security responsibility model can reduce your operational burden in many ways and gives you the flexibility you need to implement the most applicable security controls for your business functions in the AWS environment.
responsibilities become shared between you and your cloud service provider In the AWS cloud model AWS is responsible for securing the underlying infrastructure that supports the cloud and you are responsible for securing workloads that you deploy in AWS This shared security responsibility model can reduce your operational burden in many ways and gives you the flexibility you need to implement the most applicable security controls for you r business functions in the AWS environment ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 21 Figure 6: The AWS shared responsibility model When you deploy Oracle WebLogic applications on AWS we recommend that you take advantage of the various security features of AWS such as AWS Identity and Access Management monitoring and logging network security and data encryption AWS Identity and Access Management With AWS Identity and Access Management (IAM) you can centrally manage your users and their security credentials such as passwords access keys and permissions policies which control the AWS services and resources that users can access IAM supports multifactor authentication (MFA) for privileged accounts including options for hardware based authenticators and support for integration and federation with corporate directories to reduce administrative overhead and improve end user experience Monitoring and Logging AWS CloudTrail is a service that records AWS API calls for your account and delivers log files to you The recorded information in the log files includes the identity of the API caller the time of the API call the source IP address of the API caller the request parameters and the response elements returned by the AWS service This provides deep visibility into API calls including who what when and from where calls were made The AWS API call history produced by ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 22 CloudTrail enables security analysis resource change tracking and compliance auditing Network Security and Amazon Virtual Private Cloud In each Amazon Virtual Private Cloud (VPC) you create one or more subnets Each instance you launch in your VPC is connected to one subnet Traditional layer 2 security attacks including MAC spoofing and ARP spoofing are blocked You can configure network ACLs which are stateless traffic filters that apply to all inbound or outbound traffic from a subnet within your VPC These ACLs can contain ordered r ules to allow or deny traffic based on IP protocol by service port and by source and destination IP address Security groups are a complete firewall solution that enable filtering on both ingress and egress traffic from an instance Traffic can be restri cted by any IP protocol by service port as well as source and destination IP address (individual IP address or classless inter domain routing (CIDR) block) Data Encryption AWS offers you the ability to add a layer of security to your data at rest in the cloud by providing scalable and efficient encryption features Data encryption capabilities are available in AWS storage and database services such as Amazon EBS Amazon S3 Amazon Glacier Amazon RDS for Oracle Amazon RDS for SQL Server and Amazon Re dshift Flexible key management options allow you to choose whether to have AWS manage the encryption keys using the AWS Key Management Service o (AWS KMS) or to maintain complete control over your keys Dedicated hardware based cryptographic key storage options (AWS CloudHSM) are available to help you satisfy compliance requirements For more 
information see the Introduction to AWS Security and AWS Security Best Practices whitepapers ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 23 Oracle WebLogic on AWS Use Cases Oracle WebLogic customers use AWS for a variety of use cases including these environments: • Migration of existing Oracle WebLogic production environments • Implementation of new Oracle WebLogic production environments • Implementing disaster recovery environments • Running Oracle WebLogic development test demonstration proof of concept (POC) and t raining environments • Temporary environments for migrations and testing upgrades • Temporary environments for performance testing ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 24 Conclusion AWS can be an extremely cost effective secure scalable high perform ing and flexible option for deploying Oracle WebLogic applications By deploying Oracle WebLogic applications on the AWS Cloud you can reduce costs and simultaneously enable capabilities that might not be possible or cost effective if you deployed your application in an on premises data center Some of the benefits of deploying Oracle WebLogic on AWS include: • Low cost – Resources are billed by the hour and only for the duration they are used • Eliminate the need for large capital outlays – Replace large upfront expenses with low variable payments that only apply to what you use • High availability – Achieve high availability by deploying Oracle WebLogic in a Multi AZ configuration • Flexibility –Add compute capacity elastically to cope with demand • Testing – Add test environments use them for short durations and pay only for the duration they are used ArchivedAmazon Web Services – Oracle WebLogic 12c on AWS Page 25 Contributors The following individuals and organizations contributed to this document: Ashok Sundaram Solutions Architect Amazon Web Services Document Revisions Date Description December 2018 First publication
General
Machine_Learning_Foundations_Evolution_of_Machine_Learning_and_Artificial_Intelligence
ArchivedMachine Learning Foundations Evolution of Machine Learning and Artificial Intelligence February 2019 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: awsamazoncom/whitepapersArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents AWS’s current product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS’s prod ucts or services are provided “as is” without warranties representations or conditions of any kind whether express or implied AWS’s responsibilities and liabilities to its customers are controlled by AWS agreements and this document is not part of no r does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Evolution of Artificial Intelligence 1 Symbolic Artificial Intelligence 1 Rise of Machine Learning 5 AI has a New Foundation 6 AWS and Machine Learning 9 AWS Machine Learning Services for Builders 9 AWS Machine Learning Services for Custom ML Models 12 Aspiring Developers Framework 13 ML Engines and Frameworks 13 ML Model Training and Deployment Support 14 Conclusions 15 Contributors 15 Further Reading 16 Document Revisions 16 ArchivedAbstract Artificial Intelligence (AI) and Machine Learning (ML) are terms of interest to business people technicians and researchers around the world Most descriptions of the terms oversimplify their true relationship This paper provides a foundation for understanding artificial intelligence describes how AI is now based on a foundation o f machine learning and provides an overview of AWS machine learning services ArchivedAmazon Web Services Machine Learning Foundations Page 1 Introduction Most articles that discuss the relationship between artificial intelligence (AI) and machine learning (ML) focus on the fact that ML is a do main or area of study within AI Although that is true historically an even stronger relationship exists —that successful artificial intelligence applications are almost all implemented using a foundation of ML techniques Instead of a component machine l earning has become the basis of modern AI To support this theory we review how AI systems and applications worked in the first three decades versus how they work today We begin with an overview of AI’s original structure and approach describe the rise of machine learning as its own discipline show how ML provides the foundation for modern AI review how AWS supports customers using machine learning We conclude with observations about why AI and ML are not as easily distinguished as they might first ap pear Evolution of Artificial Intelligence Symbolic Artificial Intelligence Artificial Intelligence as a branch of computer science began in the 1950s Its two main goals were to 1) study human intelligence by modeling and simulating it on a computer and 2) make computers more useful by solving complex problems like humans do From its inception through the 1980s most AI systems were programmed by hand usually in functional declarative or other high level languages such as LISP or Prolog Several custom languages were creat ed for specific areas (eg STRIPS for planning ) Symbols within the languages represented concepts in the real world or abstract ideas and formed the 
basis of most knowledge representations Although AI practitioners used standard computer science techniques such as search algorithms graph data structures and grammars a significant amount of AI programming was heuristic —using rules of thumb —rather than algorithmic due to the complexity of the probl ems Part of the difficulty of producing AI solutions then was that to make a system successful all of the conditionals rules scenarios and exceptions needed to be added programmatically to the code ArchivedAmazon Web Services Machine Learning Foundations Page 2 Artificial Intelligence Domains Researchers were inte rested in general AI or creating machines that could function as a system in a way indistinguishable from humans but due to the complexity of it most focused on solving problems in one specific domain such as perception reasoning memory speech moti on and so on Major AI domains at this time are listed in the following table Table 1: Domains in Symbolic AI (1950s to 1980s) Domain Description Problem Solving Broad general domain for solving problems making decisions sati sfying constraints and other types of reasoning Subdomains included expert or knowledge based systems planning automatic programming game playing and automated deduction Problem solving was arguably the most successful domain of symbolic AI Machine Learning Automatically generating new facts concepts or truths by rote from experience or by taking advice Natural Language Understanding and generating written human languages (eg English or Japanese) by parsing sentences and converting them into a knowledge representation such as a semantic network and then returning results as properly constructed sentences easily understood by people Speech Recognition Converting sound waves into phonemes words and ultimately sentences t o pass off to Natural Language Understanding systems and also speech synthesis to convert text responses into natural sounding speech for the user Vision Converting pixels in an image into edges regions textures and geometrical objects in order to mak e sense of a scene and ultimately recognize what exists in the field of vision Robotics Planning and controlling actuators to move or manipulate objects in the physical world Artificial Intelligence Illustrated In the following diagram lower levels depict layers that provide the tools and foundation used to build solutions in each domain For example below the Primary Domains are a sampling of the many Inferencing Mechanisms and Knowledge Representations that were commonly used at the time ArchivedAmazon Web Services Machine Learning Foundations Page 3 Figure 1: Overview of Symbolic Artificial Intelligence The Sample K nowledge Representations stored knowledge and information to be reasoned on by the system Common categories of knowledge represent ations included structured (eg frames which can be compared to objects and semantic networks which are like knowledge graphs) and logic based (eg propositional and predicate logic modal logic and grammars) The advantage of these symbolic knowledge representations over other types of models is that they are transparent explainable composable and modifiable They support many types of inferencing or reasoning mechanisms which manipulate the knowledge representations to solve problems understand sentences and provide solutions in each domain The AI Language Styles and Infrastructure layers show some types of languages and infrastructure used to develop AI systems at this time Both tended to be specialized and not 
easily integrated with external data or enterprise systems A Question of Chess and Telephones A question asked at the time was “which is a harder problem to solve: answering the telephone or playing chess at a master level?” The answer is counter intuitive to most people Although even children can answer a telephone properly very few people play chess at a master level However for traditional AI chess is the perfect problem It is ArchivedAmazon Web Services Machine Learning Foundations Page 4 bounded has limited well understood moves and can be solved using heuristic search of the ga me’s state space Answering a telephone on the other hand is quite difficult Doing it properly requires multiple complex skills that are difficult for symbolic AI including speech recognition and synthesis natural language processing problem solving i ntelligent information retrieval planning and potentially taking complex actions Successes of Symbolic AI Generally considered to have disappointing results at least in light of the high expectations that were set symbolic AI did have several successes as well Most of the software deemed useful was turned into algorithms and data structures used in software development today Business rule engines that are in common use were derived from AI’s expert system inference engines and shells Other common com puting concepts credited to or developed in AI labs include timesharing rapid iterative development the mouse and Graphical User Interfaces (GUIs) The list below describes some of the strengths and limitations of this approach to artificial intelligence Table 2: Strengths and Limitations of Symbolic AI Strength Limitation Simulates high level human reasoning for many problems Systems tended not to learn or acquire new knowledge or capabilities autonomously depending instead on regular developer maintenance Problem Solving domain had several successes in areas such as expert systems planning and constrain propagation Most domains including machine learning natural language speech and vision did not produce signi ficant general results Can capture and work from heuristic knowledge rather than step bystep instructions Problem Solving domain specifically expert or knowledge based systems require articulated human expertise extracted and refined using knowledge engineering techniques Encodes specific known logic easily eg enforces compliance rules Systems tended to be brittle and unpredictable at the boundaries of their scope they didn’t know what they didn’t know Straightforward to review internal data structures heuristics and algorithms Built on isolated infrastructure with little integration to external data or systems Provides explanations for answers when requested Requires more context and common sense information to resolve many real world situations ArchivedAmazon Web Services Machine Learning Foundations Page 5 Strength Limitation Does not require significant amounts of data to create Many approaches were not distributed or easily scalable though there were hardware networking and software constraints to distribution as well Requires less compute resources to develop Difficult to create and maintain systems Many tools and algorithms were incorporated into mainstream system development As research money associated with symbolic AI disappe ared many researchers and practitioners turned their attention to different and pragmatic forms of information search and retrieval data mining and diverse forms of machine learning Rise of Machine Learning From the late 1980s to 
the 2000s several div erse approaches to machine learning were studied including neural networks biological and evolutionary techniques and mathematical modeling The most successful results early in that period were achieved by the statistical approach to machine learning Algorithms such as linear and logistic regression classification decision trees and kernel based methods (ie Support Vector Machines ) gained popularity Later deep learning proved to be a powerful way to structure and train neural networks to solve complex problems The basic approach to training them remained similar but there were several improvements driving deep learning’s success including: • Much larger networks with many more layers • Huge data sets with thousands to millions of training exampl es • Algorithmic improvements to neural network performance generalization capability and ability to distribute training across servers • Faster hardware (such as GPUs and Tensor Cores) to handle orders of magnitude more computations which are required to train the complex network structures using large data sets Deep learning is key to solving the complex problems that symbolic AI could not One factor in the success of deep learning is its ability to formulate identify and use features discovered on its own Instead of people trying to determine what it should look for the deep learning algorithms identified the most salient features automa tically ArchivedAmazon Web Services Machine Learning Foundations Page 6 Problems that were intractable for symbolic AI —such as vision natural language understanding speech recognition and complex motion and manipulation —are now being solved often with accuracy rates nearing or surpassing human capability Today the answer to the question of which is harder for machines —answering the telephone or playing chess at a master level —is becoming harder to answer Although there is important work yet to be done machine learning has made significant progress in enabling ma chines to function more like people in many areas including directed conversations with humans Machine learning has become a branch of computer science in its own right It is key to solving specific practical artificial intelligence problems AI has a New Foundation Artificial intelligence today no longer relies primarily on symbolic knowledge representations and programmed inferencing mechanisms Instead modern AI is built on a new foundation machine learning Whether it is the models or decision tr ees of conventional mathematics based machine learning or the neural network architectures of deep learning most artificial intelligence applications today across the AI domains are based on machine learning technology This new structure for artificial intelligence is depicted in the following diagram The structure of this diagram parallels the diagram of symbolic AI in order to show how the foundation and the nature of artificial intelligence systems have changed Although some of the domains in the to p layer of the diagram remain the same —Natural Language Speech Recognition and Vision —the others have changed Instead of the broad Problem Solving category seen in Figure 1 for symbolic AI there are two more focused categories for predictions and recomm endation systems which are the dominant forms of problem solving systems developed today And in addition to more traditional robotics the domain now includes autonomous vehicles to highlight recent projects in self driving cars and drones Finally since it is now the foundation of the 
AI has a New Foundation

Artificial intelligence today no longer relies primarily on symbolic knowledge representations and programmed inferencing mechanisms. Instead, modern AI is built on a new foundation: machine learning. Whether it is the models or decision trees of conventional mathematics-based machine learning or the neural network architectures of deep learning, most artificial intelligence applications today, across the AI domains, are based on machine learning technology.

This new structure for artificial intelligence is depicted in the following diagram. The structure of this diagram parallels the diagram of symbolic AI in order to show how the foundation and the nature of artificial intelligence systems have changed. Although some of the domains in the top layer of the diagram remain the same (Natural Language, Speech Recognition, and Vision), the others have changed. Instead of the broad Problem Solving category seen in Figure 1 for symbolic AI, there are two more focused categories for predictions and recommendation systems, which are the dominant forms of problem-solving systems developed today. And in addition to more traditional robotics, the domain now includes autonomous vehicles to highlight recent projects in self-driving cars and drones. Finally, since it is now the foundation of the AI domains, machine learning is no longer included in the top-level domains.

Figure 2: Machine Learning as a foundation for Artificial Intelligence

There are still many questions and challenges for machine learning. The following table provides some of the strengths and limitations of artificial intelligence based on a machine learning foundation.

Table 3: Strengths and Limitations of ML-Based AI

Strength: Easy to train new solutions, given data and tools.
Limitation: Experiencing hype; researchers and practitioners need to properly set expectations.

Strength: Large number of diverse algorithms to solve many types of problems.
Limitation: Requires large amounts of clean, potentially labeled data.

Strength: Solves problems in all AI domains, often approaching or exceeding human level of capability.
Limitation: Problems in data, such as staleness, incompleteness, or adversarial injection of bad data, can skew results.

Strength: No human expertise or complex knowledge engineering required; solutions are derived from examples.
Limitation: Some, especially statistically based, ML algorithms rely on manual feature engineering.

Strength: Deep learning extracts features automatically, which enables complex perception and understanding solutions.
Limitation: System logic is not programmed and must be learned. This can lead to more subjective results, such as competing levels of activation, where precise answers are needed (e.g., specific true or false answers for compliance or verification problems).

Strength: Trained ML models can be replicated and reused in ensembles or as components of other solutions.
Limitation: Selecting the best algorithm, network architecture, and hyperparameters is more art than science and requires iteration, though tools for hyperparameter optimization are now available.

Strength: Making predictions or producing results is often faster than traditional inferencing or algorithmic approaches.
Limitation: Training on complex problems with large data sets requires significant time and compute resources.

Strength: Algorithms for training ML models can be engineered to be distributed and one-pass, improving scalability and reducing training time.
Limitation: It is often difficult to explain how the model derived its results by looking at its structure and the results of its training.

Strength: Can be trained and deployed on scalable, high-performance infrastructure.
Limitation: Most algorithms solve problems in one step, so no chains of reasoning or partial results are available, though outputs can reflect numeric "confidence".

Strength: Deployed using common mechanisms like microservices / APIs for ease of integration with other systems.

An important takeaway from Table 2 and Table 3 is that they are somewhat complementary. ML-based AI can benefit from the strengths of symbolic AI. Some ML approaches, including automatically learning decision trees, already merge the two approaches effectively. Active research continues into other means of combining the strengths of both approaches, as well as many open questions. Given that today's AI is built on the new foundation of machine learning, which has long been the realm of researchers and data scientists, how can we best enable people from different backgrounds in diverse organizations to leverage it?
AWS and Machine Learning

AWS is committed to democratizing machine learning. Our goal is to make machine learning widely accessible to customers with different levels of training and experience, and to organizations across the board. AWS innovates rapidly, creating services and features for customers, prioritized by their needs. Machine Learning services are no exception. In the diagram below, you can see how the current AWS Machine Learning services map to the other AI diagrams.

Figure 3: AWS Machine Learning Services

AWS Machine Learning Services for Builders

The first layer shows AI Services, which are intended for builders creating specific solutions that require prediction, recommendation, natural language, speech, vision, or other capabilities. These intelligent services are created using machine learning, and especially deep learning models, but do not require the developer to have any knowledge of machine learning to use them. Instead, these capabilities come pre-trained, are accessible via API call, and provide customers the ability to add intelligence to their applications.

Amazon Forecast

Amazon Forecast is a fully managed service that delivers highly accurate forecasts and is based on the same technology used at Amazon.com. You provide historical data plus any additional data that you believe impacts your forecasts. Amazon Forecast examines the data, identifies what is meaningful, and produces a forecasting model.

Amazon Personalize

Amazon Personalize makes it easy for developers to create individualized product and content recommendations for customers using their applications. You provide an activity stream from your application, an inventory of items you want to recommend, and potential demographic information from your users. Amazon Personalize processes and examines the data, identifies what is meaningful, selects the right algorithms, and trains and optimizes a personalization model.

Amazon Lex

Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions. With Amazon Lex, the same deep learning technologies that power Amazon Alexa are now available to any developer, enabling you to quickly and easily build sophisticated, natural language conversational bots ("chatbots").

Amazon Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. Amazon Comprehend identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; and automatically organizes a collection of text files by topic.

Amazon Comprehend Medical

Amazon Comprehend Medical is a natural language processing service that extracts relevant medical information from unstructured text using advanced machine learning models. You can use the extracted medical information and their relationships to build or enhance applications.
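To illustrate how these pre-trained AI Services are consumed as simple API calls, the following minimal sketch (not part of the original paper) uses the AWS SDK for Python (boto3) to ask Amazon Comprehend for the sentiment of a piece of text. The region and the sample text are assumptions made for the sketch.

import boto3

# Calling a pre-trained AI Service; no ML expertise or model training required.
comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_sentiment(
    Text="The new checkout flow is fast and easy to use.",
    LanguageCode="en",
)

# The service returns a sentiment label plus numeric confidence scores.
print(response["Sentiment"], response["SentimentScore"])

The other AI Services described in this section follow the same pattern: supply your data in an API call and receive a prediction or analysis in the response.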
Amazon Translate

Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Neural machine translation is a form of language translation automation that uses deep learning models to deliver more accurate and more natural-sounding translation than traditional statistical and rule-based translation algorithms. Amazon Translate allows you to localize content, such as websites and applications, for international users, and to easily translate large volumes of text efficiently.

Amazon Polly

Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk and build entirely new categories of speech-enabled products. Amazon Polly is a Text-to-Speech service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice.

Amazon Transcribe

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.

Amazon Rekognition

Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Rekognition API, and the service can identify the objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition. You can detect, analyze, and compare faces for a wide variety of user verification, cataloging, people counting, and public safety use cases.

Amazon Textract

Amazon Textract automatically extracts text and data from scanned documents and forms, going beyond simple optical character recognition to identify the contents of fields in forms and information stored in tables.

AWS Machine Learning Services for Custom ML Models

The ML Services layer in Figure 3 provides more access to managed services and resources used by developers, data scientists, researchers, and other customers to create their own custom ML models. Custom ML models address tasks such as inferencing and prediction, recommender systems, and guiding autonomous vehicles.

Amazon SageMaker

Amazon SageMaker is a fully managed machine learning (ML) service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker Ground Truth helps build training data sets quickly and accurately, using an active learning model to label data, combining machine learning and human interaction to make the model progressively better. SageMaker provides fully managed and pre-built Jupyter notebooks to address common use cases. The service comes with multiple built-in, high-performance algorithms, and the AWS Marketplace for Machine Learning contains more than 100 additional pre-trained ML models and algorithms. You can also bring your own algorithms and frameworks that are built into a Docker container.

Amazon SageMaker includes built-in, fully managed Reinforcement Learning (RL) algorithms. RL is ideal for situations where there is no pre-labeled historical data but there is an optimal outcome. RL trains using rewards and penalties, which direct the model toward the desired behavior. SageMaker supports RL in multiple frameworks, including TensorFlow and MXNet, as well as custom-developed frameworks. SageMaker sets up and manages environments for training and provides hyperparameter optimization with Automatic Model Tuning to make the model as accurate as possible.

SageMaker Neo allows you to deploy the same trained model to multiple platforms. Using machine learning, Neo optimizes the performance and size of the model and deploys to edge devices containing the Neo runtime. AWS has released the code as the open source Neo-AI project on GitHub under the Apache Software License. SageMaker deployments run models spread across Availability Zones to deliver high performance and high availability.
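Once a custom model has been trained and deployed behind a SageMaker endpoint, applications request predictions through a simple API call. The following minimal sketch (not part of the original paper) uses boto3; the endpoint name and the CSV payload are hypothetical and stand in for whatever model you have deployed.

import boto3

# The SageMaker runtime client is used for inference against a deployed endpoint.
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

response = runtime.invoke_endpoint(
    EndpointName="my-demo-endpoint",   # hypothetical; created when a model is deployed
    ContentType="text/csv",
    Body="5.1,3.5,1.4,0.2",            # one row of illustrative feature values
)

# The prediction returned by the model behind the endpoint.
print(response["Body"].read().decode("utf-8"))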
Amazon EMR/EC2 with Spark/Spark ML

Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks, such as Apache Spark (including the Spark ML machine learning library), HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB. Spark and Spark ML can also be run on Amazon EC2 instances to pre-process data, engineer features, or run machine learning models.

Aspiring Developers Framework

In parallel with ML Services is the Aspiring Developers Framework layer. With a focus on teaching ML technology and techniques to users, this layer is not intended for production use at scale. Currently, the aspiring developers framework consists of two service offerings.

AWS DeepLens

AWS DeepLens helps put deep learning in the hands of developers with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand deep learning skills. DeepLens offers developers the opportunity to use neural networks to learn and make predictions through computer vision projects, tutorials, and real-world, hands-on exploration with a physical device.

AWS DeepRacer

AWS DeepRacer is a 1/18th scale race car that provides a way to get started with reinforcement learning (RL). AWS DeepRacer provides a means to experiment with and learn about RL by building models in Amazon SageMaker, testing in the simulator, and deploying an RL model into the car.

ML Engines and Frameworks

Below the ML Platform layer is the ML Engines and Frameworks layer. This layer provides direct, hands-on access to the most popular machine learning tools. In this layer are the AWS Deep Learning AMIs that equip you with the infrastructure and tools to accelerate deep learning in the cloud. The AMIs package together several important tools and frameworks and are pre-installed with Apache MXNet, TensorFlow, PyTorch, the Microsoft Cognitive Toolkit (CNTK), Caffe, Caffe2, Theano, Torch, Gluon, Chainer, and Keras to train sophisticated custom AI models. The Deep Learning AMIs let you create managed, auto-scaling clusters of GPUs for large-scale training, or run inference on trained models with compute-optimized or general-purpose CPU instances.

ML Model Training and Deployment Support

The Infrastructure & Serverless Environments layer provides the tools that support the training and deployment of machine learning models. Machine learning requires a broad set of powerful compute options, ranging from GPUs for compute-intensive deep learning, to FPGAs for specialized hardware acceleration, to high-memory instances for running inference.

Amazon Elastic Compute Cloud (Amazon EC2)

Amazon EC2 provides a wide selection of instance types optimized to fit machine learning use cases. Instance types comprise varying combinations of CPU, memory, storage, and
networking capacity and give you the flexibility to choose the appropriate mix of resources whether you are training models or running inference on trained models Amazon Elastic Inference Amazon Elastic Inference allows you to attach low cost GPU powered acceleration to Amazon EC2 and Amazon Sage Maker instances for making predictions with your model Rather than attaching a full GPU which is more than required for most models Elastic Inference can provide savings of up to 75% by allowing separate configuration of the right amount of acceleration for the specific model Amazon Elastic Container Service (Amazon ECS) Amazon ECS supports running and scaling containerized applications including trained machine learning models from Amazon SageMaker and containerized Spark ML Serverless Options Serverless options remove the burden of managing specific infrastructure and allow customers to focus on deploying the ML models and other logic necessary to run their systems Some of the serverless ML deployment options provided by AWS include Amazon SageMaker model deployment AWS Fargate for containers and AWS Lambda for serverless code deployment ArchivedAmazon Web Services Machine Learning Foundations Page 15 ML at the Ed ge AWS also provides an option for pushing ML models to the edge to run locally on connected devices using Amazon Sage Maker Neo and AWS IoT Greengra ss ML Inference This allows customers to use ML models that are built and trained in the cloud and deploy and run ML inference locally on connected devices Conclusions Many people use the terms AI and ML interchangeably On the surface this seems incorrect because historically machine learning is just a domain inside of AI and AI covers a much broader set of systems Today the algorithms and models of machine learning replace traditional symbolic inferencing knowledge representations and languages Training on large data sets has replaced hand coded algorithms and heuristic approaches Problems that seemed intractable using symbolic AI methods are modeled consistently with remarkable results using this approach Machine learning has i n fact become the foundation of most modern AI systems Therefore it actually makes more sense today than ever for the terms AI and ML to be used interchangeably AWS provides several machine learning offerings ranging from pre trained ready to use servi ces to the most popular tools and frameworks for creating custom ML models Customers across industries and with varying levels of experience can add ML capabilities to improve existing systems as well as create leading edge applications in areas that we re not previously accessible Contributors Contributors to this document include : • David Bailey Cloud Infrastructure Architect Amazon Web Services • Mark Roy Solutions Architect Amazon Web Services • Denis Batalov Tech Leader ML & AI Amazon Web Services ArchivedAmazon Web Services Machine Learning Foundations Page 16 Further Reading For additional information see: • AWS Whitepapers page • AWS Machine Learning page • AWS Machine Learning Training • AWS Documentation Document Revisions Date Description February 201 9 First publication
General
Security_at_Scale_Logging_in_AWS
ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 1 of 16 Security at Scale : Lo gging in AWS How AWS CloudTrail can hel p you achiev e compliance by logging API calls and changes to resources October 2015 This paper has been archived For the latest technical content refer to: https://docsawsamazoncom/wellarchitected/latest/securitypillar/ detectionhtmlArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 2 of 16 Table of Contents Abstract 3 Introduction 3 Control Access to Log Files 4 Obtain Alerts on Log File Creation and Misconfiguration 5 Receive Alerts for Log File 5 Creation and Misconfiguration 5 Manage Changes to AWS Resources and Log Files 6 Storage of Log Files 7 Generate Customized Reporting of Log Data 7 Generate Customized Reporting of Log Data 8 Conclusion 8 Additional Resources 9 Appendix: Compliance Program Index 10 ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 3 of 16 Abstract The logging and monitoring of API calls are key components in security and operational best practices as well as requirements for industry and regulatory compliance AWS CloudTrail is a web service that records API calls to supported AWS services in your AWS account and delivers a log file to your Amazon Simple Storage Service (Amazon S3) bucket AWS CloudTrail alleviates common challenges experienced in an onpremise environment and in addition to making it easier for you to demonstrate compliance with policies or regulatory standards the service makes it easier for you to enhance your security and operational processes This paper provides an overview of common compliance requirements related to logging and details how AWS CloudTrail features can help satisfy these requirements There is no additional charge for AWS CloudTrail aside from standard charges for S3 for log storage and SNS usage for optional notification Introduction Amazon Web Services (AWS) provides a wide variety of ondemand IT resources and services that you can launch and manage with pay asyougo pricing Recording the AWS API calls and associated changes in resource configuration is a critical component of IT governance security and compliance AWS CloudTrail provides a simple solution to record AWS API calls and resource change s that helps alleviate the burden of on premises infrastructure and storage challenges by helping you to build enhanced preventative and detective security controls for your AWS environment Onpremises logging solutions require installing agents setting up configuration files and centralized log servers and building and maintaining expensive highly durable data stores to store the data AWS CloudTrail eliminates this burdensome infrastructure setup and allows you to turn on logging in as little as two clicks and get increased visibility into all API calls in your AWS account CloudTrail continuously captures API calls from multiple servers into a highly available processing pipeline To turn on CloudTrail you simply signin to the AWS Management Console navigate to the CloudTrail console and click to enable logging Learn more about services and regions available for use with AWS CloudTrail on the AWS CloudTrail website This paper was developed by taking an inventory of logging requirements across common compliance frameworks (eg ISO 27001:2005 PCI DSS v20 FedRAMP etc) and combining those into generalized controls and logging domains You may leverage this paper for a variety of usecases such as security and operational 
best practices, compliance with internal policies, industry standards, legal regulations, and more. The paper is written generically to allow anyone to understand how AWS CloudTrail can enhance your existing logging and monitoring activities.

Control Access to Log Files

To maintain the integrity of your log data, it is important to carefully manage access around the generation and storage of your log files. The ability to view or modify your log data should be restricted to authorized users. A common log-related challenge for on-premises environments is the ability to demonstrate to regulators that access to log data is restricted to authorized users. This control can be time-consuming and complicated to demonstrate effectively because most on-premises environments do not have a single logging solution or consistent logging security across all systems. With AWS CloudTrail, access to Amazon S3 log files is centrally controlled in AWS, which allows you to easily control access to your log files and help demonstrate the integrity and confidentiality of your log data.

Control Access to Log Files

Common logging requirement: Controls exist to prevent unauthorized access to logs.
How AWS CloudTrail can help: AWS CloudTrail provides you the ability to restrict access to your log files. You can prevent and control access to make changes to your log file data by configuring your AWS Identity and Access Management (IAM) roles and Amazon S3 bucket policies to enforce read-only access to your log files. Learn more. Additionally, you can fortify your authentication and authorization controls by enabling AWS Multi-Factor Authentication (AWS MFA) on the Amazon S3 bucket(s) that store your AWS CloudTrail logs. Learn more.

Common logging requirement: Controls exist to ensure access to log records is role-based.
How AWS CloudTrail can help: AWS CloudTrail provides you the ability to control user access to your log files based on detailed, role-based provisioning. AWS Identity and Access Management (IAM) enables you to securely control access to AWS CloudTrail for your users, and using IAM roles and Amazon S3 bucket policies you can enforce role-based access to the S3 bucket that stores your AWS CloudTrail log files. Learn more.

Obtain Alerts on Log File Creation and Misconfiguration

Near-real-time alerts on misconfigurations of logs detailing API calls or resource changes are critically important to effective IT governance and adherence to internal and external compliance requirements. Even from an operational perspective, it is imperative that logging is configured properly to give you the ability to oversee the activities of your users and resources. However, the variability and breadth of logging infrastructure in on-premises environments has made it overwhelming to actively monitor and alert you when there are misconfigurations or changes to your logging configuration. Once you enable AWS CloudTrail for your account, the service will deliver log files to your S3 bucket. Optionally, CloudTrail will publish notifications for log file deliveries to an SNS topic so that you can take action upon delivery. These alerts include the Amazon S3 bucket log file address to allow you to quickly access object metadata about the event from the source log files. Moreover, your AWS Management Console will alert you if your log files are misconfigured and therefore logging is no longer taking place.
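As a concrete illustration of turning on a trail that delivers log files to Amazon S3 and publishes a notification to an SNS topic for each delivery, the following minimal sketch (not part of the original paper) uses the AWS SDK for Python (boto3). The trail, bucket, and topic names are hypothetical, and the S3 bucket and SNS topic must already have policies that allow the CloudTrail service to write and publish.

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Hypothetical names; the bucket and topic must already exist with policies
# permitting CloudTrail to write objects and publish messages.
cloudtrail.create_trail(
    Name="example-trail",
    S3BucketName="example-cloudtrail-logs",
    SnsTopicName="example-cloudtrail-notifications",
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)

# Log file delivery, and the per-delivery SNS notification, begins here.
cloudtrail.start_logging(Name="example-trail")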
Receive Alerts for Log File Creation and Misconfiguration

Common logging requirement: Provide alerts when logs are created or fail, and follow organization-defined actions in the event of a misconfiguration.
How AWS CloudTrail can help: AWS CloudTrail provides you immediate notification of problems with your logging configuration through your AWS Management Console. Learn more.

Common logging requirement: Alerts related to log misconfiguration will direct users to relevant logs for additional details (and will not divulge an unnecessary amount of detail).
How AWS CloudTrail can help: AWS CloudTrail records the Amazon S3 bucket log file address every time a new log file is written. AWS CloudTrail publishes notifications for log file creation so that customers can take near-real-time action when log files are created. The notification is delivered to your Amazon S3 bucket and is shown in the AWS Management Console. Optionally, Amazon SNS messages can be pushed to mobile devices or distributed services configured via API or the AWS Management Console. The SNS message for log file creation provides the log file address, which limits the information divulged to only the necessary amount while also enabling you to easily link to additional event details. Learn more.

Manage Changes to AWS Resources and Log Files

Understanding the changes made to your resources is a critical component of IT governance and security. Moreover, preventing changes and unauthorized access to this log data directly impacts the integrity of your change management processes and your ability to comply with internal, industry, and regulatory requirements around change management. A major challenge faced in on-premises environments is the ability to log resource changes or changes to logs, because there are only finite resources at your disposal to monitor what feels like an infinite amount of data. AWS CloudTrail allows you to track the changes that were made to an AWS resource, including creation, modification, and deletion. Additionally, by reviewing the log history of API calls, AWS CloudTrail helps you investigate an event to determine whether unauthorized or unexpected changes occurred by reviewing who initiated them, when they occurred, and where they originated. Optionally, CloudTrail will publish notifications to an SNS topic so that you can take action upon delivery of the new log file to your Amazon S3 bucket.

Manage Changes to IT Resources and Log Files

Common logging requirement: Provide a log of changes to system components (including creation and deletion of system-level objects).
How AWS CloudTrail can help: AWS CloudTrail produces log data on system change events to enable tracking of changes made to your AWS resources. AWS CloudTrail provides visibility into any changes made to your AWS resources, from creation to deletion, by logging changes made using API calls via the AWS Management Console, the AWS Command Line Interface (CLI), or the AWS Software Development Kits (SDKs). Learn more.

Common logging requirement: Controls exist to prevent modifications to logs of changes, or failures associated with logs.
How AWS CloudTrail can help: By default, API call log files are encrypted using S3 Server-Side Encryption (SSE) and placed into your S3 bucket. Modifications to log data can be controlled through the use of IAM and MFA to enforce read-only access to the Amazon S3 bucket that stores your AWS CloudTrail log files. Learn more.
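One way to make the read-only control concrete is an S3 bucket policy that lets the CloudTrail service deliver log files while denying object deletion to all other principals. The following minimal sketch (not part of the original paper) applies such a policy with boto3; the bucket name and account ID are hypothetical, and a production policy would typically also include the standard CloudTrail ACL-check statement and MFA conditions.

import json
import boto3

s3 = boto3.client("s3")
bucket = "example-cloudtrail-logs"   # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Allow the CloudTrail service to deliver log files
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/111122223333/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
        {   # Deny deletion of delivered log files to all principals
            "Sid": "DenyLogDeletion",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteObject",
            "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/*",
        },
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))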
Storage of Log Files

Industry standards and legal regulations may require that log files be stored for varying periods of time. For example, PCI DSS requires logs to be stored for one year, HIPAA requires that records be retained for at least six years, and other requirements mandate longer or variable storage periods depending on the data being logged. As such, managing the requirements for log file storage for different data on different systems can be an administrative and technological burden. Moreover, storing and archiving large volumes of log data in a persistent and secure way can be a challenge for many organizations. AWS CloudTrail is designed to seamlessly integrate with Amazon S3 and Amazon Glacier, allowing customization of S3 buckets and lifecycle rules to suit your storage needs. AWS CloudTrail provides you an indefinite expiration period on your logs, so you can customize the period of time you store your logs to meet your regulators' requirements.

Storage of Log Files

Common logging requirement: Logs are stored for at least one year; logs are stored for an organization-defined period of time; logs are stored in real time for resiliency.
How AWS CloudTrail can help: For ease of log file storage, you can configure AWS CloudTrail to aggregate your log files across all regions and/or across multiple accounts into a single S3 bucket. AWS CloudTrail provides the ability to customize your log storage period by configuring your desired expiration period(s) on log files written to your Amazon S3 bucket. You control the retention policies for your CloudTrail log files. You can retain log files for a time period of your choice, or indefinitely; by default, log files are stored indefinitely. You can also move your log file data to Amazon Glacier for additional cost savings associated with cold storage. Learn more. AWS CloudTrail also provides you with log file resiliency by leveraging Amazon S3, a highly durable storage infrastructure. Amazon S3's standard storage is designed for 99.999999999% durability and 99.99% availability of objects over a given year. Learn more.

Generate Customized Reporting of Log Data

From an operational and security perspective, API call logging provides the data and context required to analyze user behavior and understand certain events. API call and IT resource change logs can also be used to demonstrate that only authorized users have performed certain tasks in your environment, in alignment with compliance requirements. However, given the volume and variability associated with logs from different systems, it can be challenging in an on-premises environment to gain a clear understanding of the activities users have performed and the changes made to your IT resources. AWS CloudTrail produces data you can use to detect abnormal behavior, retrieve event activities associated with specific objects, or provide a simple audit trail for your account. You can evolve your current logging analytics by using the 25+ different fields in the event data that AWS CloudTrail provides to build queries and create customized reports focused on internal investigations, external compliance, and more. AWS CloudTrail enables you to monitor API calls for specific known undesired behaviors and raise alarms using your log management or security incident and event management (SIEM) solutions. The enriched data provided by AWS CloudTrail can accelerate your investigation time and decrease your incident response time. Additionally, data provided by AWS CloudTrail may enable you to
perform a deep er security analysis on API calls to identify suspicious behavior and latent patterns that don’t trigger immediate alarms but which may represent a ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 8 of 16 security issue Finally AWS CloudTrail works with an extensive range of partners with ready torun solutions for security analytics and alerting Learn more about our partner solutions on the AWS CloudTrail website Generate Customized Reporting of Log Data Common logging requirements How AWS CloudTrail can help you achieve compliance with requirements Log individual user access to resources by system accessed and actions taken “Individual user access” includes access by system administrators and system operators ; “Resour ces” includes audit trail logs AWS CloudTrail provides the ability to generate comprehensive and detailed API call reports by logging activities performed by all users who access your logged AWS resources including root IAM users federated users and any users or services performing activities on behalf of users using any access method Learn more Produce logs at an organization defined frequency AWS CloudTrail p rovides the ability to use log anal ysis tools to retrieve log file data at customized frequencie s by creating logs in near realtime and generally deliver ing the log data to your Amazon S3 bucket within 15 minutes of the API call You can use the log files as an input into industry leading log management and analysis solutions to perform analytics Learn more Provide a log of when logging activity was initiated AWS CloudTrail logs all API calls including enabling and disabling AWS Clou dTrail logging This allows you to track when CloudTrail itself was turned on or off Learn more Generate logs synched to a single internal system clock to provide consistent time stamp information AWS CloudTrail p roduces log data from a single internal system clock by generating event time stamps in Coordinated Universal Time (UTC) consistent with the ISO 8601 Basic Time and date format standard Learn more Provide logs that can show if inappropriate or unusual activity has occurred AWS CloudTrail enables you to monitor API calls by recording authorization failures in your AWS account allowing you to track attempted access to restricted resources or other unusual activity Learn more Provide logs with adequate event details AWS CloudTrail delivers API calls with detailed information such as type data and time location source/origin outcome (including exceptions faults and security event information) affected resource (data system etc) and associated user AWS CloudTrail can help you identify the user time of the event IP address of the user request parameters provided by the user re sponse elements returned by the service and optional error code and error message Learn more Conclusion You can run nearly anything on AWS that you would run on onpremise: websites applications databases mobile apps email campaigns distributed data analysis media storage and private networks The services AWS provides are designed to work together so that you can build complete solutions AWS CloudTrail provides a simple solution to log user activity that helps alleviate the burden of running a complex logging system Another benefit of migrating workloads to AWS is the ability to achieve a higher level of security at scale by utilizing the many governanceenabling features offered For the same reasons that delivering infrastructure in the cloud has benefits over 
onpremise delivery cloudbased governance offers a lower cost of entry easier operations and improved agility by providing more visibility security control and central ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 9 of 16 automation AWS CloudTrail is one of the services you can use to achieve a high level of governance of your IT resources using AWS Addition al Resources Below are links in response to commonly asked questions related to logging in AWS:  What can I do with AWS? Learn more  How can I get started with AWS? Learn more  How can I get started with AWS CloudTrail? Learn more  Does AWS CloudTrail have a list of FAQs? Learn more  How can I achieve compliance while using AWS? Learn more  How can I prepare for an audit while using AWS? Learn more This document is provided for informational purposes only It represents AWS’s curr ent product offerings as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 10 of 16 Appendix: Compliance Program Index The information in the whitepaper above was presented by logging requirement domains For your reference the logging requirements by common compliance frameworks are listed in the table below: AWS Compliance Program Compliance Requirement Payment Card Industry (PCI) Data Secur ity Standard (DSS) Level 1 AWS is Level 1 compliant under the PCI DSS You can run applications on our PCIcompliant technology infrastructure for storing processing and transmitting credit card information in the cloud Learn more PCI 52: Ensure that all anti virus mechanisms are current actively running and generating audit logs PCI 101: Establish a process for linking all access to system components (especially access done with adm inistrative privileges such as root) to each individual user PCI 102: Implement automated audit trails for all system components to reconstruct the following events: 1021: All individual accesses to cardholder data 1022: All actions taken by any in dividual with root or administrative privileges 1023: Access to all audit trails 1024: Invalid logical access attempts 1025: Use of identification and authentication mechanisms 1026: Initialization of the audit logs 1027: Creation and deletion of system level objects PCI 103: Record at least the following audit trail entries for all system components for each event: 1031: User identification 1032: Type of event 1033: Date and time 1034: Success or failure indication 1035: Origination of the event 1036: Identity or name of affected data system component or resource PCI 1042: Time data is protected PCI 105: Secure audit trails so they cannot be altered PCI 1051: Limit viewing of audit trails to those with a job related need PCI 1052: Protect audit trail files from unauthorized modifications PCI 1053: Promptly back up audit trail files to a centralized log server or media that 
is difficult to alter ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 11 of 16 AWS Compliance Program Compliance Requirement Payment Card Industry (PCI) Data Security Standard (DSS) Level 1 AWS is Level 1 compliant under the PCI DSS You can run applications on our PCIcompliant technology infrastructure for storing processing and transmitting credit card information in the cloud Learn more PCI 1054: Write logs for external facing technologies onto a log server on the internal LAN PCI 1055: Use file integrity monitoring or change detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert) PCI 106: Review logs for all system components at least daily Log reviews must include those servers that perform security functions like intrusion detection system (IDS) and aut hentication authorization and accounting protocol (AAA) servers (for example RADIUS) PCI 107: Retain audit trail history for at least one year with a minimum of three months immediately available for analysis (for example online archived or rest orable from back up) PCI 115: Deploy file integrity monitoring tools to alert personnel to unauthorized modification of critical system files configuration files or content files; and configure the software to perform critical file comparisons at lea st weekly PCI 122: Develop daily operational security procedures that are consistent with requirements in this specification (for example user account maintenance procedures and log review procedures) PCI A12d: Restrict each entity’s access and privileges to its own cardholder data environment only PCI A13: Ensure logging and audit trails are enabled and unique to each entity’s cardholder data environment and consistent with PCI DSS Requirement 10 PCI 114: Use intrusion detection system s and/or intrusion prevention systems to monitor all traffic at the perimeter of the cardholder data environment as well as at critical points inside of the cardholder data environment and alert personnel to suspected compromises Keep all intrusion dete ction and prevention engines baselines and signatures uptodate ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 12 of 16 AWS Compliance Program Compliance Requirement Payment Card Industry (PCI) Data Security Standard (DSS) Level 1 AWS is Level 1 compliant under the PCI DSS You can run applications on our PCIcompliant technology infrastructure for s toring processing and transmitting credit card information in the cloud Learn more PCI 115: Deploy file integrity monitoring tools to alert personnel to unauthorized modification of critic al system files configuration files or content files; and configure the software to perform critical file comparisons at least weekly Service Organization Controls 2 (SOC 2 ) The SOC 2 report is an attestation report that expands the evaluation of cont rols to the criteria set forth by the American Institute of Certified Public Accountants (AICPA) Trust Services Principles These principles define leading practice controls relevant to security availability processing integrity confidentiality and privacy applicable to service organizations such as AWS Learn more SOC 2 Security 32g: Procedures exist to restrict logical access to the defined system including but not limited to the fol lowing matters: Restriction of access to system con figurations superuser functionality master passwords powerful utilities and security 
devices (for example firewalls) SOC 2 Security 33: Procedures exist to restrict physical access to the defined system including but not limited to facilities backup media and other system components such as firewalls routers and servers SOC 2 Security 37: Procedures exist to identify report and act upon system security breaches and other incidents SOC 2 Availability 35f: Procedures exist to restrict logical access to the defined system including but not limited to the following matters: Restriction of access to system configurations superuser functionality master pass words powerful utilities and security devices (for example firewalls) SOC 2 Availability 36: Procedures exist to restrict physical access to the defined system including but not limited to facilities backup media and other sys tem components such as firewalls routers a nd servers ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 13 of 16 AWS Compliance Program Compliance Requirement SOC 2 Availability 310: Procedures exist to identify report and act upon system availability issues and related security breaches and other incidents Service Organization Controls 2 (SOC 2) The SOC 2 report is an attestation report that expands the evaluation of controls to the criteria set forth by the American Institute of Certified Public Accountants (AICPA) Trust Services Principles These principles define leading practice controls relevant to security availability processing integrity confidentiality and privacy applicable to service organizations such as AWS Learn more SOC 2 Confidentiality 33: The system procedures related to confidentiality of data processing a re consistent with the documented confidentiality policies SOC 2 Confidentiality 381: Procedures exist to restrict logical access to the system and the confidential information resources maintained in the system including but not limited to the foll owing matters: Restriction of access to system con figurations superuser functionality master passwords powerful utilities and security devices (for example firewalls) SOC 2 Confidentiality 313: Procedures exist to identify report and act upon s ystem confidentiality and security breaches and other incidents SOC 2 Confidentiality 42: There is a process to identify and address potential impairments to the entity’s ongoing ability to achieve its objectives in accordance with its system confident iality and related security policies SOC 2 Integrity 36g: Procedures exist to restrict logical access to the defined system including but not limited to the following matters: Restriction of access to system configurations superuser functionality master passwords powerful utilities and security devices (for example firewalls) SOC 2 Integrity 41: System processing integrity and security performance are periodically re viewed and compared with the defined system processing integrity and related security policies SOC 2 Integrity 42: There is a process to identify and ad dress potential impairments to the entity’s ongoing ability to achieve its objectives in accordance with its defined system processing integrity and related security policies ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 14 of 16 AWS Compliance Program Compliance Requirement International Organization for Standardization (ISO) 27001 ISO 27001 is a widely adopted global security standard that outlines the requirements for information security management systems It provides a systematic approach to managing company and 
custom er information that’s based on periodic risk assessments Learn more Due to copyright laws AWS cannot provide the requirement descriptions for ISO 27001 You may purchase a copy of the ISO 27001 standard online from various sources including ISOorg Federal Risk and Authorization Management Program (FedRAMP) FedRAMP is a government wide program that provides a standardized a pproach to security assessment authorization and continuous monitoring for cloud products and services up to the Moderate level Learn more FedRAMP NIST 800 53 Rev 3 AU 2: The organization: a Determines based on a risk assessment and mission/business needs that the information system must be capable of auditing the following events: [Assignment: organization defined list of auditable events]; b Coordinates the security audit function wit h other organizational entities requiring audit related information to enhance mutual support and to help guide the selection of auditable events; c Provides a rationale for why the list of auditable events are deemed to be adequate to support after thefact investigations of security incidents; and d Determines based on current threat information and ongoing assessment of risk that the following events are to be audited within the information system: [Assignment: organization defined subset of the auditable events defined in AU 2 a to be audited along with the frequency of (or situation requiring) auditing for each identified event] FedRAMP NIST 800 53 Rev 4 AU 2: The organization: a Determines that the information system must be capable of audit ing the following events: [Assignment: organization defined auditable events]; b Coordinates the security audit function with other organizational entities requiring audit related information to enhance mutual support and to help guide the selection of au ditable events; c Provides a rationale for why the auditable events are deemed to be adequate to support after the fact investigations of security incidents; and d Determines that the following events are to be audited within the information system: [Ass ignment: organization defined subset of the auditable events defined in AU 2 a to be audited along with the frequency of (or situation requiring) auditing for each identified event] FedRAMP NIST 800 53 Rev 3 AU 3: The information system produces audit records that contain sufficient information to at a minimum establish what type of event occurred when (date and time) the event occurred where the event occurred the source of the event the outcome (success or failure) of the event and the identity of any user/subject associated with the event ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 15 of 16 AWS Compliance Program Compliance Requirement FedRAMP NIST 800 53 Rev 4 AU 3: The information system produces audit records containing information that at a minimum establishes what type of event occurred when the event occurred where the event occ urred the source of the event the outcome of the event and the identity of any user or subject associated with the event FedRAMP NIST 800 53 Rev 3 AU 4: The organization allocates audit record storage capacity and configures auditing to reduce the likelihood of such capacity being exceeded FedRAMP NIST 800 53 Rev 4 AU 4: The organization allocates audit record storage capacity in accordance with [Assignment: organization defined audit record storage requirements] Federal Risk and Authorization Ma nagement Program (FedRAMP) FedRAMP is a government wide program that 
provides a standardized approach to security assessment authorization and continuous monitoring for cloud products and services up to the Moderate level Learn more FedRAMP NIST 800 53 Rev 3 AU 5: The information system: a Alerts designated organizational officials in the event of an audit processing failure; and b Takes the following additional actions: [Assignment: orga nization defined actions to be taken (eg shut down information system overwrite oldest audit records stop generating audit records)] FedRAMP NIST 800 53 Rev 4 AU 5: The information system: a Alerts [Assignment: organization defined personnel] in t he event of an audit processing failure; and b Takes the following additional actions: [Assignment: organization defined actions to be taken (eg shut down information system overwrite oldest audit records stop generating audit records)] FedRAMP NI ST 800 53 Rev 3 AU 6: The organization: a Reviews and analyzes information system audit records [Assignment: organization defined frequency] for indications of inappropriate or unusual activity and reports findings to designated organizational officials; and b Adjusts the level of audit review analysis and reporting within the information system when there is a change in risk to organizational operations organizational assets individuals other organizations or the Nation based on law enforcement in formation intelligence information or other credible sources of information FedRAMP NIST 800 53 Rev 3 AU 6: The organization: a Reviews and analyzes information system audit records [Assignment: organization defined frequency] for indications of [Ass ignment: organization defined inappropriate or unusual activity]; and b Reports findings to [Assignment: organization defined personnel or roles] FedRAMP NIST 800 53 Rev 3 AU 8: The information system uses internal system clocks to generate time stamps for audit records ArchivedAmazon Web Services – Security at Scale: Logging in AWS October 2015 Page 16 of 16 AWS Compliance Program Compliance Requirement FedRAMP NIST 800 53 Rev 4 AU 8: The information system: a Uses internal system clocks to generate time stamps for audit records; and b Generates time in the time stamps that can be mapped to Coordinated Universal Time (UTC) or Green wich Mean Time (GMT) and meets [Assignment: organization defined granularity of time measurement] FedRAMP NIST 800 53 Rev 3 AU 9: The information system protects audit information and audit tools from unauthorized access modification and deletion FedRAMP NIST 800 53 Rev 4 AU 9: The information system protects audit information and audit tools from unauthorized access modification and deletion Federal Risk and Authorization Management Program (FedRAMP) FedRAMP is a government wide program that pr ovides a standardized approach to security assessment authorization and continuous monitoring for cloud products and services up to the Moderate level Learn more FedRAMP NIST 800 53 Rev 3 AU 10: The information system protects against an individual fal sely denying having performed a particular action FedRAMP NIST 800 53 Rev 4 AU 10: The information system protects against an individual (or process acting on behalf of an individual) falsely denying having performed [Assignment: organization defined ac tions to be covered by non repudiation] FedRAMP NIST 800 53 Rev 3 AU 11: The organization retains audit records for [Assignment: organization defined time period consistent with records retention policy] to provide support for after thefact investigati ons of security incidents 
and to meet regulatory and organizational information retention requirements FedRAMP NIST 800 53 Rev 4 AU 11: The organization retains audit records for [Assignment: organization defined time period consistent with records rete ntion policy] to provide support for after thefact investigations of security incidents and to meet regulatory and organizational information retention requirements
General
Practicing_Continuous_Integration_and_Continuous_Delivery_on_AWS
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlPracticing Continuous Integration and Continuous Delivery on AWS Accelerating Software Delivery with DevOps First Publi shed June 1 2017 Updated October 27 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlContents The challenge of software delivery 1 What is continuous integration and continuous delivery/deployment? 2 Continuous integration 2 Continuous delivery and deployment 2 Continuous delivery is not continuous deployment 3 Benefits of continuous delivery 3 Implementing continuous integration and continuous del ivery 4 A pathway to continuous integration/continuous delivery 5 Teams 9 Testing stages in continuous integration and continuous delivery 10 Building the pipeline 13 Pipeline integration with AWS CodeBuild 22 Pipeline integration with Jenkins 23 Deployment methods 24 All at once (in place deployment) 26 Rolling deployment 26 Immutable and blue/green deplo yments 26 Database schema changes 27 Summary of best practices 28 Conclusion 29 Further reading 29 Contributors 30 Document revisions 30 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAbstract This paper explains the features and benefits of using continuous integration and continuous delivery (CI/CD) along with Amazon Web Services (AWS) tooling in your software development environment Continuous integration and continuous delivery are best practices and a vital part of a DevOps initiative This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ practicingcontinuousintegrationcontinuous delivery/welcomehtmlAmazon Web Services Practicing Continuous Int egration and Continuous Delivery on AWS 1 The challenge of software delivery Enterprises today face the challenge s of rapidly changing competitive landscapes evolving security requirements and performance scalability Enterprises must bridge the g ap between operations stability and rapid feature development Continuous integration and continuous delivery (CI/CD) are practice s that enable rapid software changes while maintaining system stability and security Amazon realized early on that the busine ss needs of delivering 
features for Amazon.com retail customers, Amazon subsidiaries, and Amazon Web Services (AWS) would require new and innovative ways of delivering software. At the scale of a company like Amazon, thousands of independent software teams must be able to work in parallel to deliver software quickly, securely, reliably, and with zero tolerance for outages. By learning how to deliver software at high velocity, Amazon and other forward-thinking organizations pioneered DevOps.

DevOps is a combination of cultural philosophies, practices, and tools that increase an organization's ability to deliver applications and services at high velocity. Using DevOps principles, organizations can evolve and improve products at a faster pace than organizations that use traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market. Some of these principles, such as two-pizza teams and microservices/service-oriented architecture (SOA), are out of the scope of this whitepaper.

This whitepaper discusses the CI/CD capability that Amazon has built and continuously improved. CI/CD is key to delivering software features rapidly and reliably. AWS now offers these CI/CD capabilities as a set of developer services: AWS CodeStar, AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, and AWS CodeArtifact. Developers and IT operations professionals practicing DevOps can use these services to rapidly, safely, and securely deliver software. Together, they help you securely store and apply version control to your application's source code. You can use AWS CodeStar to rapidly orchestrate an end-to-end software release workflow using these services. For an existing environment, CodePipeline has the flexibility to integrate each service independently with your existing tools. These are highly available, easily integrated services that can be accessed through the AWS Management Console, AWS application programming interfaces (APIs), and AWS software development kits (SDKs) like any other AWS service.
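As a small illustration of that programmatic access, the following sketch (not part of the original whitepaper) uses the AWS SDK for Python (boto3) to inspect the state of a release pipeline; the pipeline name is hypothetical.

import boto3

codepipeline = boto3.client("codepipeline", region_name="us-east-1")

# Hypothetical pipeline name; the same information is visible in the console.
state = codepipeline.get_pipeline_state(name="example-release-pipeline")

# Print the latest status of each stage in the pipeline.
for stage in state["stageStates"]:
    status = stage.get("latestExecution", {}).get("status", "Unknown")
    print(f'{stage["stageName"]}: {status}')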
This section discusses the practices of continuous integration and continuous delivery and explains the difference between continuous delivery and continuous deployment.

Continuous integration
Continuous integration (CI) is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. CI most often refers to the build or integration stage of the software release process and requires both an automation component (for example, a CI or build service) and a cultural component (for example, learning to integrate frequently). The key goals of CI are to find and address bugs more quickly, improve software quality, and reduce the time it takes to validate and release new software updates.
Continuous integration focuses on smaller commits and smaller code changes to integrate. A developer commits code at regular intervals, at minimum once a day. The developer pulls code from the code repository to ensure the code on the local host is merged before pushing to the build server. At this stage, the build server runs the various tests and either accepts or rejects the code commit.
The basic challenges of implementing CI include more frequent commits to the common codebase, maintaining a single source code repository, automating builds, and automating testing. Additional challenges include testing in environments similar to production, providing visibility of the process to the team, and allowing developers to easily obtain any version of the application.

Continuous delivery and deployment
Continuous delivery (CD) is a software development practice where code changes are automatically built, tested, and prepared for production release. It expands on continuous integration by deploying all code changes to a testing environment, a production environment, or both after the build stage has been completed. Continuous delivery can be fully automated with a workflow process, or partially automated with manual steps at critical points. When continuous delivery is properly implemented, developers always have a deployment-ready build artifact that has passed through a standardized test process.
With continuous deployment, revisions are deployed to a production environment automatically without explicit approval from a developer, making the entire software release process automated. This, in turn, allows for a continuous customer feedback loop early in the product lifecycle.

Continuous delivery is not continuous deployment
One misconception about continuous delivery is that it means every change committed is applied to production immediately after passing automated tests. However, the point of continuous delivery is not to apply every change to production immediately, but to ensure that every change is ready to go to production. Before deploying a change to production, you can implement a decision process to ensure that the production deployment is authorized and audited. This decision can be made by a person and then executed by the tooling. Using continuous delivery, the decision to go live becomes a business decision, not a technical one. The technical validation happens on every commit.
Rolling out a change to production is not a disruptive event.
Deployment doesn't require the technical team to stop working on the next set of changes, and it doesn't need a project plan, handover documentation, or a maintenance window. Deployment becomes a repeatable process that has been carried out and proven multiple times in testing environments.

Benefits of continuous delivery
CD provides numerous benefits for your software development team, including automating the process, improving developer productivity, improving code quality, and delivering updates to your customers faster.
Automate the software release process
CD provides a method for your team to check in code that is automatically built, tested, and prepared for release to production, so that your software delivery is efficient, resilient, rapid, and secure.
Improve developer productivity
CD practices help your team's productivity by freeing developers from manual tasks, untangling complex dependencies, and returning focus to delivering new features in software. Instead of integrating their code with other parts of the business and spending cycles on how to deploy this code to a platform, developers can focus on coding logic that delivers the features you need.
Improve code quality
CD can help you discover and address bugs early in the delivery process, before they grow into larger problems later. Your team can easily perform additional types of code tests because the entire process has been automated. With the discipline of more testing more frequently, teams can iterate faster with immediate feedback on the impact of changes. This enables teams to drive quality code with a high assurance of stability and security. Developers will know through immediate feedback whether the new code works and whether any breaking changes or bugs were introduced. Mistakes caught early in the development process are the easiest to fix.
Deliver updates faster
CD helps your team deliver updates to customers quickly and frequently. When CI/CD is implemented, the velocity of the entire team, including the release of features and bug fixes, is increased. Enterprises can respond faster to market changes, security challenges, customer needs, and cost pressures. For example, if a new security feature is required, your team can implement CI/CD with automated testing to introduce the fix quickly and reliably to production systems with high confidence. What used to take weeks and months can now be done in days or even hours.

Implementing continuous integration and continuous delivery
This section discusses the ways in which you can begin to implement a CI/CD model in your organization. This whitepaper doesn't discuss how an organization with a mature DevOps and cloud transformation model builds and uses a CI/CD pipeline. To help you on your DevOps journey, AWS has a number of certified DevOps Partners who can provide resources and tooling. For more information on preparing for a move to the AWS Cloud, refer to the AWS whitepaper Building a Cloud Operating Model.
A pathway to continuous integration/continuous delivery
CI/CD can be pictured as a pipeline (refer to the following figure), where new code is submitted on one end, tested over a series of stages (source, build, staging, and production), and then published as production-ready code. If your organization is new to CI/CD, it can approach this pipeline in an iterative fashion. This means that you should start small and iterate at each stage so that you can understand and develop your code in a way that will help your organization grow.
Figure: CI/CD pipeline
Each stage of the CI/CD pipeline is structured as a logical unit in the delivery process. In addition, each stage acts as a gate that vets a certain aspect of the code. As the code progresses through the pipeline, the assumption is that the quality of the code is higher in the later stages because more aspects of it continue to be verified. Problems uncovered in an early stage stop the code from progressing through the pipeline. Results from the tests are immediately sent to the team, and all further builds and releases are stopped if the software does not pass the stage.
These stages are suggestions. You can adapt the stages based on your business needs. Some stages can be repeated for multiple types of testing, security, and performance. Depending on the complexity of your project and the structure of your teams, some stages can be repeated several times at different levels. For example, the end product of one team can become a dependency in the project of the next team. This means that the first team's end product is subsequently staged as an artifact in the next team's project.
The presence of a CI/CD pipeline will have a large impact on maturing the capabilities of your organization. The organization should start with small steps and not try to build a fully mature pipeline, with multiple environments, many testing phases, and automation in all stages, at the start. Keep in mind that even organizations that have highly mature CI/CD environments still need to continuously improve their pipelines.
Building a CI/CD-enabled organization is a journey, and there are many destinations along the way. The next section discusses a possible pathway that your organization could take, starting with continuous integration and moving through the levels of continuous delivery.

Continuous integration
Figure: Continuous integration: source and build
The first phase in the CI/CD journey is to develop maturity in continuous integration. You should make sure that all of the developers regularly commit their code to a central repository (such as one hosted in CodeCommit or GitHub) and merge all changes to a release branch for the application. No developer should be holding code in isolation. If a feature branch is needed for a certain period of time, it should be kept up to date by merging from upstream as often as possible. Frequent commits and merges with complete units of work are recommended for the team to develop discipline, and are encouraged by the process. A developer who merges code early and often will likely have fewer integration issues down the road.
You should also encourage developers to create unit tests as early as possible for their applications, and to run these tests before pushing the code to the central repository. Errors caught early in the software development process are the cheapest and easiest to fix.
When the code is pushed to a branch in a source code repository, a workflow engine monitoring that branch will send a command to a builder tool to build the code and run the unit tests in a controlled environment. The build process should be sized appropriately to handle all activities, including pushes and tests, that might happen during the commit stage, for fast feedback. Other quality checks, such as unit test coverage, style checks, and static analysis, can happen at this stage as well. Finally, the builder tool creates one or more binary builds and other artifacts, like images, stylesheets, and documents, for the application.

Continuous delivery: staging
Figure: Continuous delivery: creating a staging environment
Continuous delivery (CD) is the next phase and entails deploying the application code in a staging environment, which is a replica of the production stack, and running more functional tests. The staging environment could be a static environment premade for testing, or you could provision and configure a dynamic environment with committed infrastructure and configuration code for testing and deploying the application code.

Continuous delivery: production
Figure: Continuous delivery: creating a production environment
In the deployment/delivery pipeline sequence, after the staging environment comes the production environment, which is also built using infrastructure as code (IaC).

Continuous deployment
Figure: Continuous deployment
The final phase in the CI/CD deployment pipeline is continuous deployment, which may include full automation of the entire software release process, including deployment to the production environment. In a fully mature CI/CD environment, the path to the production environment is fully automated, which allows code to be deployed with high confidence.

Maturity and beyond
As your organization matures, it will continue to develop the CI/CD model to include more of the following improvements:
• More staging environments for specific performance, compliance, security, and user interface (UI) tests
• Unit tests of infrastructure and configuration code, along with the application code
• Integration with other systems and processes, such as code review, issue tracking, and event notification
• Integration with database schema migration (if applicable)
• Additional steps for auditing and business approval
Even the most mature organizations that have complex, multi-environment CI/CD pipelines continue to look for improvements. DevOps is a journey, not a destination. Feedback about the pipeline is continuously collected, and improvements in speed, scale, security, and reliability are achieved as a collaboration between the different parts of the development teams.
Teams
AWS recommends organizing three developer teams for implementing a CI/CD environment: an application team, an infrastructure team, and a tools team (refer to the following figure). This organization represents a set of best practices that have been developed and applied in fast-moving startups, large enterprise organizations, and in Amazon itself. The teams should be no larger than groups that two pizzas can feed, or about 10 to 12 people. This follows the communication rule that meaningful conversations hit limits as group sizes increase and lines of communication multiply.
Figure: Application, infrastructure, and tools teams
Application team
The application team creates the application. Application developers own the backlog, stories, and unit tests, and they develop features based on a specified application target. This team's organizational goal is to minimize the time these developers spend on non-core application tasks. In addition to having functional programming skills in the application language, the application team should have platform skills and an understanding of system configuration. This will enable them to focus solely on developing features and hardening the application.
Infrastructure team
The infrastructure team writes the code that both creates and configures the infrastructure needed to run the application. This team might use native AWS tools, such as AWS CloudFormation, or generic tools, such as Chef, Puppet, or Ansible. The infrastructure team is responsible for specifying what resources are needed, and it works closely with the application team. The infrastructure team might consist of only one or two people for a small application. The team should have skills in infrastructure provisioning methods, such as AWS CloudFormation or HashiCorp Terraform. The team should also develop configuration automation skills with tools such as Chef, Ansible, Puppet, or Salt.
Tools team
The tools team builds and manages the CI/CD pipeline. They are responsible for the infrastructure and tools that make up the pipeline. They are not part of the two-pizza team; however, they create a tool that is used by the application and infrastructure teams in the organization. The organization needs to continuously mature its tools team so that the tools team stays one step ahead of the maturing application and infrastructure teams. The tools team must be skilled in building and integrating all parts of the CI/CD pipeline. This includes building source control repositories, workflow engines, build environments, testing frameworks, and artifact repositories. This team may choose to implement software such as AWS CodeStar, AWS CodePipeline, AWS CodeCommit, AWS CodeDeploy, AWS CodeBuild, and AWS CodeArtifact, along with Jenkins, GitHub, Artifactory, TeamCity, and other similar tools. Some organizations might call this a DevOps team, but AWS discourages this and instead encourages thinking of DevOps as the sum of the people, processes, and tools in software delivery.

Testing stages in continuous integration and continuous delivery
The three CI/CD teams should incorporate testing into the software development lifecycle at the different stages of the CI/CD pipeline. Overall, testing should start as early as possible.
The following testing pyramid is a concept provided by Mike Cohn in the book Succeeding with Agile. It shows the various software tests in relation to their cost and the speed at which they run.
Figure: CI/CD testing pyramid
Unit tests are at the bottom of the pyramid. They are both the fastest to run and the least expensive. Therefore, unit tests should make up the bulk of your testing strategy. A good rule of thumb is about 70 percent. Unit tests should have near-complete code coverage, because bugs caught in this phase can be fixed quickly and cheaply.
Service, component, and integration tests are above unit tests on the pyramid. These tests require detailed environments, and therefore are more costly in infrastructure requirements and slower to run. Performance and compliance tests are the next level. They require production-quality environments and are more expensive yet. UI and user acceptance tests are at the top of the pyramid and require production-quality environments as well.
All of these tests are part of a complete strategy to assure high-quality software. However, for speed of development, the emphasis is on the number of tests and the coverage in the bottom half of the pyramid.
The following sections discuss the CI/CD stages.
Setting up the source
At the beginning of the project, it's essential to set up a source where you can store your raw code and your configuration and schema changes. In the source stage, choose a source code repository, such as one hosted in GitHub or AWS CodeCommit.
Setting up and running builds
Build automation is essential to the CI process. When setting up build automation, the first task is to choose the right build tool. There are many build tools, such as:
• Ant, Maven, and Gradle for Java
• Make for C/C++
• Grunt for JavaScript
• Rake for Ruby
The build tool that will work best for you depends on the programming language of your project and the skill set of your team. After you choose the build tool, all the dependencies need to be clearly defined in the build scripts, along with the build steps. It's also a best practice to version the final build artifacts, which makes it easier to deploy and to keep track of issues.
Building
In the build stage, the build tools will take as input any change to the source code repository, build the software, and run the following types of tests:
Unit testing – Tests a specific section of code to ensure the code does what it is expected to do. The unit testing is performed by software developers during the development phase. At this stage, static code analysis, data flow analysis, code coverage, and other software verification processes can be applied.
Static code analysis – This test is performed without actually executing the application, after the build and unit testing. This analysis can help to find coding errors and security holes, and it also can ensure conformance to coding guidelines.
Staging
In the staging phase, full environments are created that mirror the eventual production environment. The following tests are performed:
Integration testing – Verifies the interfaces between components against the software design. Integration testing is an iterative process and facilitates building robust interfaces and system integrity.
Component testing – Tests message passing between various components and their outcomes. A key goal of this testing could be idempotency in component testing. Tests can include extremely large data volumes, or edge situations and abnormal inputs.
System testing – Tests the system end-to-end and verifies if the software satisfies the business requirement. This might include testing the user interface (UI), API, backend logic, and end state.
Performance testing – Determines the responsiveness and stability of a system as it performs under a particular workload. Performance testing also is used to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage. Types of performance tests might include load tests, stress tests, and spike tests. Performance tests are used for benchmarking against predefined criteria.
Compliance testing – Checks whether the code change complies with the requirements of a nonfunctional specification and/or regulations. It determines if you are implementing and meeting the defined standards.
User acceptance testing – Validates the end-to-end business flow. This testing is executed by an end user in a staging environment and confirms whether the system meets the requirements of the requirement specification. Typically, customers employ alpha and beta testing methodologies at this stage.
Production
Finally, after passing the previous tests, the staging phase is repeated in a production environment. In this phase, a final canary test can be completed by deploying the new code only on a small subset of servers, or even one server or one AWS Region, before deploying code to the entire production environment. Specifics on how to safely deploy to production are covered in the Deployment methods section.
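A canary phase like this depends on a fast, automated check of the small slice of production that received the new code. The following is a minimal sketch of such a post-deployment smoke test in Python; the health endpoint URL, expected version string, and probe thresholds are hypothetical placeholders rather than values from this paper.

```python
import sys
import time
import urllib.request

CANARY_URL = "https://canary.example.com/health"  # hypothetical canary endpoint
EXPECTED_VERSION = "1.4.2"                        # version string the new build reports
ATTEMPTS = 5                                      # number of probes before deciding
PAUSE_SECONDS = 10                                # wait between probes


def probe(url: str) -> bool:
    """Return True if the endpoint responds with HTTP 200 and the expected version."""
    with urllib.request.urlopen(url, timeout=5) as response:
        body = response.read().decode("utf-8")
        return response.status == 200 and EXPECTED_VERSION in body


def main() -> int:
    failures = 0
    for attempt in range(1, ATTEMPTS + 1):
        try:
            healthy = probe(CANARY_URL)
        except Exception as error:  # network errors count as failed probes
            print(f"attempt {attempt}: error {error}")
            healthy = False
        if not healthy:
            failures += 1
        if attempt < ATTEMPTS:
            time.sleep(PAUSE_SECONDS)

    # Fail the pipeline stage (non-zero exit) if most probes failed.
    if failures > ATTEMPTS // 2:
        print(f"canary unhealthy: {failures}/{ATTEMPTS} probes failed")
        return 1
    print("canary healthy")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A script like this can run as a test action in the pipeline; a non-zero exit code stops promotion of the release to the rest of the fleet.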
The next section discusses building the pipeline to incorporate these stages and tests.

Building the pipeline
This section discusses building the pipeline. Start by establishing a pipeline with just the components needed for CI, and then transition later to a continuous delivery pipeline with more components and stages. This section also discusses how you can use AWS Lambda functions and manual approvals, and, for large projects, how to plan for multiple teams, branches, and AWS Regions.
Starting with a minimum viable pipeline for continuous integration
Your organization's journey toward continuous delivery begins with a minimum viable pipeline (MVP). As discussed in Implementing continuous integration and continuous delivery, teams can start with a very simple process, such as implementing a pipeline that performs a code style check or a single unit test, without deployment.
A key component is a continuous delivery orchestration tool. To help you build this pipeline, Amazon developed AWS CodeStar. AWS CodeStar uses AWS CodePipeline, AWS CodeBuild, AWS CodeCommit, and AWS CodeDeploy with an integrated setup process, tools, templates, and dashboard. AWS CodeStar provides everything you need to quickly develop, build, and deploy applications on AWS. This allows you to start releasing code faster. Customers who are already familiar with the AWS Management Console and seek a higher level of control can manually configure their developer tools of choice and can provision individual AWS services as needed.
Figure: AWS CodeStar setup page
AWS CodePipeline is a CI/CD service that can be used through AWS CodeStar or through the AWS Management Console for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. This enables you to rapidly and reliably deliver features and updates. You can easily build out an end-to-end solution by using our pre-built plugins for popular third-party services like GitHub, or by integrating your own custom plugins into any stage of your release process. With AWS CodePipeline, you only pay for what you use. There are no upfront fees or long-term commitments.
The steps of AWS CodeStar and AWS CodePipeline map directly to the source, build, staging, and production CI/CD stages. While continuous delivery is desirable, you could start out with a simple two-step pipeline that checks the source repository and performs a build action:
Figure: AWS CodeStar dashboard
Figure: AWS CodePipeline: source and build stages
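The same two-step pipeline can also be defined programmatically. The following is a minimal sketch using the AWS SDK for Python (Boto3); the pipeline name, service role ARN, artifact bucket, CodeCommit repository, and CodeBuild project are hypothetical placeholders, and the identical structure can be expressed in the console or in AWS CloudFormation instead.

```python
import boto3

codepipeline = boto3.client("codepipeline")

# All names and ARNs below are hypothetical placeholders.
pipeline_definition = {
    "name": "sample-two-stage-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/sample-codepipeline-role",
    "artifactStore": {"type": "S3", "location": "sample-artifact-bucket"},
    "stages": [
        {
            "name": "Source",
            "actions": [
                {
                    "name": "CheckoutSource",
                    "actionTypeId": {
                        "category": "Source",
                        "owner": "AWS",
                        "provider": "CodeCommit",
                        "version": "1",
                    },
                    "configuration": {
                        "RepositoryName": "sample-app",
                        "BranchName": "main",
                    },
                    "outputArtifacts": [{"name": "SourceOutput"}],
                    "runOrder": 1,
                }
            ],
        },
        {
            "name": "Build",
            "actions": [
                {
                    "name": "BuildAndUnitTest",
                    "actionTypeId": {
                        "category": "Build",
                        "owner": "AWS",
                        "provider": "CodeBuild",
                        "version": "1",
                    },
                    "configuration": {"ProjectName": "sample-app-build"},
                    "inputArtifacts": [{"name": "SourceOutput"}],
                    "outputArtifacts": [{"name": "BuildOutput"}],
                    "runOrder": 1,
                }
            ],
        },
    ],
}

if __name__ == "__main__":
    response = codepipeline.create_pipeline(pipeline=pipeline_definition)
    print("created pipeline:", response["pipeline"]["name"])
```

Later stages (staging, production, and approval actions) can be appended to the stages list as the pipeline matures.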
For AWS CodePipeline, the source stage can accept inputs from GitHub, AWS CodeCommit, and Amazon Simple Storage Service (Amazon S3). Automating the build process is a critical first step for implementing continuous delivery and moving toward continuous deployment. Eliminating human involvement in producing build artifacts removes the burden from your team, minimizes errors introduced by manual packaging, and allows you to start packaging consumable artifacts more often.
AWS CodePipeline works seamlessly with AWS CodeBuild, a fully managed build service, to make it easier to set up a build step within your pipeline that packages your code and runs unit tests. With AWS CodeBuild, you don't need to provision, manage, or scale your own build servers. AWS CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. AWS CodePipeline also integrates with build servers such as Jenkins, Solano CI, and TeamCity.
For example, in the following build stage, three actions (unit testing, code style checks, and code metrics collection) run in parallel. Using AWS CodeBuild, these steps can be added as new projects without any further effort in building or installing build servers to handle the load.
Figure: CodePipeline: build functionality
The source and build stages shown in the figure AWS CodePipeline: source and build stages, along with supporting processes and automation, support your team's transition toward continuous integration. At this level of maturity, developers need to regularly pay attention to build and test results. They need to grow and maintain a healthy unit test base as well. This, in turn, bolsters the entire team's confidence in the CI/CD pipeline and furthers its adoption.
Figure: AWS CodePipeline stages
Continuous delivery pipeline
After the continuous integration pipeline has been implemented and supporting processes have been established, your teams can start transitioning toward the continuous delivery pipeline. This transition requires teams to automate both building and deploying applications. A continuous delivery pipeline is characterized by the presence of staging and production steps, where the production step is performed after a manual approval.
In the same manner the continuous integration pipeline was built, your teams can gradually start building a continuous delivery pipeline by writing their deployment scripts. Depending on the needs of an application, some of the deployment steps can be abstracted by existing AWS services. For example, AWS CodePipeline directly integrates with AWS CodeDeploy, a service that automates code deployments to Amazon EC2 instances and instances running on premises; with AWS OpsWorks, a configuration management service that helps you operate applications using Chef; and with AWS Elastic Beanstalk, a service for deploying and scaling web applications and services. AWS has detailed documentation on how to implement and integrate AWS CodeDeploy with your infrastructure and pipeline.
After your team successfully automates the deployment of the application, deployment stages can be expanded with various tests. For example, you can add other out-of-the-box integrations with services like Ghost Inspector, Runscope, and others, as shown in the following figure.
Figure: AWS CodePipeline: code tests in deployment stages
Adding Lambda actions
AWS CodeStar and AWS CodePipeline support integration with AWS Lambda. This integration enables implementation of a broad set of tasks, such as creating custom resources in your environment, integrating with third-party systems (such as Slack), and performing checks on your newly deployed environment. Lambda functions can be used in CI/CD pipelines to do the following tasks (a minimal handler sketch follows this list):
• Roll out changes to your environment by applying or updating an AWS CloudFormation template
• Create resources on demand in one stage of a pipeline using AWS CloudFormation, and delete them in another stage
• Deploy application versions with zero downtime in AWS Elastic Beanstalk with a Lambda function that swaps Canonical Name record (CNAME) values
• Deploy to Amazon Elastic Container Service (Amazon ECS) Docker instances
• Back up resources before building or deploying by creating an AMI snapshot
• Add integration with third-party products to your pipeline, such as posting messages to an Internet Relay Chat (IRC) client
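The following is a minimal sketch, in Python with Boto3, of a Lambda handler invoked as a CodePipeline action. A function used this way must explicitly report success or failure back to CodePipeline; the custom task itself (do_custom_task) is a hypothetical placeholder for whichever of the tasks above you implement.

```python
import boto3

codepipeline = boto3.client("codepipeline")


def handler(event, context):
    """Entry point for a Lambda invoke action in AWS CodePipeline.

    CodePipeline passes a job object in the event; the function must report
    success or failure explicitly, otherwise the action eventually times out.
    """
    job_id = event["CodePipeline.job"]["id"]
    # Optional user parameters configured on the pipeline action.
    params = (
        event["CodePipeline.job"]["data"]
        .get("actionConfiguration", {})
        .get("configuration", {})
        .get("UserParameters", "")
    )
    try:
        do_custom_task(params)
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as error:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(error)[:4999]},
        )
    return "done"


def do_custom_task(user_parameters: str) -> None:
    # Hypothetical task; replace with the integration or check you need,
    # such as posting to a chat system or validating a deployed endpoint.
    print(f"running custom task with parameters: {user_parameters}")
```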
Manual approvals
Add an approval action to a stage in a pipeline at the point where you want the pipeline processing to stop, so that someone with the required AWS Identity and Access Management (IAM) permissions can approve or reject the action. If the action is approved, the pipeline processing resumes. If the action is rejected, or if no one approves or rejects the action within seven days of the pipeline reaching the action and stopping, the result is the same as an action failing, and the pipeline processing does not continue.
Figure: AWS CodeDeploy: manual approvals
Deploying infrastructure code changes in a CI/CD pipeline
AWS CodePipeline lets you select AWS CloudFormation as a deployment action in any stage of your pipeline. You can then choose the specific action you would like AWS CloudFormation to perform, such as creating or deleting stacks and creating or executing change sets. A stack is an AWS CloudFormation concept that represents a group of related AWS resources. While there are many ways of provisioning infrastructure as code, AWS CloudFormation is a comprehensive tool recommended by AWS as a scalable, complete solution that can describe the most comprehensive set of AWS resources as code. AWS recommends using AWS CloudFormation in an AWS CodePipeline project to track infrastructure changes and tests.
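To make the create/execute change set flow concrete, the following is a minimal sketch of the same steps performed directly with Boto3; the stack name, change set name, and template file are hypothetical placeholders, and in a pipeline these steps would normally be carried out by the CloudFormation deployment action rather than custom code.

```python
import boto3

cloudformation = boto3.client("cloudformation")

STACK_NAME = "app-network-stack"          # hypothetical stack name
CHANGE_SET_NAME = "pipeline-change-set"   # hypothetical change set name


def create_and_execute_change_set(template_body: str) -> None:
    """Create a change set for a stack, wait for it, then execute it."""
    cloudformation.create_change_set(
        StackName=STACK_NAME,
        ChangeSetName=CHANGE_SET_NAME,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],
        ChangeSetType="UPDATE",  # use "CREATE" for a brand-new stack
    )
    # Wait until CloudFormation has finished calculating the change set.
    waiter = cloudformation.get_waiter("change_set_create_complete")
    waiter.wait(StackName=STACK_NAME, ChangeSetName=CHANGE_SET_NAME)

    # Reviewing the proposed changes here is where a manual approval or an
    # automated policy check could be inserted before anything is modified.
    cloudformation.execute_change_set(
        StackName=STACK_NAME, ChangeSetName=CHANGE_SET_NAME
    )
    cloudformation.get_waiter("stack_update_complete").wait(StackName=STACK_NAME)


if __name__ == "__main__":
    with open("template.yaml") as template_file:  # hypothetical template file
        create_and_execute_change_set(template_file.read())
```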
CI/CD for serverless applications
You can also use AWS CodeStar, AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation to build CI/CD pipelines for serverless applications. Serverless applications integrate managed services such as Amazon Cognito, Amazon S3, and Amazon DynamoDB with event-driven compute from AWS Lambda to deploy applications in a manner that doesn't require managing servers. If you are a serverless application developer, you can use the combination of AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation to automate the building, testing, and deployment of serverless applications that are expressed in templates built with the AWS Serverless Application Model (AWS SAM). For more information, refer to the AWS Lambda documentation for Automating Deployment of Lambda-based Applications.
You can also create secure CI/CD pipelines that follow your organization's best practices with AWS Serverless Application Model Pipelines (AWS SAM Pipelines). AWS SAM Pipelines are a feature of the AWS SAM CLI that gives you access to the benefits of CI/CD in minutes, such as accelerating deployment frequency, shortening lead time for changes, and reducing deployment errors. AWS SAM Pipelines come with a set of default pipeline templates for AWS CodeBuild/CodePipeline that follow AWS deployment best practices. For more information, and to view the tutorial, refer to the blog post Introducing AWS SAM Pipelines.
Pipelines for multiple teams, branches, and AWS Regions
For a large project, it's not uncommon for multiple project teams to work on different components. If multiple teams use a single code repository, it can be mapped so that each team has its own branch. There should also be an integration or release branch for the final merge of the project. If a service-oriented or microservice architecture is used, each team could have its own code repository. In the first scenario, if a single pipeline is used, it's possible that one team could affect the other teams' progress by blocking the pipeline. AWS recommends that you create specific pipelines for team branches and another release pipeline for the final product delivery.
Pipeline integration with AWS CodeBuild
AWS CodeBuild is designed to enable your organization to build a highly available build process with almost unlimited scale. AWS CodeBuild provides quick-start environments for a number of popular languages, plus the ability to run any Docker container that you specify. With the advantages of tight integration with AWS CodeCommit, AWS CodePipeline, and AWS CodeDeploy, as well as Git and CodePipeline Lambda actions, the CodeBuild tool is highly flexible. Software can be built through the inclusion of a buildspec.yml file that identifies each of the build steps, including pre- and post-build actions, or through actions specified in the CodeBuild tool. You can view a detailed history of each build using the CodeBuild dashboard. Events are stored as Amazon CloudWatch Logs log files.
Figure: CloudWatch Logs log files in AWS CodeBuild
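The CodeBuild API can also be driven directly, which is useful for ad hoc builds or for tooling around the pipeline. The following is a minimal sketch in Python with Boto3 that starts a build, polls until it finishes, and prints the CloudWatch Logs location; the project name is a hypothetical placeholder.

```python
import time

import boto3

codebuild = boto3.client("codebuild")

PROJECT_NAME = "sample-app-build"  # hypothetical CodeBuild project


def run_build(project_name: str) -> dict:
    """Start a build and poll until it finishes, returning the final build state."""
    build_id = codebuild.start_build(projectName=project_name)["build"]["id"]

    while True:
        build = codebuild.batch_get_builds(ids=[build_id])["builds"][0]
        if build["buildStatus"] != "IN_PROGRESS":
            return build
        time.sleep(15)  # poll every 15 seconds


if __name__ == "__main__":
    result = run_build(PROJECT_NAME)
    print("status:", result["buildStatus"])
    # Each build writes its output to CloudWatch Logs; the location is
    # reported on the build object.
    print("logs:", result.get("logs", {}).get("deepLink"))
```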
Pipeline integration with Jenkins
You can use the Jenkins build tool to create delivery pipelines. These pipelines use standard jobs that define steps for implementing continuous delivery stages. However, this approach might not be optimal for larger projects, because the current state of the pipeline doesn't persist between Jenkins restarts, implementing manual approval is not straightforward, and tracking the state of a complex pipeline can be complicated.
Instead, AWS recommends that you implement continuous delivery with Jenkins by using the AWS CodePipeline plugin. This plugin allows complex workflows to be described using a Groovy-like domain-specific language and can be used to orchestrate complex pipelines. The AWS CodePipeline plugin's functionality can be enhanced by the use of satellite plugins, such as the Pipeline Stage View Plugin, which visualizes the current progress of stages defined in a pipeline, or the Pipeline Multibranch Plugin, which groups builds from different branches.
AWS recommends that you store your pipeline configuration in a Jenkinsfile and have it checked into a source code repository. This allows for tracking changes to pipeline code and becomes even more important when working with the Pipeline Multibranch Plugin. AWS also recommends that you divide your pipeline into stages. This logically groups the pipeline steps and also enables the Pipeline Stage View Plugin to visualize the current state of the pipeline. The following figure shows a sample Jenkins pipeline with four defined stages, visualized by the Pipeline Stage View Plugin.
Figure: Defined stages of a Jenkins pipeline visualized by the Pipeline Stage View Plugin
Deployment methods
You can consider multiple deployment strategies and variations for rolling out new versions of software in a continuous delivery process. This section discusses the most common deployment methods: all at once (deploy in place), rolling, immutable, and blue/green. AWS indicates which of these methods are supported by AWS CodeDeploy and AWS Elastic Beanstalk. The following table summarizes the characteristics of each deployment method.
Table 1. Characteristics of deployment methods
Method | Impact of failed deployment | Deploy time | Zero downtime | No DNS change | Rollback process | Code deployed to
Deploy in place | Downtime | – | ☓ | ✓ | Redeploy | Existing instances
Rolling | Single batch out of service; any successful batches prior to failure run the new application version | † | ✓ | ✓ | Redeploy | Existing instances
Rolling with additional batch (Beanstalk) | Minimal if first batch fails; otherwise similar to rolling | † | ✓ | ✓ | Redeploy | New and existing instances
Immutable | Minimal | – | ✓ | ✓ | Redeploy | New instances
Traffic splitting | Minimal | – | ✓ | ✓ | Reroute traffic and terminate new instances | New instances
Blue/green | Minimal | – | ✓ | ☓ | Switch back to old environment | New instances
† Varies depending on batch size
All at once (in-place deployment)
All at once (in-place deployment) is a method you can use to roll out new application code to an existing fleet of servers. This method replaces all the code in one deployment action. It requires downtime because all servers in the fleet are updated at once. There is no need to update existing DNS records. In case of a failed deployment, the only way to restore operations is to redeploy the code on all servers again. In AWS Elastic Beanstalk this deployment is called all at once, and it is available for single and load-balanced applications. In AWS CodeDeploy this deployment method is called in-place deployment, with a deployment configuration of AllAtOnce.
Rolling deployment
With rolling deployment, the fleet is divided into portions so that all of the fleet isn't upgraded at once. During the deployment process, two software versions, new and old, are running on the same fleet. This method allows a zero-downtime update. If the deployment fails, only the updated portion of the fleet will be affected.
A variation of the rolling deployment method, called canary release, involves deployment of the new software version on a very small percentage of servers at first. This way, you can observe how the software behaves in production on a few servers while minimizing the impact of breaking changes. If there is an elevated rate of errors from a canary deployment, the software is rolled back. Otherwise, the percentage of servers with the new version is gradually increased.
AWS Elastic Beanstalk has followed the rolling deployment pattern with two deployment options, rolling and rolling with additional batch.
These options allow the application to first scale up before taking servers out of service, preserving full capability during the deployment. AWS CodeDeploy accomplishes this pattern as a variation of an in-place deployment, with patterns like OneAtATime and HalfAtATime.
Immutable and blue/green deployments
The immutable pattern specifies a deployment of application code by starting an entirely new set of servers with a new configuration or version of application code. This pattern leverages the cloud capability that new server resources are created with simple API calls.
The blue/green deployment strategy is a type of immutable deployment, which also requires creation of another environment. Once the new environment is up and has passed all tests, traffic is shifted to this new deployment. Crucially, the old environment, that is, the "blue" environment, is kept idle in case a rollback is needed.
AWS Elastic Beanstalk supports immutable and blue/green deployment patterns. AWS CodeDeploy also supports the blue/green pattern. For more information on how AWS services accomplish these immutable patterns, refer to the Blue/Green Deployments on AWS whitepaper.
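The in-place and rolling patterns above map to CodeDeploy deployment configurations such as CodeDeployDefault.AllAtOnce, CodeDeployDefault.OneAtATime, and CodeDeployDefault.HalfAtATime. The following is a minimal sketch of starting such a deployment with Boto3; the application name, deployment group, artifact bucket, and object key are hypothetical placeholders.

```python
import boto3

codedeploy = boto3.client("codedeploy")


def deploy_revision(config_name: str) -> str:
    """Create an in-place deployment using the given deployment configuration.

    CodeDeployDefault.AllAtOnce   -> all-at-once (in-place) deployment
    CodeDeployDefault.OneAtATime  -> rolling, one instance per batch
    CodeDeployDefault.HalfAtATime -> rolling, half the fleet per batch
    """
    response = codedeploy.create_deployment(
        applicationName="sample-app",               # hypothetical application
        deploymentGroupName="sample-app-fleet",     # hypothetical deployment group
        deploymentConfigName=config_name,
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "sample-artifact-bucket",  # hypothetical bucket
                "key": "sample-app-1.4.2.zip",       # hypothetical build artifact
                "bundleType": "zip",
            },
        },
        description="Rolling deployment triggered from the release pipeline",
    )
    return response["deploymentId"]


if __name__ == "__main__":
    deployment_id = deploy_revision("CodeDeployDefault.HalfAtATime")
    print("started deployment:", deployment_id)
```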
Database schema changes
It's common for modern software to have a database layer. Typically, a relational database is used, which stores both data and the structure of the data. It's often necessary to modify the database in the continuous delivery process. Handling changes in a relational database requires special consideration, and it offers other challenges than the ones present when deploying application binaries. Usually, when you upgrade an application binary, you stop the application, upgrade it, and then start it again. You don't really bother about the application state, which is handled outside of the application. When upgrading databases, you do need to consider state, because a database contains much state but comparatively little logic and structure.
The database schema before and after a change is applied should be considered different versions of the database. You could use tools such as Liquibase and Flyway to manage the versions. In general, those tools employ some variant of the following methods:
• Add a table to the database where a database version is stored.
• Keep track of database change commands and bunch them together in versioned change sets. In the case of Liquibase, these changes are stored in XML files. Flyway employs a slightly different method, where the change sets are handled as separate SQL files, or occasionally as separate Java classes for more complex transitions.
• When Liquibase is asked to upgrade a database, it looks at the metadata table and determines which change sets to run in order to bring the database up to date with the latest version.

Summary of best practices
The following are some best practice dos and don'ts for CI/CD.
Do:
• Treat your infrastructure as code
o Use version control for your infrastructure code
o Make use of bug tracking/ticketing systems
o Have peers review changes before applying them
o Establish infrastructure code patterns/designs
o Test infrastructure changes like code changes
• Put developers into integrated teams of no more than 12 self-sustaining members
• Have all developers commit code to the main trunk frequently, with no long-running feature branches
• Consistently adopt a build system such as Maven or Gradle across your organization and standardize builds
• Have developers build unit tests toward 100% coverage of the code base
• Ensure that unit tests are 70% of the overall testing in duration, number, and scope
• Ensure that unit tests are up to date and not neglected; unit test failures should be fixed, not bypassed
• Treat your continuous delivery configuration as code
• Establish role-based security controls (that is, who can do what and when)
o Monitor/track every resource possible
o Alert on service availability and response times
o Capture, learn, and improve
o Share access with everyone on the team
o Plan metrics and monitoring into the lifecycle
• Keep and track standard metrics
o Number of builds
o Number of deployments
o Average time for changes to reach production
o Average time from first pipeline stage to each stage
o Number of changes reaching production
o Average build time
• Use multiple distinct pipelines for each branch and team
Don't:
• Have long-running branches with large, complicated merges
• Have manual tests
• Have manual approval processes, gates, code reviews, and security reviews

Conclusion
Continuous integration and continuous delivery provide an ideal scenario for your organization's application teams. Your developers simply push code to a repository. This code will be integrated, tested, deployed, tested again, merged with infrastructure, go through security and quality reviews, and be ready to deploy with extremely high confidence. When CI/CD is used, code quality is improved and software updates are delivered quickly and with high confidence that there will be no breaking changes. The impact of any release can be correlated with data from production and operations, and it can be used for planning the next cycle too, a vital DevOps practice in your organization's cloud transformation.

Further reading
For more information on the topics discussed in this whitepaper, refer to the following AWS whitepapers:
• Overview of Deployment Options on AWS
• Blue/Green Deployments on AWS
• Setting up a CI/CD pipeline by integrating Jenkins with AWS CodeBuild and AWS CodeDeploy
• Implementing Microservices on AWS
• Docker on AWS: Running Containers in the Cloud

Contributors
The following individuals and organizations contributed to this document:
• Amrish Thakkar, Principal Solutions Architect, AWS
• David Stacy, Senior Consultant DevOps, AWS Professional Services
• Asif Khan, Solutions Architect, AWS
• Xiang Shen, Senior Solutions Architect, AWS

Document revisions
Date | Description
October 27, 2021 | Updated content
June 1, 2017 | First publication
General
Financial_Services_Grid_Computing_on_AWS
Financial Services Grid Computing on AWS
First published January 2015
Updated August 24, 2021

This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/financial-services-grid-computing/financial-services-grid-computing.html

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Overview
Introduction
Grid computing on AWS
Compute and networking
Storage and data sharing
Data management and transfer
Operations and management
Task scheduling and infrastructure orchestration
Security and compliance
Migration approaches, patterns, and anti-patterns
Conclusion
Contributors
Further reading
Glossary of terms
Document versions

Abstract
Financial services organizations rely on high performance computing (HPC) infrastructure grids to calculate risk, value portfolios, and provide reports to their internal control functions and external regulators. The scale, cost, and complexity of this infrastructure is an increasing challenge. Amazon Web Services (AWS) provides a number of services that enable these customers to surpass their current capabilities by delivering results quickly and at a lower cost than on-premises resources.
The intended audience for this paper includes grid computing managers, architects, and engineers within financial services organizations who want to improve their service. It describes the key AWS services to consider and some best practices, and includes relevant reference architecture diagrams.

Overview
High performance computing (HPC) in the financial services industry is an ongoing challenge because of the pressures from ever-increasing computational demand across retail, commercial, and investment groups, combined with growing cost and capital constraints.
The traditional on-premises approaches to solving these problems have evolved from centralized, monolithic solutions, to business-aligned clusters of commodity hardware, to modern multi-tenant grid architectures with centralized schedulers that manage disparate compute capacity. Regulators and large financial institutions increasingly accept hyperscale cloud providers, which has resulted in significant interest in how to best leverage new capabilities while ensuring good governance and cost controls.
Cloud concepts such as capacity on demand and pay-as-you-go pricing models offer new opportunities to teams who run HPC platforms. Historically, the challenge has been to manage a fixed set of on-premises resources while maximizing utilization and minimizing queuing times. In a cloud-based model, with capacity that is effectively unconstrained, the focus shifts away from managing and throttling demand and towards optimizing supply. With this model, decisions become more granular and tailored to each customer, and focus on how fast and at what cost, with the ability to make adjustments as required by the business. With this basically limitless capacity, concepts such as queuing and prioritization become irrelevant, as clients are able to submit calculation requests and have them serviced immediately. This also results in upstream consumers increasingly expecting and demanding near-instantaneous processing of their workloads at any scale.
Initial cloud migrations of HPC platforms are often seen as extensions or evolutions of on-premises grid implementations. However, forward-looking institutions are experimenting with the ever-expanding ecosystem of capabilities enabled by AWS. Some emerging themes include refreshing financial models to run on open source, Linux-based operating systems, and exploring the performance benefits of the latest Arm Neoverse N1 central processing units (CPUs) through AWS Graviton2. Amazon SageMaker increasingly democratizes the use of artificial intelligence/machine learning (AI/ML) techniques, and customers are looking to these tools to enable accelerated development of predictive risk models. For data-heavy calculations, Amazon EMR offers a fully managed, industry-leading cloud big data platform based on standard tooling using directed acyclic graph structures. This topic is explored further in the blog post How to improve FRTB's Internal Model Approach implementation using Apache Spark and Amazon EMR.
As HPC environments move to the cloud, the applications that are associated with them start to migrate too. Risk management systems, which drive compute grids, quickly become a bottleneck when the downstream HPC platform is unconstrained. By migrating these applications with the compute grid, the applications benefit from the elasticity that the cloud provides. In turn, data sources such as market and static data are sourced natively from within the cloud, from the same providers that customers work with today, through services such as AWS Data Exchange. Many of the building blocks required for fully serverless risk management and reporting solutions already exist today within AWS, with services like AWS Lambda for serverless compute and AWS Step Functions to coordinate them. As financial institutions become increasingly familiar and comfortable with these services, it's likely that serverless patterns will become the predominant HPC architectures of the future.
Introduction
In general, traditional HPC systems are used to solve complex mathematical problems that require thousands or even millions of CPU hours. These systems are commonly used in academic institutions, biotech, and engineering firms. In banking organizations, HPC systems are used to quantify the risk of given trades or portfolios, which enables traders to develop effective hedging strategies, price trades, and report positions to their internal control functions and ultimately to external regulators. Insurance companies leverage HPC systems in a similar way for actuarial modeling and in support of their own regulatory requirements.
Unpredictable global events, seasonal variation, and regulatory reporting commitments contribute to a mixture of demands on HPC platforms. This includes short, latency-sensitive intraday pricing tasks, near real-time risk measures calculated in response to changing market conditions, or large overnight batch workloads and back testing to measure the efficacy of new models against historic events. Combined, these workloads can generate hundreds of millions of tasks per day, with a significant proportion running for less than a second. Because of the regulatory landscape, demand for these calculations continues to outpace the progress of Moore's law. Regulations such as the Fundamental Review of the Trading Book (FRTB) and IFRS 17 require even more analysis, with some customers estimating between 40% and 1000% increases in demand as a result. In turn, financial services organizations continue to grow their grid computing platforms and increasingly wrestle with the costs associated with purchasing and managing this infrastructure. The blog post How cloud increases flexibility of trading risk infrastructure for FRTB compliance explores this topic in greater detail, discussing the challenges of data and compute and the agility benefits achieved by running these workloads in the cloud.
Risk and pricing calculations in financial services are most commonly embarrassingly parallel, do not require communication between nodes to complete calculations, and broadly benefit from horizontal scalability. Because of this, they are well suited to a shared-nothing architectural approach in which each compute node is independent from the others. For example, a financial model based on the Monte Carlo method can create millions of scenarios to be divided across a large number (often hundreds or thousands) of compute nodes for calculation in parallel (a minimal sketch of this scenario-splitting pattern follows the scheduler feature list below). Each scenario reflects a different market condition based on a number of variables. In general, doubling the number of compute nodes allows these tasks to be distributed more widely, which reduces the overall duration of the job by half. Access to increased compute capacity through AWS allows for additional scenarios and greater precision in the results in a given timeframe. Alternatively, you can use the additional capacity to complete the same calculations in less time.
Financial services firms typically use a third-party grid scheduler to coordinate the allocation of compute tasks to available capacity. Grid schedulers have these features in common:
• A central scheduler to coordinate multiple clients and a large number (typically hundreds or thousands) of compute nodes. The scheduler manages the loss of any given component and reschedules the work accordingly.
Financial services firms typically use a third-party grid scheduler to coordinate the allocation of compute tasks to available capacity. Grid schedulers have these features in common:

• A central scheduler to coordinate multiple clients and a large number (typically hundreds or thousands) of compute nodes. The scheduler manages the loss of any given component and reschedules the work accordingly.
• Deployment tools to ensure that software binaries and relevant data are reliably distributed to compute nodes that are allocated a specific task.
• An engine to allow rules to be defined to ensure that certain workloads are prioritized over others in the event that the total capacity of the grid is exhausted.
• Brokers are typically employed to manage the direct allocation of tasks that are submitted by a client to the compute grid. In some cases, an allocated compute node makes a direct connection back to a client to collect tasks to reduce latency. Brokers are usually horizontally scalable and are well suited to the elasticity of the cloud.

In some cases, the client is another grid node that generates further tasks. Such multi-tier, recursive architectures are not uncommon, but present further challenges for software engineers and HPC administrators who want to maximize utilization while managing risks such as deadlock, when parent tasks are unable to yield to child tasks.

The key benefit of running HPC workloads on AWS is the ability to allocate large amounts of compute capacity on demand, without the need to commit to the upfront and ongoing costs of a large hardware investment. Capacity can be scaled minute by minute according to your needs at the time. This avoids pre-provisioning of capacity according to some estimate of future peak demand. Because AWS infrastructure is charged by consumption of CPU hours, it's possible to complete the same workload in less time for the same price by simply scaling the capacity. The following figure shows two approaches to provisioning capacity. In the first, two CPUs are provisioned for ten hours. In the second, ten CPUs are provisioned for two hours. In a CPU-hour billing model the overall cost is the same, but the latter produces results in one fifth of the time.

Two approaches to provisioning 20 CPU hours of capacity

Developers of the analytics calculations used in HPC applications can use the latest CPUs, graphics processing units (GPUs), and field-programmable gate arrays (FPGAs) available through the many Amazon EC2 instance types. This drives efficiency per core and differs from on-premises grids, which tend to be a mixture of infrastructure that reflects historic procurement rather than current needs.

Diverse pricing models offer flexibility to these customers. For example, Amazon EC2 Spot Instances can reduce compute costs by up to 90%. These instances are occasionally interrupted by AWS, but HPC schedulers with a history of managing scavenged CPU resources can react to these events and reschedule tasks accordingly.

This document includes several recommended approaches to building HPC systems in the cloud and highlights AWS services that are used by financial services organizations to help address their compute, networking, storage, and security requirements.

Grid computing on AWS

A key driver for the migration of HPC workloads from on-premises environments to the cloud is flexibility. AWS offers HPC teams the opportunity to build reliable and cost-efficient solutions for their customers while retaining the ability to experiment and innovate as new solutions and approaches become available.
HPC teams that want to migrate an existing HPC solution to the cloud, or to build a new solution, should review the AWS Well-Architected Framework, which also includes a specific Financial Services Industry Lens with a focus on how to design, deploy, and architect financial services industry (FSI) workloads that promote resiliency, security, and operational performance in line with risk and control objectives. This framework applies to any cloud deployment and seeks to ensure that systems are architected according to best practices. Additionally, the HPC-specific lens document also identifies key elements to help ensure the successful deployment and operation of HPC systems in the cloud.

The following sections include information about AWS services that are most relevant to HPC systems, particularly those that support financial services customers.

A typical HPC architecture with the key components, including the risk management system (RMS), grid controller, grid brokers, and two compute instance pools

Compute and networking

AWS offers a wide range of Amazon Elastic Compute Cloud (Amazon EC2) instance types, which enable you to select the configuration that is best suited to your needs at any given time. This is a departure from the typical Bill of Materials approach, which limits the configurations available on premises in favor of deployment simplicity. It also offers evergreening, which enables you to take advantage of the latest CPU technologies as they are released, without consideration for any prior investment. HPC customers in financial services should consider the following instance types:

• Amazon EC2 compute optimized instances — C class instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price per compute ratio.
• Amazon EC2 general purpose instances —
  o M class — Commonly used in HPC applications because they offer a good balance of compute, memory, and networking resources.
  o Z class — Offer the highest CPU frequencies with a high memory footprint.
  o T series — Provide a baseline level of CPU performance with the ability to burst to a higher level when required. The use of these instances can be attractive for some workloads; however, their variable performance profile can result in inconsistent behavior, which might be undesirable.
• Amazon EC2 memory optimized instances —
  o R class — These instances offer higher memory-to-CPU ratios and so may be applied to X-Valuation Adjustment (XVA) calculations, such as Credit Value Adjustments, which typically require additional memory.
• Instances with the suffix a have AMD processors, for example R5a.
• Instances with the suffix g have Arm-based AWS Graviton2 processors, for example C6g.
• Amazon EC2 accelerated computing instances use hardware accelerators, or co-processors, to perform functions such as floating point number calculations, graphics processing, or data pattern matching more efficiently than is possible in software running on CPUs:
  o P class instances are intended for general purpose GPU compute applications.
  o F class instances offer customizable hardware acceleration with field-programmable gate arrays (FPGAs).

The latest AWS instances are based on the AWS Nitro System. The Nitro System is a collection of AWS-built hardware and software components that enable high performance, high availability, high security, and bare metal capabilities to eliminate virtualization overhead. By selecting Nitro-based instances, HPC applications can expect performance levels that are indistinguishable from a bare-metal system while retaining all of the benefits of an ephemeral virtual host.

Table 1 – Amazon EC2 instance types that are typically used for HPC workloads

Instance Type           Class   Description
General purpose         T       Burstable general purpose, low cost
General purpose         M       General purpose instances
Compute optimized       C       For compute-intensive workloads
Memory optimized        R       For memory-intensive workloads
Memory optimized        X       For memory-intensive workloads
Memory optimized        Z       High compute capacity and high memory
Accelerated computing   P / F   General purpose GPU (P) or FPGA (F) capabilities

This diverse selection of instance types helps support a wide variety of workloads with optimal hardware and promotes experimentation. HPC teams can benchmark various sets of instances to optimize their scheduling strategies. Quantitative developers can try new approaches with GPUs, FPGAs, or the latest CPUs without upfront costs or protracted procurement processes. You can immediately deploy your optimal approach at scale without the traditional hardware lifecycle considerations. When you run experiments, or if a subset of production workloads requires a specific instance type, grid schedulers typically enable tasks to be directed to the appropriate hardware through compute resource groups.

x86-based Amazon EC2 instances support multithreading, which enables multiple threads to run concurrently on a single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. To ensure that each vCPU is used effectively, it's important to understand the behavior of the calculations running in the HPC environment. If all processes are single-threaded, a good initial strategy is to have the scheduler assign one process per vCPU on each instance. However, if the calculations require multithreading, tuning might be required to maximize the use of vCPUs without introducing excessive CPU context switching.

By default, x86-based Amazon EC2 instances have hyperthreading (HT) enabled. You can disable HT either at boot or at runtime if the analytics perform better without it, which you can establish through benchmarking. The Disabling Intel Hyper-Threading Technology on Amazon Linux blog post has an explanation of the methods you can use to configure HT on an Amazon Linux instance. You might typically tune your infrastructure to increase processor performance consistency or to reduce latency.
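One way to fix the threading behavior for a whole pool is to set it at launch time. The hedged sketch below launches a compute node with one thread per core through the CpuOptions parameter, which has the same effect as disabling hyperthreading; the AMI, subnet, and instance type are placeholders, and the core count must match what the chosen instance type actually provides.

```python
# A hedged sketch: launch a compute node with multithreading disabled by requesting one
# thread per core. The AMI ID, subnet, and instance type are placeholders; CoreCount must
# not exceed the physical core count of the chosen instance type.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI baked with the grid engine
    InstanceType="c5.24xlarge",           # 48 physical cores on this instance type
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder compute subnet
    CpuOptions={
        "CoreCount": 48,                  # expose all physical cores
        "ThreadsPerCore": 1,              # one thread per core, i.e. hyperthreading off
    },
)
print(response["Instances"][0]["InstanceId"])
```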
Some Amazon EC2 instances enable control of processor C-states (idle state power saving) and P-states (optimization of voltage and CPU frequency during run). The default settings for C-state and P-state are tuned for maximum performance for most workloads. If an application might benefit from reduced latency in exchange for lower frequencies, or from more consistent performance without the benefit of Turbo Boost, then changes to the C-state and P-state configurations might be worth considering. For information about the instance types that support the adjustment, and how to make these changes to an Amazon Linux 2-based instance, see Processor State Control for Your EC2 Instance in the Amazon Elastic Compute Cloud User Guide for Linux Instances.

Another potential optimization is over-subscription. This approach is useful when you know processes spend time on non-CPU-intensive activities, such as waiting on data transfers or loading binaries into memory. For example, if this overhead is estimated at 10%, you might be able to schedule one additional task on the host for every 10 vCPUs to achieve higher CPU utilization and throughput.

There are many performance benefits of AWS Graviton processors. AWS Graviton processors are custom built by AWS using 64-bit Arm Neoverse cores. AWS Graviton2 processors provide up to 40% better price performance over comparable current-generation x86-based instances for a wide variety of workloads, including application servers, microservices, high performance computing, electronic design automation, gaming, open source databases, and in-memory caches. Interpreted and bytecode-compiled languages such as Python, Java, Node.js, and .NET Core on Linux may run on AWS Graviton2 without modification. Support for Arm architectures is also increasingly common in third-party numerical libraries, aiding the path to adoption.

Compiler selection is another consideration. The use of a compiler that is optimized for the target CPU architecture can yield performance improvements. For example, quantitative analysts might see value in developing analytics using the Intel C++ Compiler and running on instances that support AVX-512 capable CPUs. The AVX-512 instruction set allows developers to run twice the number of floating point operations per second (FLOPS) per clock cycle. Similarly, AMD offers the AMD Optimizing C/C++ Compiler, which optimizes for AMD EPYC architectures.

In addition to the instance types and classes shown in Table 1, there are also options for procuring instances in AWS:

• Amazon EC2 On-Demand Instances offer capacity as required, for as long as it is needed. You are only charged for the time that the instance is active. These are ideal for components that benefit from elasticity and predictable availability, such as brokers, compute instances hosting long-running tasks, or tasks that generate further generations of tasks.
• Amazon EC2 Spot Instances are particularly appropriate for HPC compute instances because they benefit from substantial savings over the equivalent on-demand cost. Spot Instances can occasionally be ended by AWS when capacity is constrained, but grid schedulers can typically accommodate these occasional interruptions.
• Amazon EC2 Reserved Instances provide a significant discount of up to 72% based on a one-year or three-year commitment.
Convertible Reserved Instances offer additional flexibility on the instance family, operating system, and tenancy of the reservation. Relatively static hosts, such as HPC grid controller nodes or data caching hosts, might benefit from Reserved Instances.
• Savings Plans is a flexible pricing model that also provides savings of up to 72% on your AWS compute usage, regardless of instance family, size, operating system (OS), tenancy, or AWS Region. Savings Plans offer significant discounts in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one- or three-year period. Just like Amazon EC2 Reserved Instances, Savings Plans are ideal for long-running hosts such as HPC controller nodes.

It's important to note that, regardless of the procurement model selected, the instances delivered by AWS are exactly the same.

Compute instance provisioning and management strategies

Spot Instances are not suitable for workloads that are inflexible, stateful, fault intolerant, or tightly coupled between instance nodes. They are also not recommended for workloads that are intolerant of occasional periods when the target capacity is not completely available. However, many financial services organizations make use of Spot Instances for part of their HPC workloads.

A Spot Instance interruption notice is a warning that is issued two minutes before Amazon EC2 interrupts a Spot Instance. You can configure your Spot Instances to be stopped or hibernated, instead of being ended, when they are interrupted. Amazon EC2 then automatically stops or hibernates your Spot Instances on interruption and automatically resumes the instances when capacity is available.

AWS enables you to minimize the impact of a Spot Instance interruption through instance rebalance recommendations and Spot Instance interruption notices. An EC2 instance rebalance recommendation is a signal that notifies you when a Spot Instance is at elevated risk of interruption. The signal gives you the opportunity to proactively manage the Spot Instance in advance of the two-minute Spot Instance interruption notice. You can decide to rebalance your workload to new or existing Spot Instances that are not at an elevated risk of interruption. AWS has made it easy for you to use this new signal through the Capacity Rebalancing feature in EC2 Auto Scaling groups and Spot Fleet. If hibernation is configured, this feature operates like closing and opening the lid on a laptop computer and saves the memory state to an Amazon Elastic Block Store (Amazon EBS) disk. However, this approach to managing interruptions should be used with caution, because the grid scheduler might not be able to track such quiesced workloads, which could result in timeouts and rescheduled tasks if the hibernated image is not reactivated quickly.
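For schedulers that prefer to drain a node rather than hibernate it, the interruption notice can be consumed directly on the instance. The following is a minimal sketch of a worker-side watchdog that polls the instance metadata service (IMDSv2) for a Spot interruption notice; drain_local_engine() is a hypothetical hook for whichever grid engine is in use.

```python
# A minimal sketch of a Spot interruption watchdog running on the compute node itself.
# It polls the IMDSv2 spot/instance-action endpoint; a 404 means no interruption is
# scheduled. drain_local_engine() stands in for the scheduler-specific drain call.
import json
import time
import urllib.error
import urllib.request

METADATA = "http://169.254.169.254/latest"

def imds_token() -> str:
    req = urllib.request.Request(
        f"{METADATA}/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})
    return urllib.request.urlopen(req, timeout=2).read().decode()

def interruption_notice(token: str):
    req = urllib.request.Request(
        f"{METADATA}/meta-data/spot/instance-action",
        headers={"X-aws-ec2-metadata-token": token})
    try:
        return json.loads(urllib.request.urlopen(req, timeout=2).read().decode())
    except urllib.error.HTTPError as err:
        if err.code == 404:              # no interruption scheduled
            return None
        raise

def drain_local_engine() -> None:
    """Hypothetical hook: stop accepting tasks and hand work back to the scheduler."""
    print("draining local engine ahead of Spot interruption")

if __name__ == "__main__":
    while True:
        if interruption_notice(imds_token()):
            drain_local_engine()
            break
        time.sleep(5)                    # the notice arrives roughly two minutes ahead
```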
• Amazon EC2 Spot Fleets enable you to launch a fleet of Spot Instances that span various EC2 instance types and Availability Zones. By defining the target capacity using an appropriate metric (for example, a slot for an HPC application), the fleet sources capacity from EC2 Spot Instances at the best possible price. HPC teams can define Spot Fleet strategies that use diverse instance types to make sure you have the best experience at the lowest cost.
• Amazon EC2 Fleet also enables you to quickly create fleets that are diversified by using EC2 On-Demand Instances, Reserved Instances, and Spot Instances. With this approach, you can optimize your HPC capacity management plan according to the changing demands of your workloads. Both EC2 Fleet and Spot Fleet integrate with Amazon EventBridge to notify you about important Fleet events, state changes, and errors. This enables you to automate actions in response to Fleet state changes and monitor the state of your Fleet from a central place, without needing to continuously poll Fleet APIs. They both also support the capacity-optimized allocation strategy, which automatically makes the most efficient use of available spare capacity while still taking advantage of the steep discounts offered by Spot Instances.
• Amazon EC2 Auto Scaling groups contain a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies.
• Amazon EC2 launch templates contain the configuration information used to launch an instance. The template can define the AMI ID (operating system image), instance type, and network settings for the compute instances. You can use launch templates with EC2 Fleet, Spot Fleet, or Amazon EC2 Auto Scaling to make it easier to implement and track configuration standards.
• Launch template versioning can be used with the EC2 Auto Scaling group Instance Refresh feature to update pools of capacity while minimizing interruptions to the workload. All you need to do is specify the percentage of healthy instances to keep in the group while the Auto Scaling group terminates and launches instances. You can also specify the warm-up time, which is the time period that the Auto Scaling group waits between instances that get refreshed via Instance Refresh.

One option to begin an HPC deployment is to use only On-Demand Instances. After you understand the performance of your workloads, you can develop and optimize a strategy to provision instances using Amazon EC2 Auto Scaling groups, Amazon EC2 Fleet, or Amazon EC2 Spot Fleet. For example, you can deploy a number of Reserved Instances or Savings Plans to host core grid services, such as schedulers, that are required to be available at all times. You can provision On-Demand Instances during the intraday period to ensure predictable performance for synchronous pricing calculations. For an overnight batch, you can use large fleets of Spot Instances to provide massive volumes of capacity at a minimum cost, and supplement them as necessary with On-Demand Instances to ensure predictable performance for the most time-sensitive workloads.
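A hedged sketch of how such a strategy might be expressed for one compute pool is shown below: an Auto Scaling group that keeps a small On-Demand base and fills the remaining capacity from Spot using the capacity-optimized allocation strategy. The group name, launch template, subnets, instance types, and sizes are illustrative assumptions.

```python
# A hedged sketch of a mixed provisioning strategy: a small On-Demand base for
# predictability, with the remainder drawn from Spot capacity across several
# instance types. All names, sizes, and subnets are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="hpc-compute-pool",               # hypothetical group name
    MinSize=0,
    MaxSize=500,
    DesiredCapacity=90,
    VPCZoneIdentifier="subnet-0aaa,subnet-0bbb",            # placeholder compute subnets
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "hpc-compute-node",   # existing launch template
                "Version": "$Latest",
            },
            # Diversifying across instance types improves Spot availability.
            "Overrides": [
                {"InstanceType": "c5.4xlarge"},
                {"InstanceType": "c5a.4xlarge"},
                {"InstanceType": "m5.4xlarge"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 20,                     # predictable baseline
            "OnDemandPercentageAboveBaseCapacity": 0,       # everything else on Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```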
The following figure shows two approaches to provisioning. In each case, ten vCPUs of Reserved Instance capacity remain online for the stateful scheduling components. In the first case, 20 further vCPUs are provisioned using On-Demand Instances for ten hours to accommodate a batch that runs for 200 vCPU hours with a ten-hour SLA. In the second approach, the 20 vCPUs are also provisioned at the outset using On-Demand Instances to provide confidence in the batch delivery, but 70 vCPUs based on low-cost Spot Instances are also added. Because of the volume of Spot Instances, the batch completes much more quickly (in about three hours) and at a significantly reduced cost. However, if the Spot Instances were not available for any reason, the batch would still complete on time with the On-Demand Instances provisioned.

AWS instance provisioning strategies

One of the key benefits of deploying applications in the AWS Cloud is elasticity. Amazon EC2 Auto Scaling enables HPC managers to configure Amazon EC2 instance provisioning and decommissioning events based on the real-time demands of their platform. The concept of instance weightings allows Auto Scaling groups to start instances from a diverse pool of instance types to meet an overall capacity target for the workload. Though grids were previously provisioned based on predictions of peak demand (with periods of both constraint and idle capacity), Amazon EC2 Auto Scaling has a rich API that enables it to be integrated with schedulers to easily manage scaling events.

When you remove hosts from a running cluster, make sure to allow for a drain-down period. During this period, the targeted host stops taking on new work but is allowed to complete work in progress. When you select nodes for removal, avoid any long-running tasks so that the shutdown is not delayed and you don't lose progress on those calculations. If the scheduler allows a query of the total runtime of tasks in progress, grouped by instance, you can use this to identify the optimal candidates for removal: specifically, the instances with the lowest aggregate total of runtime by tasks in progress.

Where capacity is managed automatically, Amazon EC2 Auto Scaling groups offer scale-in protection, as well as configurable termination policies, to allow HPC managers to minimize disruption to tasks in flight. Scale-in protection allows an Auto Scaling group, or an individual instance, to be marked as protected and therefore ineligible for termination in a scale-in event. You also have the option to build custom termination policies using AWS Lambda to give more control over which instances are ended. These protections can be controlled by an API for integration with the scheduler to automate the drain-down process (a sketch of such an integration appears below).

Paradoxically, adding instances to a cluster can temporarily slow the flow of tasks if those new instances need some time to reach optimal performance as binaries are loaded into memory and local caches are populated. Amazon EC2 Auto Scaling groups also support warm pools. A warm pool is a pool of pre-initialized EC2 instances that sits alongside the Auto Scaling group. Whenever your application needs to scale out, the Auto Scaling group can draw on the warm pool to meet its new desired capacity. The goal of a warm pool is to ensure that instances are ready to quickly start serving application traffic, accelerating the response to a scale-out event. This is known as a warm start.
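The following minimal sketch combines those ideas, assuming two hypothetical scheduler hooks: one that returns the aggregate runtime of in-flight tasks per instance, and one that stops new work being dispatched to a host. The instance with the least work in progress has its scale-in protection removed and is terminated while the group's desired capacity is decremented; the group name and scheduler hooks are placeholders.

```python
# A minimal sketch of scheduler-aware scale-in, under the assumptions described above.
import boto3

autoscaling = boto3.client("autoscaling")
GROUP = "hpc-compute-pool"    # hypothetical Auto Scaling group name

def runtime_by_instance():
    """Hypothetical scheduler query: {instance_id: seconds of in-flight task runtime}."""
    return {"i-0aaa": 12.0, "i-0bbb": 240.0, "i-0ccc": 3.5}

def close_host(instance_id: str) -> None:
    """Hypothetical scheduler call: stop dispatching new tasks to this host."""

def scale_in_one_instance() -> str:
    runtimes = runtime_by_instance()
    victim = min(runtimes, key=runtimes.get)     # least work in progress
    close_host(victim)                           # drain: no new tasks land here
    # In a real integration you would wait for in-flight work to finish before this
    # point, then release scale-in protection and terminate, shrinking desired capacity.
    autoscaling.set_instance_protection(
        InstanceIds=[victim], AutoScalingGroupName=GROUP, ProtectedFromScaleIn=False)
    autoscaling.terminate_instance_in_auto_scaling_group(
        InstanceId=victim, ShouldDecrementDesiredCapacity=True)
    return victim

if __name__ == "__main__":
    print("scaled in", scale_in_one_instance())
```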
So far, this section has addressed compute instance provisioning at the host level. Increasingly, customers are looking to serverless solutions based either on container technologies, such as Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS), or on AWS Lambda. For both Amazon ECS and Amazon EKS, the AWS Fargate serverless compute engine removes the need to orchestrate infrastructure capacity to support containers. Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You pay only for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate supports both Spot pricing for Amazon ECS and Compute Savings Plans for Amazon ECS and Amazon EKS.

To illustrate how Amazon EKS might be used in a high throughput compute (HTC) environment, AWS has released the open source solution aws-htc-grid. This project shows how AWS technologies such as Lambda, Amazon DynamoDB, and Amazon Simple Queue Service (Amazon SQS) can be combined to provide much of the functionality of a traditional HPC scheduler. Note that aws-htc-grid is not a supported AWS service offering.

For customers using AWS Lambda, there are no instances to be scaled; however, there is the concept of concurrency, which is the number of instances of a function that can serve requests at a time. There are default Regional concurrency limits, which can be increased through a request in the Support Center console. Financial services firms have already built completely serverless HPC solutions based on Lambda (similar to the architecture outlined here) that support tens of millions of calculations per day.

In addition to considering alternative CPU architectures and accelerated computing options, customers are increasingly looking at their existing dependencies on commercial operating systems such as Microsoft Windows. Such dependencies are often historical, stemming from risk management systems built around spreadsheets; however, today the cost premiums can be very material, especially when compared to deeply discounted EC2 capacity under Amazon EC2 Spot. AWS offers a variety of Linux distributions, including Red Hat, SUSE, CentOS, Debian, Kali, Ubuntu, and Amazon Linux. The latter is a supported and maintained Linux image provided by AWS for use on Amazon EC2 (it can also be run on premises for development and testing). It is designed to provide a stable, secure, and high performance run environment for applications running on Amazon EC2. It supports the latest EC2 instance type features and includes packages that enable easy integration with AWS. AWS provides ongoing security and maintenance updates to all instances running the Amazon Linux AMI, and it is provided at no additional charge to Amazon EC2 users.

Storage and data sharing

In HPC systems there are two primary data distribution challenges. The first is the distribution of binaries. In financial services, large and complex analytical packages are common. These packages are often 1 GB or more in size, and often multiple versions are in use at the same time on the same HPC platform to support different businesses or back-testing of new models. In a constrained on-premises environment, you can mitigate this challenge through relatively infrequent updates to the package and a fixed set of instances. However, in a cloud-based environment, instances are short-lived and the number of instances can be much larger.
As a result, multiple packages may be distributed to thousands of instances on an hourly basis as new instances are provisioned and new packages are deployed. There are a number of possible approaches to this problem. One is to maintain a build pipeline that incorporates binary packages into the Amazon Machine Images (AMIs). This means that once the machine has started, it can process a workload immediately because the packages are already in place. The EC2 Image Builder tool simplifies the process of building, testing, and deploying AMIs. A limitation of this approach is that it doesn't accommodate the deployment of new packages to running instances, and it requires them to be ended and replaced to get new versions.

Another approach is to update running instances. There are two different methods for this type of update, which are sometimes combined:

• Pull (or lazy) deployment — In this mode, when a task reaches an instance and it depends on a package that is not in place, the engine pulls it from a central store before it runs the task. This approach minimizes the distribution of packages and saves on local storage, because only the minimum set of packages is deployed. However, these benefits come at the expense of delaying tasks in an unpredictable way, such as the introduction of a new instance in the middle of a latency-sensitive pricing job. This approach may not be acceptable if large volumes of tasks have to wait for the grid nodes to pull packages from a central store, which could struggle to service very large numbers of requests for data.
• Push deployment — In this mode, you can instruct instance engines to proactively get a specific package before they receive a task that depends on it. This approach allows for rolling upgrades and ensures tasks are not delayed by a package update. One challenge with this method is the possibility that new instances (which can be added at any time) might miss a push message, which means you must keep a list of all currently live packages.

In practice, a combination of these approaches is common. Standard analytics packages are pushed because they're likely to be needed by the majority of tasks. Experimental packages or incremental delta releases are then pulled, perhaps to a smaller set of instances. It might also be necessary to purge deprecated packages, especially if you deploy experimental packages. In this case, you can use a list of live packages to enable your compute instances to purge any packages that are not in the list and thus are not current.

The following figure shows a cloud-native implementation of these approaches. It uses a centralized package store in Amazon Simple Storage Service (Amazon S3), with agents that respond to messages delivered through an Amazon Simple Notification Service (Amazon SNS) topic. After the package is in place on Amazon S3, notifications of new releases can be generated either by an operator or as a final step in an automated build pipeline. Compute instances subscribed to an SNS topic (or to multiple topics for different applications) use these messages as a trigger to retrieve packages from Amazon S3. You can also use the same mechanism to distribute delete messages to remove packages if required.

Data distribution architecture using Amazon SNS messages and S3 Object Storage
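A compute-instance agent for this pattern can be very small. The hedged sketch below assumes release notifications arrive as JSON documents of the form {"action": "deploy" or "delete", "bucket": "...", "key": "..."} on a topic the instance is subscribed to; the message schema, bucket name, and local package directory are illustrative assumptions rather than a defined standard.

```python
# A hedged sketch of a package agent for the push model described above. The message
# schema, bucket, and local package directory are assumptions for illustration only.
import json
import os
import boto3

s3 = boto3.client("s3")
PACKAGE_DIR = "/opt/analytics/packages"     # local cache of analytics binaries

def handle_release_message(raw_message: str) -> None:
    msg = json.loads(raw_message)
    os.makedirs(PACKAGE_DIR, exist_ok=True)
    local_path = os.path.join(PACKAGE_DIR, os.path.basename(msg["key"]))
    if msg["action"] == "deploy":
        # Pull the new package version from the central S3 package store.
        s3.download_file(msg["bucket"], msg["key"], local_path)
    elif msg["action"] == "delete" and os.path.exists(local_path):
        # Purge a deprecated package that is no longer on the live list.
        os.remove(local_path)

if __name__ == "__main__":
    # Example of the body an SNS subscription might deliver after a new release.
    handle_release_message(
        '{"action": "deploy", "bucket": "analytics-packages", '
        '"key": "pricing-lib/2.4.1.tar.gz"}'
    )
```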
The second data distribution challenge in HPC is managing data related to the tasks being processed. Typically this is bi-directional, with data flowing to the engines that support the processing and resulting data passed back to the clients. There are three common approaches for this process:

• In the first approach, communications are inbound (see the following figure), with all data passing through the grid scheduler along with task data. This is less common because it can cause a performance bottleneck as the cluster grows.

An inbound data distribution approach

• In another approach, tasks pass through the scheduler, but the data is handled out of band through a shared, scalable data store or an in-memory data grid (see the following figure). The task data contains a reference to the data's location, and the compute instances can retrieve it as required (a minimal sketch of this pattern follows below).

An out-of-band data distribution approach

• Finally, some schedulers support a direct data transfer (DDT) approach. In this model, the scheduler grid broker allocates compute instances, which then communicate directly with the client. This architecture can work well, especially with very short running tasks with little data. However, in a hybrid model with thousands of engines running on AWS that need to access a single on-premises client, this can present challenges to on-premises firewall rules or to the availability of ephemeral ports on the client host.

DDT (direct data transfer) data distribution approach

All of these approaches can be enhanced with caches located as close as possible to, or hosted on, the compute instances. Such caches help to minimize the distribution of data, especially if a significantly similar set is required for many calculations. Some schedulers support a form of data-aware scheduling that tries to ensure that tasks that require a specific dataset are scheduled to instances that already have that dataset. This cannot be guaranteed, but often provides a significant performance improvement at the cost of local memory or storage on each compute instance. Though the combination of grid schedulers and distributed cache technologies used on premises can provide solutions to these challenges, their capabilities vary and they are not typically engineered for a cloud deployment with highly elastic, ephemeral instances.
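As a concrete illustration of the out-of-band approach, the following minimal sketch has the client write task data to a shared store (Amazon S3 here) and submit a task message that carries only a reference; submit_task() and the bucket name are stand-ins for whichever scheduler API and data store are actually in use.

```python
# A minimal sketch of out-of-band task data distribution: the scheduler only ever sees
# a small message containing a pointer to the data. Bucket name and submit_task() are
# placeholders for the real data store and scheduler API.
import json
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "hpc-task-data"                  # hypothetical shared task-data bucket

def submit_task(task_message: dict) -> None:
    """Stand-in for the grid scheduler's task submission API."""
    print("submitted:", task_message)

def client_submit(portfolio: dict) -> None:
    key = f"inputs/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(portfolio).encode())
    submit_task({"task_type": "price_portfolio", "bucket": BUCKET, "key": key})

def engine_run(task_message: dict) -> None:
    # On the compute node, the engine dereferences the pointer when the task arrives.
    obj = s3.get_object(Bucket=task_message["bucket"], Key=task_message["key"])
    portfolio = json.loads(obj["Body"].read())
    # ... price the portfolio and write results back out of band in the same way ...
```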
You can consider the following AWS services as potential solutions to the typical HPC data management use cases.

Amazon Simple Storage Service (Amazon S3)

Amazon S3 provides virtually unlimited object storage designed for 99.999999999% durability and high availability. For binary packages, it offers both versioning and various immutability features, such as S3 Object Lock, which prevents deletion or replacement of objects and has been assessed by Cohasset Associates for use in environments that are subject to SEC 17a-4, CFTC, and FINRA regulations. Binary immutability is a common audit requirement in regulated industries, which require you to demonstrate that the binaries approved in the testing phase are identical to those used to produce reports. You can include this feature in your deployment pipeline to make sure that the analytics binaries you use in production are the same as those that you validated. This service also offers easy-to-implement encryption and granular access controls.

Some HPC architectures use checkpointing (compute instances save a snapshot of their current state to a datastore) to minimize the computational effort that could be lost if a node fails or is interrupted during processing. For this purpose, a distributed object store such as Amazon S3 might be an ideal solution. Because the data is likely to only be needed for the life of the batch, you can use S3 lifecycle rules to automatically purge these objects after a small number of days to reduce costs.
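The hedged sketch below shows one way such an expiry rule might be applied: a lifecycle configuration that removes objects under a checkpoints/ prefix a few days after they are written. The bucket name, prefix, and retention period are illustrative.

```python
# A hedged sketch: expire batch checkpoint objects a few days after they are written.
# Bucket name, prefix, and retention period are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="hpc-batch-state",                     # hypothetical checkpoint bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-checkpoints",
                "Filter": {"Prefix": "checkpoints/"},
                "Status": "Enabled",
                "Expiration": {"Days": 3},        # purge shortly after the batch
            }
        ]
    },
)
```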
Amazon Elastic File System (Amazon EFS)

Amazon EFS offers shared network storage that is elastic, which means it grows and shrinks as required. Thousands of Amazon EC2 instances can mount EFS volumes at the same time, which enables shared access to common data such as analytics packages. Amazon EFS does not currently support Windows clients.

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the open-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restores, and Microsoft Active Directory integration. It offers single and Multi-Availability Zone deployment options, fully managed backups, and encryption of data at rest and in transit.

Amazon FSx for Lustre

For transient job data, the Amazon FSx for Lustre service provides a high-performance file system that offers sub-millisecond access to data and read/write speeds of up to hundreds of gigabytes per second, with millions of IOPS. Amazon FSx for Lustre can link to an S3 bucket, which makes it easy for clients to write data objects to the bucket (including clients from an on-premises system) and have those objects available to thousands of compute nodes in the cloud (see the following figure). FSx for Lustre is ideal for HPC workloads because it provides a file system that's optimized for the performance and costs of high performance workloads, with file system access across thousands of EC2 instances.

An example of an Amazon FSx for Lustre implementation

Amazon Elastic Block Store (Amazon EBS)

After a compute instance has binary or job data, it might not be possible to keep it in memory, so you might want to keep a copy on a local disk. Amazon EBS offers persistent block storage volumes for Amazon EC2 instances. Though the volumes for compute nodes can be relatively small (10 GB can be sufficient to store a variety of binary package versions and some job data), there might be some benefit to the higher IOPS and throughput offered by the Amazon EBS Provisioned IOPS solid state drive (SSD) volumes. These offer up to 64,000 IOPS per volume and up to 1,000 MB/s of throughput, which can be valuable for workloads that require frequent, high-performance access to these datasets. Because these volumes incur additional cost, you should complete an analysis of whether they provide any additional value over the standard general purpose volumes.

AWS Cloud hosted data providers

AWS Data Exchange makes it easy to find, subscribe to, and use third-party data in the cloud. The catalog includes hundreds of financial services datasets from a wide variety of providers. Once subscribed to a data product, you can use the AWS Data Exchange API to load data directly into S3. The Bloomberg Market Data Feed (B-PIPE) is a managed service providing programmatic access to Bloomberg's complete catalog of content (all the same asset classes as the Bloomberg Terminal). Network connectivity with Bloomberg B-PIPE leverages AWS PrivateLink, exposing the services as a set of local IP addresses within your Amazon Virtual Private Cloud (Amazon VPC) subnet and eliminating DNS issues. B-PIPE services are presented via Network Load Balancers to further simplify the architecture. Additionally, Refinitiv's Elektron Data Platform provides cost-efficient access to global real-time exchange, over-the-counter (OTC), and contributed data. The data is also provided using AWS PrivateLink, allowing simple and secure connectivity from your Virtual Private Cloud (VPC).

Data management and transfer

Although HPC systems in financial services are typically loosely coupled, with limited need for east-west communication between compute instances, there are still significant demands for north-south communication bandwidth between layers in the stack. A key consideration for networking is where in the stack any separation between on-premises systems and cloud-based systems occurs. This is because communication within the AWS network is typically of higher bandwidth and lower cost than communication to external networks. As a result, any architecture that causes hundreds or thousands of compute instances to connect to an external network (particularly if they're requesting the same binaries or task data) would create a bottleneck. Ideally, the fan-out point (the point in the architecture at which large numbers of instances are introduced) is in the cloud. This means that the larger volumes of communication stay in the AWS network, with relatively few connections to on-premises systems.

AWS offers networking services that complement financial services HPC systems. A common starting point is to deploy AWS Direct Connect connections between customer data centers and an AWS Region through a third-party point of presence (PoP) provider. A Direct Connect link offers a consistent and predictable experience, with speeds of up to 100 Gbps.
You can employ multiple diverse Direct Connect links to provide highly resilient, high-bandwidth connectivity.

Though most HPC applications within financial services are loosely coupled, this isn't universal, and there are times when network bandwidth is a significant component of overall performance. The current AWS Nitro-based instances offer up to 100 Gbps of network bandwidth for the largest instance types, such as the c5n.18xlarge, or up to 400 Gbps in the case of the p4d.24xlarge instance. Additionally, a cluster placement group packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for the tightly coupled node-to-node communication that is typical of HPC applications.

The Elastic Fabric Adapter (EFA) service enhances the Elastic Network Adapter (ENA) and is specifically engineered to support tightly coupled HPC workloads that require low-latency communication between instances. An EFA is a virtual network device that can be attached to an Amazon EC2 instance. EFA is suited to workloads using the Message Passing Interface (MPI). EFA may be worthy of consideration for some financial services workloads, such as weather predictions as part of an insurance industry catastrophic event model. EFA traffic that bypasses the operating system (OS bypass) is not routable, so it's limited to a single subnet. As a result, any peers in this network must be in the same subnet and Availability Zone, which could alter resiliency strategies. The OS-bypass capabilities of EFA are also not supported on Windows.

Some Amazon EC2 instance types support jumbo frames, where the network Maximum Transmission Unit (the number of bytes per packet) is increased. AWS supports MTUs of up to 9001 bytes. By using fewer packets to send the same amount of data, end-to-end network performance is improved.

Operations and management

HPC systems are traditionally highly decoupled and resilient to the failure of any given component, with minimal disruption. However, HPC systems in financial services organizations tend to be both mission critical and limited by the capabilities of traditional approaches, such as physical primary and secondary data centers. In this model, HPC teams have to choose between having secondary infrastructure sitting mostly idle in case of the loss of a data center, or using all of the infrastructure on a daily basis but with the possibility of losing up to 50% of that capacity in a disaster event. Some add a third or fourth location to reduce the impact of the loss of a site, but at the cost of an increased likelihood of an outage and network inefficiencies.

When you move to the cloud, you not only open up the availability of new services, but also new approaches to solving these problems. AWS operates a model with Regions and Availability Zones that are always active and offer high levels of availability. By architecting HPC systems for multiple AWS Availability Zones, financial services organizations can benefit from high levels of resiliency and utilization. In the unlikely event of the loss of an Availability Zone, additional instances can be automatically provisioned in the remaining Availability Zones to enable workloads to continue without any loss of data and only a brief interruption in service.

A sample HPC architecture for a Multi-AZ deployment
The high-level architecture in the preceding figure shows the use of multiple Availability Zones and separate subnets for the stateful scheduler infrastructure (including schedulers, brokers, and data stores) and the compute instances. You can base your scheduler instances on long-running Reserved Instances with static IP addresses to help them communicate with on-premises infrastructure by simplifying firewall rules. Conversely, you can base your compute instances on On-Demand Instances or Spot Instances with dynamically allocated IP addresses. Security groups act as a virtual firewall, which you can configure to allow the compute instances to communicate only with scheduler instances.

With the compute instances being inherently ephemeral, and with potentially limited connectivity needs, it can be beneficial to have them sit within separate private address ranges to avoid the need for you to manage demand for, and allocate, IPs from your own pools. This can be achieved either through a secondary CIDR on the VPC or with a separate VPC for the compute infrastructure connected through VPC peering. The majority of AWS services relevant to financial services customers are accessible from within the VPC using AWS PrivateLink, which offers private connectivity to those services and to services hosted by other AWS accounts and supported AWS Marketplace partner solutions. Traffic between your VPC and the service does not leave the Amazon network and is not exposed to the public internet.

One of the keys to effective HPC operations is the metrics you collect and the tools to explore and manipulate them. A common question from end users is "Why is my job slow?" It's important to set up your HPC operation in a way that enables you either to answer that question or to empower users to find the answer for themselves. AWS offers tools you can use to collect metrics and logs at scale. Amazon CloudWatch is a monitoring and management service that not only collects metrics and logs related to AWS services but, through an agent, can also be a target for telemetry from HPC systems and the applications running on them. This provides a valuable central store for your data, allows diverse data sources to be presented on a common time series, and helps you to correlate events when you diagnose issues. You can also use CloudWatch as an auditable record of the calculations that were completed, with the analytics binary versions that were used. You can export these logs to S3 and protect them with the object lock feature for long-term, immutable retention. You may want to use a third-party log analytics tool; many of the most common products have native integrations with Amazon Web Services.
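A minimal sketch of publishing grid telemetry to CloudWatch is shown below, so that queue depth and task latency can sit on the same time series as infrastructure metrics; the namespace, dimensions, and the helper functions that query the scheduler are hypothetical.

```python
# A minimal sketch of publishing grid scheduler telemetry as custom CloudWatch metrics.
# The namespace, dimension values, and the scheduler query helpers are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

def pending_tasks() -> int:
    return 12450          # placeholder for a query against the grid scheduler

def p95_task_runtime_ms() -> float:
    return 840.0          # placeholder for a percentile from recent task history

cloudwatch.put_metric_data(
    Namespace="HPC/Grid",
    MetricData=[
        {"MetricName": "PendingTasks",
         "Dimensions": [{"Name": "Cluster", "Value": "intraday-pricing"}],
         "Value": float(pending_tasks()), "Unit": "Count"},
        {"MetricName": "TaskRuntimeP95",
         "Dimensions": [{"Name": "Cluster", "Value": "intraday-pricing"}],
         "Value": p95_task_runtime_ms(), "Unit": "Milliseconds"},
    ],
)
```

Metrics published this way can drive CloudWatch alarms or scaling policies, for example scaling a compute pool on queue depth rather than CPU load.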
statistics data For this purpose you can use Amazon Relational Database Service (Amazon RDS) which provides costefficient and resizable database capacity while automating administration tasks such as hardware provisioning patching and backups Another common challen ge with shared tenancy HPC systems is the apportioning of cost The ability to provide very granular cost metrics according to usage can drive effective business decisions within financial services The pay as you go pricing model of AWS empowers HPC manag ers and their end customers to realize the benefits from the optimization of the system or its us e AWS tools such as resource tagging and the AWS Cost Explorer can be combined t o provide rich cost data and to build reports that highlight the sources of cost within the system Tags can include details of report types cost centers or other information pertinent to the client organization There’s also an AWS Budgets tool which can be used to create reports and alerts according to consumption When you combine d etailed infrastructure costs with usage statistics you can create granular cost attribution reports Some trades are particularly demanding of HPC capacity to the extent that the business might decide to exit the trade instead of continu ing to support the cost Task scheduling and infrastructure orchestration A high performance computing system needs to achieve two goals : • Scheduling — Encompasses the lifecycle of compute tasks including: capturing and prioriti zing tasks allocating them to the appropriate compute resources and handling failures • Orchestration — Making compute capacity available to satisfy those demands It’s common for financial services organizations to use a third party grid scheduler to coordinate HPC workloads Orchestration is often a slow moving exercise in procurement and physical infrastructure provisioning Traditional schedulers are therefore highly optimized for making lowlatency scheduling decisions to maximize usage of a relative ly fixed set of resources This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 27 As customers migrate to the cloud the dynamics of the problem change s Instead of nearstatic resource orchestration capacity can be scaled to meet the demands at that instant As a result the scheduler doesn’t need to reason about which task to schedule next but rather just inform the orchestrator that additional capacity is needed Table 2 — Task scheduling and infrastructure orchestration approaches HPC hosting Task scheduling approach Infrastructure orchestration approach OnPremises Rapid task scheduling decisions to manage prioritization and maximize utilization while minimizing queue times Static a procurement and physical provisioning process run over weeks or months Cloud based Focus on managing the task lifecycle decisions around prioritization and queue times are minimized by dynamic orchestration Highly dynamic capacity on demand with ‘pay as you go’ pricing Optimized for cost and performance through selection of instance type and procurement model When you plan a migration a valid option is to migrate the on premises solution first and the n consider optimizations For example an initial ‘lift and shift’ implementation might use Amazon EC2 OnDemand Instances to provision capacity which yields some immediate benefits from elasticity 
Some of the commercial schedulers also have integration s with AWS which enable them to add and remove nodes according to demand When you are comfortable with running c ritical workloads on AWS you can further optimize your implementation with options such as using more native services for data management capacity provisioning and orchestration Ultimately the scheduler might be in scope for replacement at which poin t you can consider a few different approaches Though financial services workloads are often composed of very large volumes of relatively short running calculations there are some cases where longer running calculations need to be scheduled In these situ ations AWS Batch could be a viable alternative or a complementary service AWS Batch plans schedules and runs batch workloads while dynamically provisioning compute resources using containers You can configure parallel computation and job dependencies to allow for workloads where the results of one job are used by another AWS Batch is offered at no additional ch arge; only the AWS resources it consumes generate costs This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 28 Customers looking to simplify their architecture might consider a queue based architecture in which clients submit tasks to a stateful queue This can then be service d by an elastic group of hungry w orker processes that take pending workloads process them and then return results The Amazon SQS can be used for this purpose Amazon SQS is a fully managed message queuing service that is ideal for this type of decoupled architecture As a serverless offering it reduces the administrative burden of infrastructure management and offers seamless elastic scaling A simple HPC approach with Amazon SQS Amazon SQS queues can be service d by groups of Amazon EC2 instances that are managed by AWS Auto Scaling groups You can configure the AWS Auto Scaling groups to scale capacity up or down based on metrics such as average CPU load or the depth of the queue AWS Auto Scaling groups can also incorporate provisioning strategies that can combine Amazon EC2 On Demand Instances or Spot Instances to provide flexible and low cost capacity With serverless queuing provided by Amazon SQS it’s logical to think about serverless compute capacity With AWS Lambda you can run code without provisioning or managing any servers This function asaservice product allows you to only pay for the computation time you consume You can also configure Lambda to process workloads from SQS scaling out horizon tally to consume messages in a queue Lambda attempt s to process the items from the queue as quickly as possible and is constrained only by the maximum concurrency allowed by the account memory and runtime limits In 2020 these limits were increased significantly You can now allocate up to 10GB of memory and six vCPUs to your functions which also have support for the AVX2 instruction set This makes Lambda functions suitable for an even wider range of HPC applications This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 29 A serverless event driven approach to HPC Taking these concepts further the blog post 
Decoupled Serverless Scheduler To Run HPC Applications At Scale on EC2 describes a decoupled serverless HPC scheduler which can run on hundreds of thousands of cores using EC2 Spot Instances The following figure shows a cloud native serverless HPC scheduling architecture A cloudnative serverless scheduler architecture When you explore these alternative cloud native approaches especially in comparison to established schedulers it’s important to consider all of the features required to run This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 30 what can be a critical system Metrics gathering data management and management tooling are only some of the typical requirements that must be addressed and should not be overlooked A key benefit of running HPC workloads on AWS is the flexibility of the offerings that enable you to combine various solutions to meet very specific needs An HPC architect can use Amazon EC2 Reserved Instances for long runnin g stateful hosts You can use Amazon EC2 OnDemand Instances for long running tasks or to secure capacity at the start of a batch Additionally you can provision Amazon EC2 Spot Instances to try to deliver a batch more quickly and at lower cost Some wo rkloads can then be directed to alternative platforms such as GPU enabled instances or Lambda functions You can optimize t he overall mix of these options on a regular basis to adapt to the changing needs of your business Security and compliance The approach to security in HPC systems running in the cloud is often different from other applications This is because of the ephemeral and stateless nature of the majority of the resources Issues of patching inventory tooling or human access can be eliminated because of the short lived nature of the resources • Patching – When you use a pre patched AMI the host is in a known compliant state at startup If a relatively short limit is placed on the life of the instance it’s likely that this approach wi ll meet all necessary patching standards Additionally AWS Systems Manager Patch Manager can be used to automate the process of patching managed instan ces if necessary • Inventory tooling – Onpremises hosts typically interact with compliance and inventory systems In the AWS Cloud controls around the instance image and the delivery of binaries mean that instances remain in a known state and can be progr ammatically audited so these historic controls might not be necessary Additionally b ecause h ighly scalable and elastic resources can put excessive load on such systems fully managed cloud based solutions such as AWS CloudTrail might provide a more suitable alternative • Root access – When you enable all debugging through centralized metrics and automated reporting you c an mandate zero access to the compute nodes Without any root access you can avoid key rotation and access control issues When you consider migrating to the cloud an important early step is to decide which internal tools and processes (if any) need to be replicated in the cloud Amazon EC2 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 31 instances that are unencumbered by tooling tend to start up more 
Because of the stateless nature of the workloads, there is often little need to store data for long periods, particularly when the job data isn't especially sensitive, doesn't include personally identifying information (PII), and largely consists of public market datasets. Regardless, encryption by default is easy to implement across a wide range of AWS services. Binary analytics packages often contain proprietary code that has intellectual value; financial services organizations typically encrypt these binaries while in transit and use built-in AWS tools to ensure they're encrypted while at rest in AWS storage. If compute instances are configured for minimal or no access, the risk of exfiltration while the binaries are in memory is minimized.

AWS has a wide range of certifications and attestations relevant to financial services and other industries. For full details of AWS certifications, see AWS Compliance. Before you design secure systems in AWS, review the Shared Responsibility Model to make sure you understand the respective areas of responsibility for AWS and the customer.

Figure: The AWS Shared Responsibility Model

This model is complemented by an extensive suite of tools and services to help you be secure in the cloud. For more detailed information, review the AWS Well-Architected Framework Security Pillar. One service of particular interest to HPC applications is AWS Identity and Access Management (AWS IAM), which provides fine-grained access control across all of the AWS services included in this paper. IAM also offers integration with your existing identity providers through identity federation. Interactions with the AWS APIs can be tracked with AWS CloudTrail, a service that enables governance and auditing across the AWS account. This event history simplifies security analysis, tracking of changes to resources, and troubleshooting.

Encryption by default is becoming increasingly common within financial services, and many AWS services now offer simple encryption features that integrate with AWS Key Management Service (AWS KMS). This service makes it easy for you to create and manage keys that can be used across a wide variety of AWS services. For HPC applications, keys managed by AWS KMS might be used to encrypt AMIs or S3 buckets that contain analytics binaries, or to encrypt data stored in the Parameter Store. AWS KMS uses FIPS 140-2 validated hardware security modules (HSMs) to generate and protect customer keys; the keys never leave these devices unencrypted. Customers with specific internal or external rules regarding HSMs can choose AWS CloudHSM, which is a fully managed, FIPS 140-2 Level 3 validated HSM cluster with dedicated single-tenant access.
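As a brief sketch of that KMS integration, the following example encrypts an analytics binary at rest in Amazon S3 with SSE-KMS and stores a grid configuration value as an encrypted SecureString in Parameter Store. The bucket name, key alias, file name, and parameter name are hypothetical.

```python
import boto3

REGION = "eu-west-1"
KMS_KEY_ALIAS = "alias/hpc-binaries"  # hypothetical customer-managed KMS key

s3 = boto3.client("s3", region_name=REGION)
ssm = boto3.client("ssm", region_name=REGION)

# Encrypt a proprietary analytics binary at rest with SSE-KMS.
with open("pricing-model-v42.bin", "rb") as binary:
    s3.put_object(
        Bucket="example-hpc-binaries",
        Key="packages/pricing-model-v42.bin",
        Body=binary,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId=KMS_KEY_ALIAS,
    )

# Store grid configuration as an encrypted SecureString in Parameter Store.
ssm.put_parameter(
    Name="/grid/prod/market-data-endpoint",
    Value="https://marketdata.example.internal",
    Type="SecureString",
    KeyId=KMS_KEY_ALIAS,
    Overwrite=True,
)
```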
Migration approaches, patterns, and antipatterns

Many financial services organizations already have some form of HPC environment hosted in an on-premises data center. If you're migrating from such an implementation, it's important to consider what might be the best method to complete the migration. The optimal approach depends on the desired outcome, risk appetite, and timescale, but typically begins with one of the 6 Rs: Rehosting, Replatforming, Repurchasing, Refactoring/Rearchitecting, and (to a lesser degree) Retiring or Retaining (revisiting).

HPC cloud migrations typically progress through three stages. The nuances and timings of each stage depend on the individual businesses involved.

The first stage is bursting capacity. In this mode, very little changes with the existing on-premises HPC environment. However, at times of peak demand, Amazon EC2 instances can be created and added to the system to provide additional capacity. The trigger for the creation of these instances is usually one of the following:

• Scheduled – If workloads are predictable in terms of timing and scale, then a simple schedule to add and remove a fixed number of hosts at predefined times can be effective. The schedule can be managed by an on-premises system or with Amazon EventBridge rules.

• Demand-based – In this mode, a component can monitor the performance of workloads and add or remove capacity based on demand. If a task queue starts to increase, additional instances can be requested through the AWS API, and if the queue decreases, some instances can be removed. (A minimal demand-based trigger is sketched after this list.)

• Predictive – In some cases, especially when the startup time for a new instance is long (perhaps because of very large package dependencies or complex OS builds), it might be desirable to use a simple machine learning model to analyze historic demand and determine when to bring capacity online. This approach is rare, but can work well when combined with a demand-based approach.
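The sketch below illustrates one possible demand-based trigger. It assumes the scheduler's backlog can be read from an Amazon SQS queue and that the compute nodes sit in an Auto Scaling group; the queue URL, group name, and sizing constants are assumptions made for illustration, as most commercial schedulers expose their own queue-depth metrics instead.

```python
import boto3

# Hypothetical names used only for illustration.
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/grid-task-queue"
ASG_NAME = "hpc-compute-fleet"
TASKS_PER_INSTANCE = 8    # engines (slots) per compute instance
MAX_INSTANCES = 2000      # must stay within the group's configured maximum

sqs = boto3.client("sqs", region_name="eu-west-1")
autoscaling = boto3.client("autoscaling", region_name="eu-west-1")


def scale_to_demand():
    """Size the compute fleet to the current task backlog."""
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

    # Ceiling division: enough instances to drain the backlog, capped at the maximum.
    desired = min(-(-backlog // TASKS_PER_INSTANCE), MAX_INSTANCES)

    autoscaling.set_desired_capacity(
        AutoScalingGroupName=ASG_NAME,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )
    return backlog, desired


if __name__ == "__main__":
    backlog, desired = scale_to_demand()
    print(f"Backlog: {backlog} tasks -> desired capacity: {desired} instances")
```

In practice such a function might run on a schedule (for example, an Amazon EventBridge rule invoking an AWS Lambda function every minute) and would also need to scale in gracefully, draining running tasks before instances are terminated.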
As customers build confidence in their ability to supplement existing capacity with cloud-based instances, they often make a decision to complete a migration. However, with existing on-premises hardware still available, customers want to keep the value of that infrastructure before it can be decommissioned. In this case it can make sense to provision a new strategic grid, with all of the same scheduler components, in the cloud and retain the existing on-premises grid. It's then left to the upstream clients to direct workloads accordingly, switching to the cloud-based grid as the on-premises capacity is gradually retired. When you have completed the migration and are running all of your HPC workloads in the cloud, the on-premises infrastructure can be removed. At this point you have completed a Rehosting approach.

When your infrastructure is in the cloud, you then have the flexibility to look at Replatforming or Refactoring your environment. The ability to build entirely new architectures in the cloud alongside existing production systems means that new approaches can be fully tested before they're put into production.

One antipattern that's occasionally proposed by customers involves platform stacking. In this approach, solutions such as virtualization and/or container platforms are placed under the HPC platform to try to create portability or parity between cloud-based systems and on-premises systems. This approach can have some disadvantages:

• Computational inefficiency – By adding more layers between the analytics binaries and the CPUs, computational efficiency is inevitably degraded as CPU cycles are consumed by the abstraction layers.

• Licensing costs – HPC environments are large and continue to grow. Though enterprise licenses can keep the upfront costs of using these technologies very low, the large number of CPU cores involved in HPC workloads can mean significant additional costs when the licenses are due for renewal.

• Management overhead – In the simplest approach, an Amazon EC2 instance can be created on demand using an Amazon Linux 2 AMI. This AMI is patched and up to date, and because it exists for just a few hours, it requires no further management. However, by building HPC stacks on top of other abstractions, those long-running layers need patching and upgrading, and when multiple layers are involved, the scope for disruption through planned maintenance or an unplanned outage increases significantly.

• Scaling challenges – Amazon EC2 instances can be available very quickly on demand. If scaling out involves the creation of a complex stack before processes can run, this adds to the billing time of the instance before useful work can be done. In worst-case scenarios, there can be a temptation to leave large numbers of instances running so that they're available if additional workloads arise.

• Optimization challenges – HPC systems are already complex, especially when supporting huge volumes of variable workloads with different CPU and memory requirements. Knowing where CPU and memory resources are consumed is vital to identifying bottlenecks or debugging failures. If an HPC platform is based on a series of abstraction layers, this can introduce additional variables that make it difficult to see where inefficiencies exist, and as a result they might never be found.

• Security challenges – Securing a more complex stack can be challenging because there are more components to configure, monitor, and maintain to ensure the integrity of the system.

By defining portability in terms of a virtual machine image or a Docker image, you can find a good balance of portability while offsetting some of the disadvantages through the use of cloud-native virtualization with Amazon EC2 and/or container management solutions such as Amazon ECS and Amazon EKS, especially when combined with AWS Fargate. Keeping HPC systems as simple as possible provides the best performance at the lowest cost. Most HPC solutions are already platforms by design and offer portability through simple deployment patterns to standard operating systems.

Conclusion

AWS has a long history of helping customers from various industries, including financial services, to optimize their HPC workloads. This experience over many years, from customers with diverse requirements, has directly contributed to the products and services offered today and will continue to do so.

AWS regularly accommodates very large scale requests for Amazon EC2 instances. Some of these clusters are large enough to be recognized among the world's largest supercomputers. For example, a group of researchers from Clemson University created a high performance cluster on the AWS Cloud using more than 1.1 million vCPUs on Amazon EC2 Spot Instances running in a single AWS Region. This cluster was used to study how human language is processed by computers, by analyzing over 500,000 documents.
AWS also partnered with TIBCO to demonstrate the creation of a 1.3 million vCPU grid on AWS using Amazon EC2 Spot Instances. They were able to secure 61,299 instances in total for the test, which ran sample calculations based on the Strata open-source analytics and market risk library from OpenGamma and was set up with their assistance. TIBCO now offers their DataSynapse GridServer Manager scheduler via the AWS Marketplace as a pay-as-you-go offering.

The PathWise HPC solution from professional services firm Aon allows (re)insurers and pension funds to rapidly solve key insurance challenges. The platform relies upon cloud compute capacity from AWS and recently moved to Amazon EC2 P3 instances powered by NVIDIA V100 Tensor Core GPUs. These GPUs enable PathWise to run immense calculations in parallel, completing in seconds or minutes analysis that can take days or weeks in traditional systems. Standard Chartered cut their grid costs by 60% by leveraging Amazon EC2 Spot Instances, and recently DBS Bank shared their architecture for a scalable, serverless compute grid based on AWS technologies.

HPC platforms are crucial enablers for many different types of financial services organizations, including capital markets, insurance, banking, and payments. However, as demands on these platforms increase as a result of regulatory requirements, it's clear that the traditional approaches to provisioning HPC infrastructure are inefficient and ultimately unsustainable. Constraints on capital and capital expenditure further compound the challenge. By migrating these systems to AWS, customers benefit not only from a wide variety of compute instances and relevant services, but also from a fundamental change in the delivery of compute capacity. This new approach offers tremendous flexibility, both in terms of the management of workloads that vary day to day and in the overall approach to cost optimization, security, availability, and operations.

HPC workloads already have much in common with stateless function-as-a-service architectural patterns. Just as financial services moved from local calculations to clusters and into grids, they are starting to explore decentralized, serverless approaches. As scaling becomes transparent, bottlenecks will continue to be removed until processing becomes near real time.

If you face scale, cost, and capacity challenges in managing a high performance computing system today, AWS has a number of services and partner relationships that can help. To learn more, you can contact AWS Financial Services through the AWS Financial Services – Contact Sales form.

Contributors

Contributors to this document include:

• Alex Kimber, Solutions Architect, Global Financial Services, Amazon Web Services
• Richard Nicholson, Solutions Architect, Global Financial Services, Amazon Web Services
• Carlos Manzanedo Rueda, Specialist Solutions Architect, Amazon Web Services
• Ian Meyers, Solutions Architect, Head of Technology, Amazon Web Services

Further reading

For additional information, see:

• AWS Well-Architected Framework
• AWS Well-Architected Framework – HPC Lens
• AWS Well-Architected Framework – Financial Services Industry Lens
• AWS HPC Blog

Glossary of terms

The following are the definitions for the terms that appear throughout this document.

Binary package – A set of binaries that run tasks. A typical HPC environment can support multiple packages of various versions running in parallel. The package and version required are defined by the client or risk system at the point of job submission. These packages typically contain proprietary models that are built by the firm's quantitative analysis teams (quants) and are often the subject of intellectual property concerns, as they can form competitive differentiation.

Broker – A component of a typical HPC/grid platform. The broker is typically responsible for coordinating tasks and/or client connections to compute instances. As grids and task volumes grow, the number of brokers is typically scaled out to ensure throughput can be maintained.

Client – A software system, accessed by a user, that generates job requests and presents results. In financial services this is generally some form of risk management system (RMS).

Engine – A software component responsible for invoking the calculation of a task using a given binary package. A compute instance can run multiple engines in parallel, perhaps one or more within each slot.

Grid controller – A component of a typical HPC/grid platform. The controller is responsible for tracking the state of compute instances and brokers, and for hosting API or GUI interfaces and metrics. The controller host is generally not involved in the scheduling of individual tasks.

Instance – An Amazon EC2 virtual server. Each instance has a number of available virtual CPUs (vCPUs) and an allocation of memory.

Job (or session) – The definition of a series of one or more related tasks. For example, a job might define a series of scenarios and how they are subdivided into a set of tasks.

Job data – The set of data that is required in addition to the task metadata. Typically, job data is passed to the compute instance as a reference, bypassing the scheduler itself. In investment banking applications, job data is generally a combination of static reference data (such as holiday calendars used to calculate trade expiration dates), market data (used to build the market environment), and trade data (referencing the trade or portfolio of trades which are the focus of the calculation).

Quantitative analysts / Quants – The team associated with the development of mathematical models to predict the behavior of financial products.

Risk management system (RMS) – To improve oversight of risk calculations, centralize operations, and improve efficiency, financial services firms are increasingly leveraging risk management systems to sit between the users and the HPC platform.

Scheduler / Grid scheduler – A software component responsible for managing the lifecycle of tasks through receipt, allocation to compute instances, collection of results and metrics, and management processes.

Slot – A unit of compute currency used to approximate homogeneity within a heterogeneous compute environment. For example, a slot might be defined as two CPU cores and 8 GB of RAM, and would be considered interchangeable regardless of whether the compute instance was able to provide two or 32 slots.
Task – A unit of work to be scheduled to a compute instance. A task can define external dependencies (such as market and reference data). In recursive workload patterns, a parent task can spawn a child job or a series of other child tasks.

Thread – An engine runs either single-threaded or multi-threaded processes. Ideally, each thread runs on a separate vCPU to minimize the overhead of CPU context switching.

User – In financial services, a user is typically a member of the front office: either a trader managing positions, or a desk head who wants oversight and ensures successful internal or external reporting is completed.

Document versions

• August 24, 2021 – Updates to reflect AWS service improvements, more modern and inclusive terminology, and new cloud-native architectures
• September 2019 – Updates to services, diagrams, and topology
• January 2016 – Updates to services and topology
• January 2015 – Initial publication
Using AWS in the Context of Malaysian Privacy Considerations Published April 2014 Updated December 22 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assura nces from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Overview 1 Scope 1 Customer Content: Considerations relevant to privacy and data protection 2 AWS shared responsibility approach to managing cloud security 2 Understanding security OF the cloud 4 Understanding security IN the cloud 4 AWS Regions: Where will content be stored? 5 Selecting AWS Global Regions in the AWS Management Console 6 Transfer of personal data cross border 7 Who can access customer content? 8 Customer control over content 8 AWS access to customer content 8 Government rights of access 8 AWS policy on granting government access 9 Privacy and Data Protection in Malaysia: The PDPA 10 Privacy breaches 14 Other considerat ions 15 Closing remarks 15 Additional resources 15 Further reading 15 Document history 16 Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 1 Overview This doc ument provides information to assist customers who want to use AWS to store or process content containing personal data in the context of key Malaysia privacy considerations and the Personal Data Protection Act 2010 (“ PDPA ”) It will help customers understand: • How AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS services Scope This whitepaper focuses on typical questions asked by AWS customers when they are considering the implications of the PDPA on their use of AWS services to store or process content cont aining personal data There will also be other relevant considerations for each customer to address for example a customer may need to comply with industry specific requirements the laws of other jurisdictions where that customer conducts business or c ontractual commitments a customer makes to a third party This paper is provided solely for informational purposes It is not legal advice and should not be relied on as legal advice As each customer’s requirements will differ AWS strongly encourages cu stomers to obtain appropriate advice on their implementation of privacy and data protection requirements and on applicable laws and other requirements relevant to their business When we refer to content in this paper we mean software (including virtual machine images) data text audio video images and other content that a customer or any end user stores or processes using AWS services For example a customer’s can content include objects that the customer stores using Amazon Simple Storage Servic e (Amazon S3) files stored on an Amazon Elastic Block Store 
(Amazon EBS) volume or the contents of an Amazon DynamoDB database table Such content may but will not necessarily include personal data relating to that customer its end users or third par ties The terms of the AWS Customer Agreement or any other relevant agreement with us governing the use of AWS services apply to customer content Customer content does not include data that a customer provides to us in connection with the creation or ad ministration of its AWS accounts such as a customer’s names phone numbers email addresses and billing information —we refer to this as account information and it is governed by the AWS Privacy Notice Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 2 Customer Content: Considerations relevant to privacy and data protection Storage of content presents all organizations with a number of common practical matters to consider including: • Will the content be secure? • Where will content be stored? • Who will have access to content? • What laws and regulations apply to the content and what is needed to comply with these ? These considerations are not new and are not cloud specific They are relevant to internally hosted and operated systems as well as traditional third party hosted services Each may involve storage of content on third party equipment or on third party premises with that content managed accessed or used by third party personnel When using AWS services each AWS customer maintains ownership and control of their content including control over: • What content they choose to store or process using AWS services • Which AWS services they use with their content • The Region(s) where their content is stored • The format structure and security of their content including whether it is masked anonymized or encrypted • Who has access to their AWS accounts and content and how those access rights are granted managed and revoked Because AWS customers retain ownership and control over their content within the AWS environment they also retain responsibilities rel ating to the security of that content as part of the AWS Shared Responsibility Model This model is fundamental to understanding the respective roles of the customer and AWS in the context of privacy and data protection requirements that may apply to content that customers choose to store or process using AWS services AWS shared responsibility approach to managing cloud security Will customer content be secure? 
Moving IT infrastructure to AWS creates a shared responsibility model between the customer and AWS as both the customer and AWS have important roles in the operation and management of security AWS operates manages and controls the components from the host operating system and virtualization layer Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 3 down to the physical security of the facilitie s in which the AWS services operate The customer is responsible for management of the guest operating system (including updates and security patches to the guest operating system) and associated application software as well as the configuration of the AWSprovided security group firewall and other security related features The customer will generally connect to the AWS environment through services the customer acquires from third parties (for example internet service providers) AWS does not provide the se connections and they are therefore part of the customer's area of responsibility Customers should consider the security of these connections and the security responsibilities of such third parties in relation to their systems The respective roles of the customer and AWS in the shared responsibility model are shown below: Figure 1: Shared Responsibility Model What does the shared responsibility model mean for the security of customer content? When eva luating the security of a cloud solution it is important for customers to understand and distinguish between: • Security measures that the cloud service provider (AWS) implements and operates – “security of the cloud” • Security measures that the customer implements and operates related to the security of customer content and applications that make use of AWS services – “security in the cloud” While AWS manages security of the cloud security in the cloud is the responsibility of the customer as customers retain control of what security they choose to implement to protect their own content Amazon Web Services Using AWS in t he Context of Malaysian Privacy Considerations 4 applications systems and networks – no differently than they would for applications in an on site data center Understanding security OF the cloud AWS is responsible f or managing the security of the underlying cloud environment The AWS cloud infrastructure has been architected to be one of the most flexible and secure cloud computing environments available designed to provide optimum availability while providing compl ete customer segregation It provides extremely scalable highly reliable services that enable customers to deploy applications and content quickly and securely at massive global scale if necessary AWS services are content agnostic in that they offer th e same high level of security to all customers regardless of the type of content being stored or the geographical region in which they store their content AWS’s world class highly secure data centers utilize state ofthe art electronic surveillance and multi factor access control systems Data centers are staffed 24x7 by trained security guards and access is authorized strictly on a least privileged basis For a complete list of all the security measures built into the core AWS Cloud infrastructure an d services see Best Practices for Security Identity & Compliance We are vigilant about our customers' security and have implemented sophisticated technical and physical me asures against unauthorized access Customers can validate the security controls in place within the AWS environment through AWS 
certifications and reports. These include the AWS System & Organization Control (SOC) 1, 2, and 3 reports; ISO 27001, 27017, 27018, and 9001 certifications; and PCI DSS compliance reports. Our ISO 27018 certification demonstrates that AWS has a system of controls in place that specifically address the privacy protection of customer content. These reports and certifications are produced by independent third-party auditors and attest to the design and operating effectiveness of AWS security controls. AWS compliance certifications and reports can be requested at https://pages.awscloud.com/compliance-contact-us.html. More information on AWS compliance certifications, reports, and alignment with best practices and standards can be found at AWS Compliance.

Understanding security IN the cloud

Customers retain ownership and control of their content when using AWS services. Customers, rather than AWS, determine what content they store or process using AWS services. Because it is the customer who decides what content to store or process using AWS services, only the customer can determine what level of security is appropriate for the content they store and process using AWS. Customers also have complete control over which services they use and whom they empower to access their content and services, including what credentials will be required.

Customers control how they configure their environments and secure their content, including whether they encrypt their content (at rest and in transit) and what other security features and tools they use and how they use them. AWS does not change customer configuration settings, as these settings are determined and controlled by the customer. AWS customers have complete freedom to design their security architecture to meet their compliance needs. This is a key difference from traditional hosting solutions, where the provider decides on the architecture. AWS enables and empowers the customer to decide when and how security measures will be implemented in the cloud, in accordance with each customer's business needs. For example, if a higher availability architecture is required to protect customer content, the customer may add redundant systems, backups, locations, network uplinks, and so on, to create a more resilient, high availability architecture. If restricted access to customer content is required, AWS enables the customer to implement access rights management controls, both on a systems level and through encryption on a data level.

To assist customers in designing, implementing, and operating their own secure AWS environment, AWS provides a wide selection of security tools and features customers can use. Customers can also use their own security tools and controls, including a wide variety of third-party security solutions. Customers can configure their AWS services to leverage a range of such security features, tools, and controls to protect their content, including sophisticated identity and access management tools, security capabilities, encryption, and network security. Examples of steps customers can take to help secure their content include implementing:

• Strong password policies, enabling Multi-Factor Authentication (MFA), assigning appropriate permissions to users, and taking robust steps to protect their access keys
• Appropriate firewalls and network segmentation, encrypting content, and properly architecting systems to decrease the risk of data loss and unauthorized access
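As one illustrative example of the first bullet, the sketch below uses boto3 to create an IAM policy that denies requests that were not authenticated with MFA. The policy name is hypothetical, and in practice such a guardrail is usually scoped more narrowly and attached to specific groups or roles rather than applied to every action.

```python
import json
import boto3

iam = boto3.client("iam")

# Example guardrail: deny every action unless the caller signed in with MFA.
deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllWithoutMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

iam.create_policy(
    PolicyName="RequireMFAForAllActions",  # hypothetical name
    PolicyDocument=json.dumps(deny_without_mfa),
    Description="Deny requests that were not authenticated with MFA",
)
```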
Because customers, rather than AWS, control these important factors, customers retain responsibility for their choices and for the security of the content they store or process using AWS services, or that they connect to their AWS infrastructure, such as the guest operating system, applications on their compute instances, and content stored and processed in AWS storage, databases, or other services. AWS provides an advanced set of access, encryption, and logging features to help customers manage their content effectively, including AWS Key Management Service and AWS CloudTrail.

To assist customers in integrating AWS security controls into their existing control frameworks, and to help customers design and execute security assessments of their organization's use of AWS services, AWS publishes a number of whitepapers relating to security, governance, risk, and compliance, as well as a number of checklists and best practices. Customers are also free to design and execute security assessments according to their own preferences, and can request permission to conduct scans of their cloud infrastructure, as long as those scans are limited to the customer's compute instances and do not violate the AWS Acceptable Use Policy.

AWS Regions: Where will content be stored?

AWS data centers are built in clusters in various global regions. We refer to each of our data center clusters in a given country as an "AWS Region."

Customers have access to a number of AWS Regions around the world.¹ Customers can choose to use one Region, all Regions, or any combination of AWS Regions. For a list of AWS Regions and a real-time location map, see Global Infrastructure.

AWS customers choose the AWS Region or Regions in which their content and servers will be located. This allows customers with geographic-specific requirements to establish environments in a location or locations of their choice. For example, AWS customers in Malaysia can choose to deploy their AWS services exclusively in one AWS Region, such as the Asia Pacific (Singapore) Region, and store their content in Singapore if this is their preferred location. If the customer makes this choice, AWS will not move their content from Singapore without the customer's consent, except as legally required. Customers always retain control of which AWS Region(s) are used to store and process content. AWS only stores and processes each customer's content in the AWS Region(s) and using the services chosen by the customer, and otherwise will not move customer content without the customer's consent, except as legally required.

Selecting AWS Global Regions in the AWS Management Console

The AWS Management Console gives customers secure login using their AWS or IAM account credentials. When using the AWS Management Console, or when placing a request through an AWS Application Programming Interface (API), the customer identifies the particular AWS Region(s) where it wishes to use AWS services. The figure below provides an example of the AWS Region selection menu presented to customers when uploading content to an AWS storage service or provisioning compute resources using the AWS Management Console. Any compute and other resources launched by the customer will be located in the AWS Region designated by the customer. For example, when a customer chooses the Asia Pacific (Singapore) Region, compute resources such as Amazon EC2 instances or AWS Lambda functions launched in that environment reside only in the Asia Pacific (Singapore) Region. This option can also be leveraged for other AWS Regions.

¹ AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements.
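To illustrate the same Region selection programmatically rather than through the console, the following hedged sketch pins both storage and compute clients to the Asia Pacific (Singapore) Region; the bucket name is hypothetical.

```python
import boto3

REGION = "ap-southeast-1"  # Asia Pacific (Singapore)

# Content uploaded to this bucket is stored in the Singapore Region.
s3 = boto3.client("s3", region_name=REGION)
s3.create_bucket(
    Bucket="example-sg-customer-content",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Resources created through Region-scoped clients such as these are
# likewise launched only in the chosen Region.
ec2 = boto3.client("ec2", region_name=REGION)
lambda_client = boto3.client("lambda", region_name=REGION)
```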
Transfer of personal data cross border

In 2016, the European Union approved and adopted the General Data Protection Regulation (GDPR). The GDPR replaced the EU Data Protection Directive, as well as all local laws relating to it. All AWS services comply with the GDPR. AWS provides customers with services and resources to help them comply with GDPR requirements that may apply to their operations. These include AWS's adherence to the CISPE code of conduct, granular data access controls, monitoring and logging tools, encryption, key management, audit capability, adherence to IT security standards, and AWS's C5 attestations. For additional information, see the AWS General Data Protection Regulation (GDPR) Center and the Navigating GDPR Compliance on AWS whitepaper.

When using AWS services, customers may choose to transfer content containing personal data cross border, and they will need to consider the legal requirements that apply to such transfers. AWS provides a Data Processing Addendum that includes the Standard Contractual Clauses 2010/87/EU (often referred to as "Model Clauses") to AWS customers transferring content containing personal data (as defined in the GDPR) from the EU to a country outside of the European Economic Area (EEA), such as Singapore. With our EU Data Processing Addendum and Model Clauses, AWS customers, whether established in Europe or operating globally within the European Economic Area, can continue to run their global operations using AWS in full compliance with the GDPR. The AWS Data Processing Addendum is incorporated in the AWS Service Terms and applies automatically to the extent the GDPR applies to the customer's processing of personal data on AWS.

Who can access customer content?
Customer control over content Customers using AWS maintain and do not release effective c ontrol over their content within the AWS environment They can: • Determine where their content will be located for example the type of storage they use on AWS and the geographic location (by AWS Region) of that storage • Control the format structure and sec urity of their content including whether it is masked anonymized or encrypted AWS offers customers options to implement strong encryption for their customer content in transit or at rest and also provides customers with the option to manage their own e ncryption keys or use third party encryption mechanisms of their choice • Manage identity and access management controls to their content such as by using AWS Identity and Access Management (IAM) and by setting appropriate permissions and security credenti als to access their AWS environment and content This allows AWS customers to control the entire life cycle of their content on AWS and manage their content in accordance with their own specific needs including content classification access control retention and deletion AWS access to customer content AWS makes available to each customer the compute storage database networking or other services as described on our website Customers have a number of options to encrypt their content when using the services including using AWS encryption features (such as AWS Key Management Service) managing their own encryption keys or using a third party encryption mechanism of their own choice AWS does not access or use customer content without the customer’s consent except as legally required AWS never uses customer content or derives information from it for other purposes such as marketing or advertising Government rights of access Queries are often raised about the rights of domestic and foreign governmen t agencies to access content held in cloud services Customers are often confused about issues of data sovereignty including whether and in what circumstances governments may have access to their content Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 9 The local laws that apply in the jurisdiction wher e the content is located are an important consideration for some customers However customers also need to consider whether laws in other jurisdictions may apply to them Customers should seek advice to understand the application of relevant laws to their business and operations When concerns or questions are raised about the rights of domestic or foreign governments to seek access to content stored in the cloud it is important to understand that relevant government bodies may have rights to issue reques ts for such content under laws that already apply to the customer For example a company doing business in Country X could be subject to a legal request for information even if the content is stored in Country Y Typically a government agency seeking acc ess to the data of an entity will address any request for information directly to that entity rather than to the cloud provider Most countries have legislation that enables law enforcement and government security bodies to seek access to information In f act most countries have processes (including Mutual Legal Assistance Treaties) to enable the transfer of information to other countries in response to appropriate legal requests for information (eg relating to criminal acts) However it is important to remember that each relevant law will contain criteria that must be satisfied in order 
for the relevant law enforcement body to make a valid request For example the government agency seeking access may need to show it has a valid reason for requiring a p arty to provide access to content and may need to obtain a court order or warrant Many countries have data access laws which purport to apply extraterritorially An example of a US law with extra territorial reach that is often mentioned in the context of cloud services is the US Patriot Act The Patriot Act is similar to laws in other developed nations that enable governments to obtain information with respect to investigations relating to international terrorism and other foreign intelligence issues Any request for documents under the Patriot Act requires a court order demonstrating that the request complies with the law including for example that the request is related to legitimate investigations The Patriot Act generally applies to all compan ies with an operation in the US irrespective of where they are incorporated and/or operating globally and irrespective of whether the information is stored in the cloud in an on site data center or in physical records This means that companies headqua rtered or operating outside the United States which also do business in the United States may find they are subject to the Patriot Act by reason of their own business operations AWS policy on granting government access AWS is vigilant about customers' s ecurity and does not disclose or move data in response to a request from the US or other government unless legally required to do so in order to comply with a legally valid and binding order such as a subpoena or a court order or as is otherwise requir ed by applicable law Non US governmental or regulatory bodies typically must use recognized international processes such as Mutual Legal Assistance Treaties with the US government to obtain valid and binding orders Additionally our practice is to notify customers where practicable before disclosing their content so they can seek protection from disclosure unless we are legally prohibited from doing so or there is clear Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 10 indication of illegal conduct in connection with the use of AWS services For a dditional information see the Amazon Information Requests Portal online Privacy and Data Protection in Malaysia: The PDPA This part of the paper discusses aspects of the PDPA relating to data protection The PDPA contains several data protection principles (“Data Protection Principles”) which impose requirements for collecting managing dealing with using disclosing and otherwise handling personal data The PDPA makes a distinction between a “data user ” who processes any personal data or has control or authorizes the processing of any personal data and a “data processor ” who processes personal data solely on behal f of the data user and does not process the personal data for any of its own purposes AWS appreciates that its services are used in many different contexts for different business purposes and that there may be multiple parties involved in the data lifec ycle of personal data included in customer content stored or processed using AWS services For simplicity the guidance in the table below assumes that in the context of customer content stored or processed using AWS services the customer: • Collects perso nal data from its end users or other individuals (data subjects) and determines the purpose for which the customer requires and will use the personal data 
• Has the capacity to control who can access update and use the personal data • Manages the relationshi p with the individual about whom the personal data relates including by communicating with the data subject as required to comply with any relevant disclosure and consent requirements Customers may in fact work with (or rely on) third parties to dischar ge these responsibilities but the customer rather than AWS would manage its relationships with those third parties We summarize the key requirements of the Data Protection Principles in the table below We also discuss aspects of the AWS services relev ant to these requirements Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 11 Data Protection Principle Summary of Data Protection Obligations Considerations General Principle and Notice and Choice Principle Personal data can only be processed once the data subject has given his/her consent Data users should inform the data subject of the purposes for which their personal data is being collected and processed Customer: The customer determines and controls when how and why it collects personal data from individuals and decides whether it will include t hat personal data in customer content it stores or processes using AWS services The customer may also need to ensure it discloses the purposes for which it collects that data to the relevant individuals ; obtains the data from a permitted source ; and that it only uses the data for a permitted purpose As between the customer and AWS the customer has a relationship with the individuals whose personal data the customer stores on AWS and therefore the customer is able to communicate directly with them about collection and treatment of their personal data The customer rather than AWS will also know the scope of any notifications given to or consents obtained by the customer from such individuals relating to the collection use or disclosure of their personal data The customer will know whether it uses AWS services to store or process customer content containing personal data and therefore is best placed to inform individuals that it will use AWS as a service provider if required AWS: AWS does not collect personal data from individuals whose personal data is included in content a customer stores or processes using AWS and AWS has no contact with those individuals Therefore AWS is not required and is unable in the circumstances to communicate with the relevant individuals AWS only uses customer content to provide the AWS services selected by each customer to that customer and does not use customer content for any other purposes Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 12 Data Protection Principle Summary of Data Protection Obligations Considerations Disclosure Principle Personal data should only be disclose d with consent and only for the purposes disclosed to the data subject Customer : The customer determines and controls why it collects personal data what it will be used for who it can be used by and who it is disclosed to The customer should ensure it only does so for permitted purposes If the customer chooses to include personal data in customer content stored in AWS the customer controls the format and structure of its content and how it is protected from disclosure to unauthorized parties including whether it is anonymized or encrypted The customer will know whether it uses the AWS services to store or process customer content containing personal data and therefore is best 
placed to inform individuals that it will use AWS as a service pr ovider if required AWS : AWS only uses customer content to provide the AWS services selected by each customer to that customer and does not use customer content for other purposes Security Principle A data user should take practical steps to protect personal data from loss misuse modification unauthorized or accidental access or disclosure alteration or destruction Customer: Customers are responsible for security in the cloud including security of their content (and personal data included in the ir content) AWS: AWS is responsible for managing the security of the underlying cloud environment For a complete list of all the security measures built into the core AWS cloud infrastructure and services see Best Practices for Security Identity & Compliance Customers can validate the security controls in place within the AWS environment through AWS certifications and reports including the AWS System & Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 and 27018 and PCI DSS compliance reports Retention Principle Personal data should not be kept longer than necessary for the fulfilment of the purpose for which the personal data was collected Customer: Only the customer knows why personal data included in customer co ntent stored or processed using AWS services was collected and only the customer knows when it is for relevant business purposes The customer should delete or destroy the personal data when no longer needed AWS: AWS services provide the customer with co ntrols to enable the customer to delete content as described in AWS Documentation Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 13 Data Protection Principle Summary of Data Protection Obligations Considerations Data Integrity Principle The data user should take all reasonable steps to ensure that personal data is accurate complete not misleading and kept up todate having regard to the purpose for which the personal data was collected Customer: When a customer chooses to store or process content containing personal data using AWS services the customer has control ove r the quality of that content and the customer retains access to and can correct it This means that the customer should take all required steps to ensure that personal data included in customer content is accurate complete not misleading and kept uptodate AWS: AWS’s SOC 1 Type 2 report includes controls that provide reasonable assurance that data integrity is maintained through all phases including transmission storage and processing Offshoring Principle A data user should not transfer personal data to a place outside Malaysia other than such place as specified by the Minister unless an exception applies Customer: The customer can choose the AWS Region or Regions in which their content will be located and can choose to deploy their AWS services exclusively in a single Region if preferred AWS services are structured so that a customer maintains effective control of customer content regardless of what Region they use for their content The customer should disclose to individuals the locations in which it stores or processes their personal data and obtain any required consents relating to such locations from the relevant individuals if necessary As between the customer and AWS the customer has a relationship with the individuals whose personal dat a the customer stores or processes using AWS services and therefore the customer is able to communicate directly with them 
about such matters AWS: AWS only stores and processes each customer ’s content in the AWS Region(s) and using the services chosen b y the customer and otherwise will not move customer content without the customer’s consent except as legally required If a customer chooses to store content in more than one Region or copy or move content between Regions that is solely the customer’s choice and the customer will continue to maintain effective control of its content wherever it is stored and processed General: AWS is ISO 27001 certified and offers robust security feat ures to all customers regardless of the geographical Region in which they store their content Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 14 Data Protection Principle Summary of Data Protection Obligations Considerations Access Principle A data user should provide a data subject access to their personal data and they should be able to correct their personal data Customer: The customer retains control of content stored or processed using AWS services including control over how that content is secured and who can access and amend that content In addition as between the customer and AWS the customer has a relationship with the individuals whose personal data is included in customer content stored or processed using AWS services The customer rather than AWS is therefore able to work with relevant individuals to provide them access to and the ability to correct personal data i ncluded in customer content AWS: AWS only uses customer content to provide the AWS services selected by each customer to that customer and AWS has no contact with the individuals whose personal data is included in content a customer stores or processes u sing the AWS services Given this and the level of control customers enjoy over customer content AWS is not required and is unable in the circumstances to provide such individuals with access to or the ability to correct their personal data Data Us er Registration The PDPA makes it a requirement for specified classes of data users to register with the Personal Data Protection Commissioner as data users Customer: The c ustomer should determine whether it falls within any of the specified classes of data users that are required to register AWS: AWS does not fall within any of the specified classes of data users that are required to be registered Privacy breaches Given that customers maintain control of their content when using AWS customers retain the responsibility to monitor their own environment for privacy breaches and to notify regulators and affected individuals as required under applicable law Only the customer is able to manage this responsibility A customer’s AWS access keys can be used as an example to help explain why the customer rather than AWS is best placed to manage this responsibility Customers control access keys and determine who is authorised to access their AWS account AWS does not have visibility of access keys or of who is and who is not authorized to log into an account Therefore the customer is responsible for monitoring use misuse distribution or loss of access keys Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 15 In some jurisdictions it is mandatory to notify individuals or a regulator of unauthorized access to or disclosure of their personal data and there may be circumstances in which notifying individuals is the best approach in order to mitigate risk even though it is not mandatory under the applicable law It is 
for the customer to determine when it is appropriate or necessary for them to notify individuals and the notification process they will follow Other considerations This whitepaper does not discuss specific privacy or data protection laws other than the PDPA Customers should consider the speci fic requirements that apply to them including any industry specific requirements The relevant privacy and data protection laws and regulations applicable to individual customers will depend on several factors including where a customer conducts business the industry in which they operate the type of content they wish to store where or from whom the content originates and where the content will be stored Customers concerned about their privacy regulatory obligations should first ensure they identify a nd understand the requirements applying to them and seek appropriate advice Closing remarks At AWS security is always our top priority We deliver services to millions of active customers including enterprises educational institutions and government a gencies in over 190 countries Our customers include financial services providers and healthcare providers and we are trusted with some of their most sensitive information AWS services are designed to give customers flexibility over how they configure and deploy their solutions as well as control over their content including where it is stored how it is stored and who has access to it AWS customers can build their own secure applications and store content securely on AWS Additional resources To help c ustomers further understand how they can address their privacy and data protection requirements customers are encouraged to read the risk compliance and security whitepapers best practices checklists and guidance published on the AWS website This mate rial can be found at https://awsamazoncom/compliance and https://awsamazoncom/security Further reading AWS also offers training to help customers learn how to design develop and operate available efficient and secure applications on the AWS cloud and gain proficiency with AWS services and solutions We offer free instructional videos selfpaced labs and instructor led classes Further information on AWS training is available at http s://awsamazoncom/training/ Amazon Web Services Using AWS in the Context of Malaysian Privacy Considerations 16 AWS certifications certify the technical skills and knowledge associated with the best practices for building secure and reliable cloud based applications using AWS technology Further i nformation on AWS certifications is available at http s://awsamazoncom/certification/ If you require further information contact AWS at https://aws amazoncom/contact us/ or contact your local AWS account representative Document history Date Description December 2021 Reviewed for technical accuracy May 2018 Fourth publication April 2018 Third publication January 2016 Second publication April 2014 First publication
General
Building_Big_Data_Storage_Solutions_Data_Lakes_for_Maximum_Flexibility
Building Big Data Storage Solutions (Data Lakes) for Maximum Flexibility July 2017 Archived This document has been archived For the most recent version refer to : https://docsawsamazoncom/whitepapers/latest/ buildingdatalakes/buildingdatalakeawshtml© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its a ffiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Amazon S3 as the Data Lake Storage Platform 2 Data Ingestion Methods 3 Amazon Kinesis Firehose 4 AWS Snowball 5 AWS Storage Gateway 5 Data Cataloging 6 Comprehensive Data Catalog 6 HCatalog with AWS Glue 7 Securing Protecting and Managing Data 8 Access Policy Options and AWS IAM 9 Data Encryption with Amazon S3 and AWS KMS 10 Protecting Data with Amazon S3 11 Managing Data with Object Tagging 12 Monitoring and Optimizing the Data Lake Environment 13 Data Lake Monitoring 13 Data Lak e Optimization 15 Transforming Data Assets 18 InPlace Querying 19 Amazon Athena 20 Amazon Redshift Spectrum 20 The Broader Analytics Portfolio 21 Amazon EMR 21 Amazon Machine Learning 22 Amazon QuickSight 22 Amazon Rek ognition 23 ArchivedFuture Proofing the Data Lake 23 Contributors 24 Document Revisions 24 ArchivedAbstract Organizations are collecting and analyzing increasing amounts of data making it difficult for traditional on premises solutions for data storage data management and analytics to keep pace Amazon S3 and Amazon Glacier provide an ideal storage solution for data lakes They provide options such as a breadth and depth of integration with traditional big data analytics tools as well as innovative query inplace analytics tools that help you eliminate costly and complex extract transform and load processes This guide explains each of these optio ns and provides best practi ces for building your Amazon S3 based data lake ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 1 Introduction As o rganizations are collecting and analyzing increasing amounts of data traditional onpremise s solutions for data storage data management and analytics can no longer keep pace Data siloes that aren’t built to work well together make storage consolidation for more comprehensive and efficient analytics difficult This in turn limit s an organization’s agility ability to derive more insights and value from its data and capability to seamles sly adopt more sophisticated analytics tools and processes as its skills and needs evolve A data lake which is a single platform combining storage data governance and analytics is designed to address these challenges It’s a centralized secure and durable cloud based storage platform that allows you to ingest and store structured and unstructured data and transform these raw data assets as needed You don’t need an innovation 
limiting pre defined schema You can use a complete portfolio of data exploration reporting analytics machine learning and visualization tools on the data A data lake makes data and the optimal analytic s tools available to more users across more lines of business allowing them to get all of the business insights they need whe never they need them Until recently the data lake had been more concept than reality However Amazon Web Services (AWS) has developed a data lake architecture that allows you to build data lake solutions costeffectively using Amazon Simple Sto rage Service (Amazon S3) and other services Using the Amazon S3 based data lake architecture capabilities you can do the following : • Ingest and store data from a wide variety of sources into a centralized platform • Build a comprehensive data catalog to fin d an d use data assets stored in the data lake • Secur e protect and manag e all of the data stored in the data lake • Use t ools and policies to monitor analyze and optimize infrastructure and data • Transform raw data assets in place into optimized usable formats • Query data assets in place ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 2 • Use a b road and deep portfolio of data analytics data science machine learning and visualization tools • Quickly integrat e current and future third party data processing tools • Easily and securely shar e process ed datasets and results The remainder of this paper provide s more information about each of these capabil ities Figure 1 illustrates a sample AWS data lake platform Figure 1: Sample AWS data lake platform Amazon S3 as the Data Lake Storage Platform The Amazon S3 based data lake solution uses Amazon S3 as its primary storage platform Amazon S3 provides an optimal foundation for a data lake because of its virtually unlimited scalability You can seamlessly and nondisruptively increase storage from gigabyt es to petabytes of content paying only for what you use Amazon S3 is designed to provide 99999999999% durability It has scalable performance ease ofuse features and native encryption and access control capabilities Amazon S3 integrates with a broad portfolio of AWS and third party ISV data processing tools Key data lake enabling features of Amazon S3 include the following : ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 3 • Decoupling of storage from compute and data processing In traditional Hadoop and data warehouse solutions storage and compute are tightly coupled making it difficult to optimize costs and data processing workflows With Amazon S3 you can cost effectively store all data types in their native formats You can then launch as many or as few v irtual servers as you need using Amazon Elastic Compute Cloud (EC2) and you can use AWS analytics tools to process your data You can o ptimize your EC2 instances to provide the right ratios of CPU memory and bandwidth for best performance • Centralized data architecture Amazon S3 makes it easy to build a multi tenant environment where many users can bring their own data analytics tools to a common set of data This improv es both cost and data governance over that of traditional solutions which require multiple copies of data to be distributed across multiple processing platforms • Integration with clusterless and serverless AWS services Use Amazon S3 with Amazon Athena Amazon Redshift Spectrum Amazon Rekognition and AWS Glue to query and process data Amazon S3 also integrates with AWS Lambda serverless 
computing to run code without provisioning or managing servers With all of these capabilities you only pay for the actual amounts of data you process or for the compute time that you consume • Standardized APIs Amazon S3 R EST ful APIs are simple easy to use and supported by most major third party independent software vendors (ISVs ) including leading Apache Hadoop and analytic s tool vendors This allows customers to bring th e tools they are most comfortable with and knowledgeable about to help them perform analytics on data in Amazon S3 Data Ingest ion Methods One of the core capabilities of a data lake architecture is the ability to quickly and easily ingest multiple types o f data such as real time streaming data and bulk data assets from onpremise s storage platforms as well as data generated and processed by legacy on premise s platforms such as mainframes and data warehouses AWS provides services and capabilities to cover all of these scenarios ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 4 Amazon Kinesis Firehose Amazon Kinesis Firehose is a fully managed service for delivering real time streaming data directly to Amazon S3 Kinesis Firehose automatically scales to match the volume and throughput of streaming data and requires no ongoing administr ation Kinesis Fireho se can also be configured to transform streaming data before it ’s stored in Amazon S3 Its transformation capabilities include compression encryption data batching and Lambda func tions Kinesis Fireho se can compress data before it’ s stored in Amazon S3 It currently supports GZIP ZIP and SNAPPY compression formats GZIP is the preferred format because it can be used by Amazon Athena Amazon EMR and Amazon Redshift Kinesis Fire hose encryption supports Amazon S3 server side encryption with AWS Key Management Service (AWS KMS) for encrypting delivered data in Amazon S3 You can choose not to encrypt the data or to encrypt with a key from the list of AWS KMS keys that you own (see the section Encryption with AWS KMS ) Kinesis Firehose can concatenate multiple incoming records and then deliver them to Amazon S3 as a single S3 object This is an important capability because it reduces Amazon S3 transaction costs and transactions per second load Finally Kinesis Firehose can invoke Lambda functions to transform incoming source data and deliver it to Amazon S3 Common transformation func tions include transforming Apache Log and Syslog formats to standardized JSON and/or CSV formats The JSON and CS V formats can then be directly queried using Amazon Athena If using a Lambda data transformation you can optionally back up raw source data t o another S3 bucket as Figure 2 illustrates ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 5 Figure 2: Delivering real time streaming data with Amazon Kinesis Firehose to Amazon S3 with optional backup AWS Snowball You can use AWS Snowball to securely and efficiently migrate bulk data from onpremise s storage platforms and Hadoop clusters to S3 buckets After you create a job in the AWS Management Console a Snowball appliance will be automatically shipped to you After a Snowball arrives connect it to your local network install the Snowball client on your on premises data source and then use the Snowball client to select and transfer the file directories to the Snowball device The Snowball client uses AES 256bit encrypt ion Encryption keys are never shipped with the Snowball devic e so the data transfer process is highly secure 
After the data transfer is complete the Snowball’s E Ink shipping label will automatically update Ship the device back to AWS Upon receipt at AWS your data is then transferred from the Snowball device t o your S3 bucket and stored as S3 objects in their original/native format Snowball also has an HDFS client so data may be migrated directly from Hadoop clusters into an S3 bucket in its native format AWS Storage Gateway AWS Storage Gateway can be used to integrate legacy on premise s data processing platforms with an Amazon S3 based data lake The File Gateway configuration of Storage Gateway offers onpremise s devices and applications a network file share via an NFS connection Files written to this mount point are converted to objects stored in Amazon S3 in their original format without any ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 6 proprietary modification This means that you can easily integrate applications and platforms that don’t have native Amazon S3 capabilities —such as on premise s lab equipment mainframe computers databases and data warehouses —with S3 buckets and then use tools such as Amazon EMR or Amazon Athena to process this data Additionally Amazon S3 natively support s DistCP which is a standard Apache Hadoop data transfer mechanism This allows you to run DistCP jobs to transfer data from an on premises Hadoop cluster to an S3 bucket The command to transfer data typically look s like the following : hadoop distcp hdfs://source folder s3a://destination bucket Data Cataloging The earliest challenges that inhibited building a data lake were keeping track of all of the raw assets as they were loaded into the data l ake and then tracking all of the new data assets and versions that were created by data trans formation data processing and analytics Thus a n essential component of an Amazon S3 based data lake is the data catalog The data catalog provides a query able interface of all assets stored in the data lake’s S3 buckets The data catalog is designed to provide a single source of truth about the contents of the data lake There are two general forms of a data catalog : a comprehensive data catalog that contains information about all assets that have been ingested into the S3 data lake and a Hive Metastore Catalog (HCatalog) that contains information about data assets that have been transformed into formats and table definitions that are usable by analytics tools like Amazon Athena Amazon Redshift Amazon Redshift Spectrum and Amazon EMR The two catalogs are not mutually exclusive and both may exist The comprehensive data catalog can be used to search for all assets in the data lake and the HCatalog can be used to discover and query data assets in the data lake Comprehensive Data Catalog The comprehensive data catalog can be created by using standard AWS services like AWS Lambda Amazon DynamoDB and Amazon Elastic search Service (Amazon ES) At a high level Lambda triggers are used to populate DynamoDB ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 7 tables with object names and metadata when those objects are put into Amazon S3; then Amazon ES is used to search for specific assets related met adata and data classifications Figure 3 shows a high level architectural overview of this solution Figure 3 : Comprehensive data catalog using AWS Lambda Amazon DynamoDB and Amazon Elasticsearch Service HCatalog with AWS Glue AWS Glue can be used to create a Hive compatible Metastore Catalog of data stored in an Amazon 
S3 based data lake To use AWS Glue to build your data catalog register your data sources with AWS Glue in the AWS Management Console AWS Glue will then crawl your S3 buckets for data sources and construct a data catalog using pre built classifiers for many popular source formats and data types including JSON CSV Parquet and more You may also add your own classifiers or choose classifiers from the AWS Glue community to add to your crawls to recognize and catalog other data formats The AWS Glue generated catalog can be used by Ama zon Athena Amazon Redshift Amazon Redshift Spectrum and Amazon EMR as well as third party analytics tools that use a standard Hive Metastore Catalog Figure 4 shows a sample screenshot of the AWS Glue data catalog interface ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 8 Figure 4: Sample AWS Glue data catalog interface Securing Protecting and Managing Data Building a data lake and making it the centralized repository for assets that were previously duplicated and placed across many siloes of smaller platforms and groups of users requires implementing stringent and fine grained security and access controls along with methods to protect and manage the data assets A data lake solution on AWS —with Amazon S3 as its core —provides a robust set of features and services to secure and protect your data against both internal and external threats even in large multi tenant environments Additionally innovative Amazon S3 data management features enable automation and scaling of data lake storage management even when it contains billions of objects and petabytes of data assets Securing your data lake begins with implementing very fine grained controls that allow authorized users to see access process and modify particular assets and ensure that unauthorized users are blocked from taking any actio ns that would compromise data confidentiality and security A complicating factor is that access roles may evolve over various stages of a data asset’s processing and lifecycle Fortunately Amazon has a comprehensive and well integrated set of security fe atures to secure an Amazon S3 based data lake ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 9 Access Policy Options and AWS IAM You can manage access to your Amazon S3 resources using access policy options By default all Amazon S3 resources —buckets objects and related subresources —are private : only the resource owner an AWS account that created them can access the resource s The resource owner can then grant access permissions to others by writing an access policy Amazon S3 access policy options are broadly categorized as resource based policies and user policies Access policies that are attached to resources are referred to as resource based policies Example resource based policies include bucket policies and access control lists (ACLs) Acces s policies that are attached to users i n an account are called user policie s Typically a combination of resource based and user policies are used to manage permissions to S3 buckets objects and other resources For most data lake environments we recommend using user policies so that perm issions to access data assets can also be tied to user roles and permissions for the data processing and analytics services and tools that your d ata lake users will use User policies are associated with AWS Identity and Access Management (IAM) service wh ich allows you to securely control access to AWS services and resources With IAM you can 
create IAM users groups and roles in account s and then attach access policies to them that grant access to AWS resources including Amazon S3 The model for user policies is show n in Figure 5 For more details and information on securing Amazon S3 with user policies and AWS IAM please reference: Amazon Simple Storage Service Developers Guide and AWS Identity a nd Access Management User Guide Figure 5: Model for user policies ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 10 Data Encryption with Amazon S3 and AWS KMS Although user policies and IAM contr ol who can see and access data in your Amazon S3 based data lake it’s also important to ensure that users who might inadvertently or maliciously manage to gain access to those data assets can ’t see and use them This is accomplished by using encryption keys to encrypt and de encrypt data assets Amazon S3 supports multiple encryption options Additionally AWS KMS helps scale and simplify management of encryption keys AWS KMS gives you centralized control over the encryption keys used to protect your data assets You can create import rotate disable delete define usage policies for and audit the use of encryption keys used to encrypt your data AWS KMS is integrated with several other AWS services making it easy to encrypt the data sto red in these services with encryption keys AWS KMS is integrated with AWS CloudTrail which provides you with the ability to audit who used which keys on which resources and when Data lakes built on AWS primarily use two types of encryption : Server side encryption (SSE) and client side encryption SSE provides data atrest encryption for data written to Amazon S3 With SSE Amazon S3 encrypts user data assets at the object level stores the encrypted objects and then decrypts them as they are accessed and retrieved With client side encryption data objects are encrypted before they written into Amazon S3 For example a data lake user could specify client side encryption before transferring data assets into Amazon S3 from the Internet or could specify that services like Amazon EMR Amazon Athena or Amazon Redshift use client side encryption with Amazon S3 SSE and client side encryption can be combined for the highest levels of protection Given the intricacies of coordinating encryption key management in a complex environment like a data lake we strongly recommend using AWS KMS to coordinate keys across client and server side encryption and across multiple da ta processing and analytics services For even greater levels of data lake data protection other services like Amazon API Gateway Amazon Cognito and IAM can be combined to create a “shopping cart” model for users to check in and check out data lake data assets This architecture has been created for the Amazon S3 based data lake solution reference architecture which can be found downloaded and deployed at https://awsamazonco m/answers/big data/data lake solution/ ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 11 Protecting Data with Amazon S3 A vital function of a centralized data lake is data asset protection —primarily protection against corruption loss and accidental or malicious overwrites modifications or deletions Amazon S3 has several intrinsic features and capabilities to provide the highest levels of data protection when it is used as the core platform for a data lake Data protection rests on the inherent durability of the storage platform used Durability is defined as the ability to protect data 
assets against corruption and loss Amazon S3 provides 99999999999% data durability which is 4 to 6 orders of magnitude greater than that which most on premise s single site storage platforms can provide Put another way the durability of Amazon S3 is designed so that 10000000 data assets can be reliably stored for 10000 years Amazon S3 achieves this durability in all 16 of its global Regions by using multiple Availability Zones Availability Zones consist of one or more discrete data centers each with redundant power networking and connectivity housed in separate facilities Availability Zones offer the ability to operate production applications and analytics services which are more highly ava ilable fault tolerant and scalable than would be possible from a single data center Data written to Amazon S3 is redundantly stored across three Availability Zones and multiple devices within each Availability Zone to achieve 999999999% durability Thi s means that even in the event of an entire data center failure data would not be lost Beyond core data protection another key element is to protect data assets against unintentional and malicious deletion and corruption whether through users accidenta lly deleting data assets applications inadvertently deleting or corrupting data or rogue actors trying to tamper with data This becomes especially important in a large multi tenant data lake which will have a large number of users many applications and constant ad hoc data processing and application development Amazon S3 provides versioning to protect data assets against these scenarios When enabled Amazon S3 versioning will keep multiple copies of a data asset When an asset is updated prior vers ions of the asset will be retained and can be retrieved at any time If an asset is deleted the last version of it can be retrieved Data asset versioning can be managed by policies to automate management at large scale and can be combined with other Am azon S3 capabilities such as lifecycle management for long term ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 12 retention of versions on lower cost storage tiers such as Amazon Glacier and Multi Factor Authentication (MFA) Delete which requires a second layer of authentication —typically via an approve d external authentication device —to delete data asset versions Even though Amazon S3 provides 99999999999% data durability within an AWS Region many enterprise organizations may have compliance and risk models that require them to replicate their data assets to a second geographically distant location and build disaster recovery (DR) architectures in a second location Amazon S3 cross region replication (CRR) is an integral S3 capability that automatically and asynchronously copies data assets from a data lake in one AWS Region to a data lake in a different AWS Region The data assets in the second Region are exact replicas of the source data assets that they were copied from including their names metadata versions and access controls All data assets are encrypted during transit with SSL to ensure the highest levels of data security All of these Amazon S3 features and capabilities —when combined with other AWS services like IAM AWS KMS Amazon Cognito and Amazon API Gateway —ensure that a data lake using Amazon S3 as its core storage platform will be able to meet the most stringent data security compliance privacy and protection requirements Amazon S3 includes a broad range of certifications including PCI DSS HIPAA/HITECH FedRAMP SEC 
Rule 17 a4 FISMA EU Data Protection Directive and many other global agency certifications These levels of compliance and protection allow organizations to build a data lake on AWS that operates more securely and with less risk than one b uilt in their on premise s data centers Managing Data with Object Tagging Because data lake solutions are inherently multi tenant with many organizations lines of businesses users and applications using and processing data assets it becomes very important to associate data assets to all of these entities and set policies to manage these assets coherently Amazon S3 has introduced a new capability —object tagging —to assist with categorizing and managing S 3 data assets An object tag is a mutable key value pair Each S3 object can have up to 10 object tags Each tag key can be up to 128 Unicode characters in length and each tag value can be up to 256 Unicode characters in length For an example of object tagging suppose an object contains protected ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 13 health information (PHI) data —a user administrator or application that uses object tags might tag the object using the key value pair PHI=True or Classification=PHI In addition to being used for data classifi cation object tagging offers other important capabilities Object tags can be used in conjunction with IAM to enable fine grain controls of access permissions For example a particular data lake user can be granted permissions to only read objects with s pecific tags Object tags can also be used to manage Amazon S3 data lifecycle policies which is discussed in the next section of this whitepaper A data lifecycle policy can contain tag based filters Finally object tags can be combined with Amazon Cloud Watch metrics and AWS CloudTrail logs —also discussed in the next section of this paper —to display monitoring and action audit data by specific data asset tag filters Monitoring and Optimizing the Data Lake Environment Beyond the efforts required to architect and build a data lake your organization must also consider the operational aspects of a data lake and how to cost effectively and efficiently operate a production data lake at large scale Key elements you must co nsider are monitoring the operations of the data lake making sure that it meets performance expectations and SLAs analyzing utilization patterns and using this information to optimize the cost and performance of your data lake AWS provides multiple fea tures and services to help optimize a data lake that is built on AWS including Amazon S3 s torage analytics A mazon CloudW atch metrics AWS CloudT rail and Amazon Glacier Data Lake Monitoring A key aspect of operating a data lake environment is understand ing how all of the components that comprise the data lake are operating and performing and generating notifications when issues occur or operational performance falls below predefined thresholds Amazon CloudWatch As a n administrator you need to look at t he complete data lake environment holistically This can be achieved using Amazon CloudWatch CloudWatch is a ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 14 monitoring service for AWS Cloud resources and the applications that run on AWS You can use CloudWatch to collect and track metrics collect and monitor log files set thresholds and trigger alar ms This allows you to automatically react to changes in your AWS resources CloudWatch can monitor AWS resources such as Amazon EC2 
instances Amazon S3 Amazon EMR Amazon Redshift Amazon DynamoDB and Amazon Relational Database Service ( RDS ) database instances as well as custom metrics generated by other data lake applications and service s CloudWatch provides system wide visibility into resource ut ilization application performa nce and operational health You can use these insights to proactively react to issues and keep your data lake application s and workflows running smoothly AWS CloudTrail An operational data lake has many users and multiple a dministrators and may be subject to compliance and audit requirements so it’ s important to have a complete audit trail of actions take n and who has performed these actions AWS CloudTrail is an AWS service that enables governance compliance operational audi ting and risk auditing of AWS account s CloudTrail continuously monitor s and retain s events related to API calls across the AWS services that comprise a data lake CloudTrail provides a h istory of AWS API calls for an account including A PI calls made through the AWS Management Console AWS SD Ks command line tools and most Amazon S3 based data lake services You can identify which users and accounts made requests or took actions against AWS services that support CloudTrail the source IP address the actions were made from and when the actions occurred CloudTrail can be used to simplify data lake compliance audits by automatically recording and storing activity logs for actions made within AWS accounts Integration with Amazon CloudWatch Logs provides a convenient way to search through log data identify out ofcompliance events accelerate incident investigations and expedite responses to auditor requests CloudTrail logs are stored in an S3 bucket for durability and deeper analysis ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 15 Data Lake Optimiz ation Optimizing a data lake environment includes minimizing operational costs By building a data lake on Amazon S3 you only pay for the data storage and data processing services that you actually use as you use them You can reduce cost s by optimizing how you use these services Data asset storage is often a significant portion of the costs associated with a data lake Fortunately AWS has several features that can be used to optimize and reduce costs these include S3 lifecycle management S3 storage class analy sis and Amazon Glacier Amazon S3 Lifecycle Management Amazon S3 lifecycle management allows you to create lifecycle rules which can be used to automatically migrate data assets to a lower cost tier of storage —such as S3 Standard Infrequent Access or Amazon Glacier —or let them expire when they are no longer needed A lifecycle configuration which consists of an XML file comprises a set of rules with predefined actions that you want Amazon S3 to perform on data assets dur ing their lifetime Lifecycle configurations can perform actions based on data asset age and data asset names but can also be combined with S3 object tagging to perform very granular management of data assets Amazon S3 Storage Class Analy sis One of the c hallenges of developing and configuring lifecycle rules for the data lake is gaining an understanding of how data assets are accessed over time It only makes economic sense to transition data assets to a more cost effective storage or archive tier if thos e objects are infrequently accessed Otherwise data access charges associated with these more cost effective storage classes could negate any potential savings Amazon S3 
provides S3 storage class analy sis to help you understand how data lake data assets are used Amazon S3 storage class analy sis uses machine learning algorithms on collected access data to help you develop lifecycle rules that will optimize costs Seamlessly tiering to lower cost storage tiers in an important capability for a data lake particularly as its users plan for and move to more advanced analytics and machine learning capabilities Data lake users will typically ingest raw data assets from many sources and transform those assets into harmonized formats that they can use for ad hoc querying and on going business intelligence ( BI) querying via SQL However they will also want to perform more advanced analytics using streaming analytics machine learning and ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 16 artificial intelligence These more advanced analytics capab ilities consist of building data models validating these data models with data assets and then training and refining these models with historical data Keeping more historical data assets particularly raw data assets allows for better training and refinement of models Additionally as your organization ’s analytics sophistication grows you may want to go back and reprocess historical data to look for new insights and value These historical data assets are infrequently accessed and consume a lot of capacity so they are often well suited to be stored on an archival storage layer Another long term data storage need for the data lake is to keep processed data assets and results for long term retention for compliance and audit purposes to be accessed by auditors when needed Both of these use cases are well served by Amazon Glacier which is an AWS storage service optimized for infrequ ently used cold data and for storing write once read many (WORM) data Amazon Glacier Amazon Glacier is an extremely low cost storage service that provides durable storage with security features for data archiving and backup Amazon Glacier has the same data durability (99999999999%) as Amazon S3 the same integrat ion with AWS security features and can be integrated with S3 by using S3 lifecycle management on data assets stored in S3 so that data assets can be seamlessly migrated from S3 to Glacier Amazon Glacier is a great storage choice when low storage cost is paramount data assets are rarely retrieved and retrieval latency of several minutes to several hours is acceptable Different types of data lake assets may have different retrieval needs For example compliance data may be infrequently accesse d and relatively small in size but need s to be made available in minutes when auditors request data while historical raw data assets may be very large but can be retrieved in bulk over the course of a day when needed Amazon Glacier allows data lake user s to specify retrieval times when the data retrieval request is created with longer retrieval times leading to lower retrieval costs For processed data and records that need to be securely retained Amazon Glacier Vault Lock allows data lake administrato rs to easily deploy and enforce compliance controls on individual Glacier vaults via a lockable policy Administrators can specify controls such as Write Once Read Many (WORM) in ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 17 a Vault Lock policy and lock the policy from future edits Once locked the p olicy becomes immutable and Amazon Glacier will enforce the prescribed controls to help 
achieve your compliance objectives and provide an audit trail for these assets using AWS CloudTrail Cost and Performance Optimization You can optimize your data lake using cost and performance Amazon S3 provides a very performant foundation for the data lake because its enormous scale provides virtually limitless throughput and extremely high transaction rates Using Amazon S3 best practices for data asset naming ensures high levels of performance These best practices can be found in the Amazon Simple Storage Service Developers Guide Another area of o ptimization is to use optimal data formats when transforming raw data assets into normalized formats in preparation for querying and analytics These optimal data formats can compress data and reduce data capacities needed for storage and also substantially increase query performance by common Amazon S3 based data lake analytic services Data lake environments are designed to ingest and process many types of data and store raw data assets for future archival and reprocessing purposes as well as store processed and normal ized data assets for active querying analytics and reporting One of the key best practices to reduce storage and analytics processing costs as well as improve analytics querying performance is to use an optimized data format par ticularly a format lik e Apache Parquet Parquet is a columnar compressed storage file format that is designed for querying large amounts of data regardless of the data processing framework data model or programming language Compared to common raw data log formats like CSV JSON or TXT format Parquet can reduce the required storage footprint improve query performance significantly and greatly reduce querying costs for AWS services which charge by amount of data scanned Amazon tests comparing the CSV and Parquet format s using 1 TB of log data stored in CSV format to Parquet format showed the following : • Space savings of 87% with Parquet ( 1 TB of log data stored in CSV format compressed to 130 GB with Parquet) ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 18 • A query time for a representative Athena query was 34x faster with Parquet (237 seconds for CSV versus 513 seconds for Parquet) and the amount of data scanned for that Athena query was 99% less (115TB scanned for CSV versus 269GB for Parquet) • The cost t o run that Athena query was 997% less ($575 for CSV versus $0013 for Parquet) Parquet has the additional benefit of being an open data format that can be used by multiple querying and analytics tools in an Amazon S3 based data lake particularly Amazon Athena Amazon EMR Amazon Redshift and Amazon Redshift Spectrum Transforming Data Assets One of the core values of a data lake is that it is the collection point and repository for all of an organization’s data assets in whatever their native formats a re This enables quick ingest ion elimination of data duplication and data sprawl and centralized governance and management After the data assets are collected they need to be transformed into normalized formats to be used by a variety of data analytics and processing tools The key to ‘democratizing’ the data and making the data lake available to the widest number of users of varying skill sets and responsibilities is to transform data assets into a format that allows for efficient ad hoc SQL querying As discussed earlier when a data lake is built on AWS we recommend transforming log based data assets into Parquet format AWS provides multiple services to quickly and 
efficiently achieve this There are a multitude of ways to transform data assets and the “best” way often comes down to individual preference skill sets and the tools available When a data lake is built on AWS services there is a wide variety of tools and services available fo r data transformation so you can pick the methods and tools that you are most comfortable with Since the data lake is inherently multi tenant multiple data transformation jobs using different tools can be run concurrently The two most common and strai ghtforward methods to transform data assets into Parquet in an Amazon S3 based data lake use Amazon EMR clusters The first method involves creating an EMR cluster with Hive installed using the raw data assets in Amazon S3 as input transforming those data assets into Hive ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 19 tables and then writing those Hive tables back out to Amazon S3 in Parquet format The second related method is to use Spark on Amazon EMR With this method a typical transformation can be achieved with only 20 lines of PySpark code A third simpler data transformation method on an Amazon S3 based data lake is to use AWS Glue AWS Glue is an AWS fully managed extract transform and load ( ETL ) service that can be directly used with data stored in Amazon S3 AWS Glue simplifies and automates difficult and time consuming data discovery conversion mapping and job sched uling tasks AWS Glue guides you through the process of transforming and moving your data assets with an ea sy touse console that helps you understand your data sources transform and prepare the se data assets for analytics and load them reliably from S3 data sources back into S3 destinations AWS Glue automatically crawls raw data assets in your data lake ’s S3 buckets identifies data formats and then suggests schemas and transformations so that you don’t have to spend time hand coding data flows You can then edit these transformations if necessary using the tools and technologies you already know such as Python Spark Git and your favorite integ rated developer environment (IDE) and then share them with other AWS Glue users of the data lake AWS Glue’s flexible job scheduler can be set up to run data transformation flows on a recurring basis in response to triggers or even in response to AWS Lambda events AWS Glue automatically and transparently provisions hardware resources and distributes ETL jobs on Apache Spark nodes so that ETL run times remain consistent as data volume grows AWS Glue coordinates the execution of data lake jobs in the ri ght sequence and automatically re tries failed jobs With AWS Glue t here are no servers or clusters to manage and you pay only for the resources consumed by your ETL jobs InPlace Querying One of the most important capabilities of a data lake that is built on AWS is the ability to do in place transformation and querying of data assets without having to provision and manage clusters This allows you to run sophisticated analytic queries direc tly on your data assets stored in Amazon S3 without having to copy and load data into separate analytics platforms or data warehouses You ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 20 can query S3 data without any additional infrastructure and you only pay for the queries that you run This makes t he ability to analyze vast amounts of unstruc tured data accessible to any data lake user who can use SQL and makes it far more cost effective than the 
traditional method of performing an ETL process creating a Hadoop cluster or data warehouse loading th e transformed data into these environments and then running query jobs AWS Glue as described in the previous sections provides the data discovery and ETL capabilities and Amazon Athena and Amazon Redshift Spectrum provide the inplace querying capabilities Amazon Athena Amazon Athena is an interactive query service that makes it easy for you to analyze data directly in Amazon S3 using standard SQL With a few actions in the AWS Management Console you can use Athena directly against data assets stored in the data lake and begin using standard SQL to run ad hoc queries and get results in a mat ter of seconds Athena is serverless so there is no infrastructure to set up or manage and you only pay for the volume of data assets scanned during the queries you run Athena scales automatically —executing queries in parallel —so results are fast even with large datasets and complex queries You can use Athena to process unstructured semi structured and structured data sets Supported data asset formats include CSV JSON or columnar data formats such as Apache Parquet and Apache ORC Athena integrate s with Amazon QuickSight for easy visualization It can also be used with third party reporting and business intelligence tools by connecting these tools to Athena with a JDBC driver Amazon Redshift Spectrum A second way to perform in place querying of da ta assets in an Amazon S3 based data lake is to use Amazon Redshift Spectrum Amazon Redshift is a large scale managed data warehouse service that can be used with data assets in Amazon S3 However data assets must be loaded into Amazon Redshift before q ueries can be run By contrast Amazon Redshift Spectrum enables you to run Amazon Redshift SQL queries directly against massive amounts of data — up to exabytes —stored in an Amazon S3 based data lake Amazon Redshift Spectrum applies sophisticated query opt imization scaling processing across thousands of nodes so results are fast —even with large data sets and complex ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 21 queries Redshift Spectrum can directly query a wide variety of data assets stored in the data lake including CSV TSV Parquet Sequence and RCFile Since Redshift Spectrum supports the SQL syntax of Amazon Redshift you can run sophisticated queries us ing the same BI tools that you use today You also have the flexibility to run queries that span both frequently accessed data assets that are s tored loca lly in Amazon Redshift and your full da ta sets stored in Amazon S3 Because Amazon Athena and Amazon R edshift share a common data catalog and common data formats you can use both Athena and Redshift Spectrum against the same data assets You would typically use Athena for ad hoc data discovery and SQL querying and then use Redshift Spectrum for more comp lex queries and scenarios where a large number of data lake users want to run concurrent BI and reporting workloads The Broader Analytics Portfolio The power of a data lake built on AWS is that data assets get ingested and stored in one massively scalable low cost performant platform —and that data discovery transformation and SQL querying can all be done in place using innovative AWS services like AWS Glue Amazon Athena and Amazon Redshift Spectrum In addition there are a wide variety of other AWS services that can be directly integrated with Amazon S3 to create any number of sophisticated analytics machine learning 
and artificial intelligence (AI) data processing pipelines This allows you to quickly solve a wide range of analytics business challenges on a single platform against common data assets without having to worry about provisioning hardware and installing and configuring complex software packages before loading data and performin g analytics Plus you only pay for what you consume Some of the most common AWS services that can be used with data assets in an Amazon S3 based data lake are described next Amazon EMR Amazon EMR is a highly di stributed computing framework used to quick ly and easily process data in a cost effective manner Amazon EMR uses Apache Hadoop an open sour ce framework to distribute data and processing across a n elastically resizable cluster of EC2 instances and allows you to use all the common Hadoop tools suc h as Hive Pig Spark and HBase Amazon EMR does all the heavily lifting involved with provisioning managing and maintaining the infrastructure a nd software of a Hadoop cluster and is integrated directly with Amazon S3 With Amazon EMR you can launch a persistent cluster that stays ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 22 up indefinitely or a temporary cluster that terminates after the analysis is complete In either scenario you only pay for the hours the cluster is up Amazon EMR supports a variety of EC2 instance types encompassing genera l purpose compute memory and storage I/O optimized (eg T2 C4 X1 and I3 ) instances and all Amazon EC2 pricing options (On Demand Reserved and Spot) When you launch an EMR cluster (also called a job flow ) you choose how many and what type of EC2 instances to provision Companies with many different lines of business and a large number of users can build a single data lake solution store their data assets in Amazon S3 and then spin up multiple EMR clusters to share data assets in a multi tenant fashion Amazon Machine Learning Machine learning is another important data lake use case Amazon Machine Learning (ML) is a data lake service that makes it easy for anyone to use predictive analytics and machine learnin g technology Amazon ML provides visualization tools and wizards to guide you through the process of creating ML models without having to learn complex algorithms and technology After the models are ready Amazon ML makes it easy to obtain predictions for your application using API operations You don’ t have to implement custom prediction generation code or manage any infrastructure Amazon ML can create ML models based on data stored in Amazon S3 Amazon Redshift or Amazon RDS Built in wizards guide you through the steps of interactively exploring your data training the ML model evaluating the model quality and adjusting outputs to align with business goals Af ter a model is ready you can request predictions either in batches or by using the low latency real time API As discussed earlier in this paper a data lake built on AWS greatly enhances machine learning capabilities by combining Amazon ML with large historical data sets than can be cost effectively stored on Amazon Glacier but can be easily recalled when needed to train new ML models Amazon QuickSight Amazon QuickSight is a very fast easy touse business analytics service that makes it easy for you to build visualizations perform ad hoc analysis and quickly get business insights from your data assets store d in the data lake anytime on any device You can use Amazon QuickSight to seamlessly discover AWS data sources such as Amazon 
Redshift Amazon RDS Amazon Auror a Amazon Athena and Amazon S3 connect to any or all of these data source s and ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 23 data assets and get insights from this data in minutes Amazon QuickSight enables organizations using the data lake to seamlessly scale their business analytics capabilities to hundreds of thousands of users It delivers fast and responsive query performance by using a robust in memory engine (SPICE) Amazon Rekognition Another innovative data lake service is Amazon Rekognition which is a fully managed image recognition service powered by deep learning run agai nst image data assets stored in Amazon S3 Amazon Rekognition has been built by Amazon’s Computer Vision teams over many years and already analyzes billions of images every day The Amazon Rekognition easy touse API detects thousands of objects and scene s analyzes faces compares two faces to measure similarity and verifies faces in a collection of faces With Amazon Rekognition you can easily build applications that search based on visual content in images analyze face attributes to identify demograp hics implement secure face based verification and more Amazon Rekognition is built to analyze images at scale and integrates seamlessly with data assets stored in Amazon S3 as well as AWS Lambda and other key AWS services These are just a few examples of power ful data processing and analytics tools that can be integrated with a data lake built on AWS See the AWS website for more examples and for the latest list of innovative AWS services available for data lake users Future Proofing the Data Lake A data lake built on AWS can immediately solve a broad r ange of business analytics challenges and quickly provide value to your business H owever business needs are constantly evolving AWS and the analytics partner ecosystem are rapidly evolving and adding new services and capabilities a s businesses and their data lake users achieve more experience and analytics sophistication over time Therefore it’s important that the data lake can seamlessly and non disruptively evolve as needed AWS futureproofs your data lake with a standardized storage solution that grows with your organization by ingesting and storing all of your business’ s data assets on a platform with virtually unlimited scalability and well defined APIs and integrat es with a wide variety of data processing tools This allow s you to ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 24 add new capabilities to your data lake as you need them without infrastructure limitations or barriers Additionally you can perform agile analytics experiments against data lake assets to quickly explore new processing methods and tools and then scale the promising ones into production without the need to build new infrastructure duplicate and/or migrate data and have users migrate to a new platform In closing a data lake built on AWS allows you to evolve your business around your data assets and to use these data assets to quickly and agilely drive more business value and competitive differentiation without limits Contributors The following individuals and organizations co ntributed to this document: • John Mallory Business Development Manager AWS Storage • Robbie Wright Product Marketing Manager AWS Storage Document Revisions Date Description July 2017 First publication Archived
General
Optimizing_Multiplayer_Game_Server_Performance_on_AWS
Optimizing Multiplayer Game Server Performance on AWS April 201 7 Archived This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Amazon EC2 Instance Type Considerations 1 Amazon EC2 Compute Optimized Instance Capabilities 2 Alternative Compute Instance Options 3 Performance Optimization 3 Networking 4 CPU 13 Memory 27 Disk 34 Benchmarking and Testing 34 Benchmarking 34 CPU Performance Analysis 36 Visual CPU Profiling 36 Conclusion 39 Contributors 40 ArchivedAbstract This whitepaper discusses the exciting use case of running multiplayer game servers in the AWS Cloud and the optimizations that you can make to achieve the highest level of performance In this whitepaper we provide you the information you need to take advantage of the Amazon Elastic Compute Cloud (EC2) family of instances to get the peak performance required to successfully run a multiplayer game server on Linux in AWS This paper is intended for technical audiences that have experience tuning and optimizing Linuxbased servers ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 1 Introduction Amazon Web Services (AWS) provides benefits for every conceivable gaming workload including PC/console single and multiplayer games as well as mobile based socialbased and webbased games Running PC/console multiplayer game servers in the AWS Cloud is particularly illustrative of the success and cost reduction that you can achieve with the cloud model over traditional on premises data centers or colocations Multiplayer game servers are based on a client/server network architecture in which the game server holds the authoritative source of events for all clients (players) Typically after p layers send their actions to the server the server runs a simulation of the game world using all of these actions and sends the results back to each client With Amazon Elastic Compute Cloud (Amazon EC2) you can create and run a virtual server (called an instance ) to host your client/server multiplayer game1 Amazon EC2 provides resizable compute capacity and supports Single Root I/O Virtualization (SRIOV) high frequency processors For the compute family of instances Amazon EC2 will support up to 72 vCPUs (36 physical cores) when we launch the C5 computeoptimized instance type in 2017 This whitepaper discusses how to optimize your Amazon EC2 Linux multiplayer game server to achieve the best performance while maintaining scalability elasticity and global reach We start with a brief description of the performance capabilities of the compute optimized instance family 
and then dive into optimization techniques for networking, CPU, memory, and disk. Finally, we briefly cover benchmarking and testing.

Amazon EC2 Instance Type Considerations

To get the maximum performance out of an Amazon EC2 instance, it is important to look at the compute options available. In this section we discuss the capabilities of the Amazon EC2 compute optimized instance family that make it ideal for multiplayer game servers.

Amazon EC2 Compute Optimized Instance Capabilities

The current generation C4 compute optimized instance family is ideal for running your multiplayer game server. (The C5 instance type, announced at AWS re:Invent 2016, will be the recommended game server platform when it launches.) C4 instances run on hardware using the Intel Xeon E5-2666 v3 (Haswell) processor. This is a custom processor designed specifically for AWS. The following table lists the capabilities of each instance size in the C4 family.

Instance Size | vCPU Count | RAM (GiB) | Network Performance | EBS-Optimized: Max Bandwidth (Mbps)
c4.large      | 2          | 3.75      | Moderate            | 500
c4.xlarge     | 4          | 7.5       | Moderate            | 750
c4.2xlarge    | 8          | 15        | High                | 1000
c4.4xlarge    | 16         | 30        | High                | 2000
c4.8xlarge    | 36         | 60        | 10 Gbps             | 4000

As the table shows, the c4.8xlarge instance provides 36 vCPUs. Since each vCPU is a hyperthread of a full physical CPU core, you get a total of 18 physical cores with this instance size. Each core runs at a base of 2.9 GHz but can run at 3.2 GHz all core turbo (meaning that each core can run simultaneously at 3.2 GHz, even if all the cores are in use) and at a max turbo of 3.5 GHz (possible when only a few cores are in use).

We recommend the c4.4xlarge and c4.8xlarge instance sizes for running your game server because they get exclusive access to one or both of the two underlying processor sockets, respectively. Exclusive access guarantees that you get a 3.2 GHz all core turbo for most workloads. The primary exception is for applications running Advanced Vector Extension (AVX) workloads. If you run AVX workloads on the c4.8xlarge instance, the best you can expect in most cases is 3.1 GHz when running three cores or less. It is important to test your specific workload to verify the performance you can achieve. The following table shows a comparison between the c4.4xlarge and c4.8xlarge instances for AVX and non-AVX workloads.

C4 Instance Size and Workload | Max Core Turbo Frequency (GHz)                | All Core Turbo Frequency (GHz)                               | Base Frequency (GHz)
c4.8xlarge, non-AVX workload  | 3.5 (when fewer than about 4 vCPUs are active) | 3.2                                                          | 2.9
c4.8xlarge, AVX workload      | ≤ 3.3                                          | ≤ 3.1, depending on the workload and number of active cores | 2.5
c4.4xlarge, non-AVX workload  | 3.2                                            | 3.2                                                          | 2.9
c4.4xlarge, AVX workload      | 3.2                                            | ≤ 3.1, depending on the workload and number of active cores | 2.5

Alternative Compute Instance Options

There are situations, for example for some role-playing games (RPGs) and multiplayer online battle arenas (MOBAs), where your game server can be more memory bound than compute bound. In these cases the M4 instance type may be a better option than the C4 instance type, since it has a higher memory to vCPU ratio. The compute optimized instance family has a higher vCPU to memory ratio than other instance families, while the M4 instance has a higher memory to vCPU ratio. M4 instances use a Haswell processor for the m4.10xlarge and m4.16xlarge sizes; smaller sizes use either a Broadwell or a Haswell processor. The M4 instance type is similar to the C4 instance type in networking performance and has plenty of bandwidth for game servers.
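Whichever instance size you choose, it is worth confirming on a running instance how the advertised vCPUs map to sockets, physical cores, and hyperthreads before you start tuning. A minimal check from the Linux command line might look like the following (the lscpu filter shown is only one way to trim the output, and exact figures will vary by instance size):

# Confirm the processor model; C4 instances report the custom Intel Xeon E5-2666 v3
grep -m1 "model name" /proc/cpuinfo

# Show how the vCPUs map to sockets, physical cores, and hyperthreads
lscpu | grep -E "^(CPU\(s\)|Thread|Core|Socket)"

# List the two hyperthread siblings that share physical core 0
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

On a c4.8xlarge, for example, you would expect 36 CPUs, 2 threads per core, 9 cores per socket, and 2 sockets, matching the 18 physical cores described above.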
to the C4 instance type in networking performance and has plenty of bandwidth for game servers Performance Optimization There are many performance options for Linux servers with networking and CPU being the two most important This section documents the performance options that AWS gaming customers have found the most valuable and /or the options that are the most appropriate for running game servers on virtual machines (VMs) The performance options are categorized into four sections: networking CPU memory and disk This is not an allinclusive list of performance tuning options and not all of the options will be appropriate for every gaming workload We strongly recommend testing these settings before implementing them in production ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 4 This section assumes that you are running your instance in a VPC created with Amazon Virtual Private Cloud (VPC)4 that uses an Amazon Machine Image (AMI)5 with a hardware virtual machine (HVM) All of the instructions and settings that follow have been verified on the Amazon Linux AMI 201609 using the 44 233154 kernel but they should work with all future releases of Amazon Linux Networking Networking is one of the most important areas for performance tuning Multiplayer client/server games are extremely sensitive to latency and dropped packets A list of performance tuning options for networking is provided in the following table Performance Tuning Option Summary Notes Links or Commands Deploying game servers close to players Proximity to players is the best way to reduce latency AWS has numerous Regions across the globe List of AWS Regions Enhanced networking Improved networking performance Nearly every workload should benefit No downside Linux /Windows UDP Receive buffers Helps prevent dropped packets Useful when the latency bet ween client and server is high Little downside but should be tested Add the following to /etc/sysctlconf: netcorermem_default = New_Value netcorermem_max = New_Value (Recommend start by doubling the current values set for your system ) Busy polling Reduce latency of incoming packet processing Can increase CPU utilization Add the following to /etc/sysctlconf: netcorebusy_read = New_Value netcore busy_poll = New_Value (Recommend testing a value of 50 first then 100 ) ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 5 Performance Tuning Option Summary Notes Links or Commands Memory Helps prevent dropped packets Add the following to /etc/sysctlconf: netipv4udp_mem = New_Value New_Value New_Value (Recommend doubling the current values set for your system) Backlog Helps prevent dropped packets Add the following to /etc/sysctlconf: netcorenetdev_max_backlog= New_Value (Recommend doubling the current values set for your system) Transmit and receive queues Possible performance boost by disabling hyperthreading The following recommendations cover how to reduce latency avoid dropped packets and obtain optimal networking performance for your game servers Deploying Game Servers Close to Players Deploying your game servers as close as possible to your players is a key element for good player experience AWS has numerous Regions across the world which allows you to deploy your game servers close to your players For the most current list of AWS Regions and Availability Zones see https://awsamazoncom/aboutaws/globalinfrastructure/ 6 You can package your instance AMI and deploy it to as many Regions as you choose Customers often 
deploy AAA PC/console games in almost every available Region. As you determine where your players are globally, you can decide where to deploy your game servers to provide the best experience possible.

Enhanced Networking

Enhanced networking is another performance tuning option.7 Enhanced networking uses single root I/O virtualization (SR-IOV) and exposes the network card directly to the instance without needing to go through the hypervisor.8 This allows for generally higher I/O performance, lower CPU utilization, higher packets per second (PPS) performance, lower inter-instance latencies, and very low network jitter. The performance improvement provided by enhanced networking can make a big difference for a multiplayer game server.

Enhanced networking is only available for instances running in a VPC using an HVM AMI, and only for certain instance types such as the C4, R4, R3, I3, I2, M4, and D2. These instance types use the Intel 82599 Virtual Function interface (which uses the "ixgbevf" Linux driver). In addition, the X1, R4, P2, and m4.16xlarge (and soon the C5) instances support enhanced networking using the Elastic Network Adapter (ENA). The Amazon Linux AMI includes the necessary drivers by default. Follow the Linux or Windows instructions to install the driver for other AMIs.9, 10 It is important to have the latest ixgbevf driver, which can be downloaded from Intel's website.11 The minimum recommended version for the ixgbevf driver is version 2.14.2. To check the driver version running on your instance, run the following command:

ethtool -i eth0

User Datagram Protocol (UDP)

Most first-person shooter games and other similar client/server multiplayer games use UDP as the protocol for communication between clients and game servers. The following sections lay out four UDP optimizations that can improve performance and reduce the occurrence of dropped packets.

Receive Buffers

The first UDP optimization is to increase the default value for the receive buffers. Having too little UDP buffer space can cause the operating system kernel to discard UDP packets, resulting in packet loss. Increasing this buffer space can be helpful in situations where the latency between the client and server is high. The default value for both rmem_default and rmem_max on Amazon Linux is 212992. To see the current default values for your system, run the following commands:

cat /proc/sys/net/core/rmem_default
cat /proc/sys/net/core/rmem_max

A common approach to allocating the right amount of buffer space is to first double both values and then test the performance difference this makes for your game server. Depending on the results, you may need to decrease or increase these values. Note that the rmem_default value should not exceed the rmem_max value. To configure these parameters to persist across reboots, set the new rmem_default and rmem_max values in the /etc/sysctl.conf file:

net.core.rmem_default = New_Value
net.core.rmem_max = New_Value

Whenever making changes to the sysctl.conf file, you should run the following command to refresh the configuration:

sudo sysctl -p

Busy Polling

A second UDP optimization is busy polling, which can help reduce network receive path latency by having the kernel poll for incoming packets. This will increase CPU utilization but can reduce delays in packet processing. On most Linux distributions, including Amazon Linux, busy polling is disabled by default.
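Before changing anything, it can help to confirm what your particular AMI reports for these two settings. The following quick check is a minimal sketch using standard Linux procfs paths and the sysctl utility; a value of 0 simply means busy polling is off:

# Check the current busy polling settings (0 means disabled)
cat /proc/sys/net/core/busy_read
cat /proc/sys/net/core/busy_poll

# The same values can be read through sysctl
sysctl net.core.busy_read net.core.busy_poll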
We recommend that you start with a value of 50 for both busy_read and busy_poll and then test what difference this makes for your game server. Busy_read is the number of microseconds to wait for packets on the device queue for socket reads, while busy_poll is the number of microseconds to wait for packets on the device queue for socket poll and selects. Depending on the results, you may need to increase the value to 100. To configure these parameters to persist across reboots, add the new busy_read and busy_poll values to the /etc/sysctl.conf file:

net.core.busy_read = New_Value
net.core.busy_poll = New_Value

Again, run the following command to refresh the configuration after making changes to the sysctl.conf file:

sudo sysctl -p

UDP Buffers

A third UDP optimization is to change how much memory the UDP buffers use for queueing. The udp_mem option configures the number of pages the UDP sockets can use for queueing. This can help reduce dropped packets when the network adapter is very busy. This setting is a vector of three values that are measured in units of pages (4096 bytes). The first value, called min, is the minimum threshold before UDP moderates memory usage. The second value, called pressure, is the memory threshold after which UDP will moderate the memory consumption. The final value, called max, is the maximum number of pages available for queueing by all UDP sockets. By default, Amazon Linux on the c4.8xlarge instance uses a vector of 1445727 1927636 2891454, while the c4.4xlarge instance uses a vector of 720660 960882 1441320. To see the current default values, run the following command:

cat /proc/sys/net/ipv4/udp_mem

A good first step when experimenting with new values for this setting is to double the values and then test what difference this makes for your game server. It is also good to adjust the values so they are multiples of the page size (4096 bytes). To configure these parameters to persist across reboots, add the new UDP buffer values to the /etc/sysctl.conf file:

net.ipv4.udp_mem = New_Value New_Value New_Value

Run the following command to refresh the configuration after making changes to the sysctl.conf file:

sudo sysctl -p

Backlog

The final UDP optimization that can help reduce the chance of dropped packets is to increase the backlog value. This optimization will increase the queue size for incoming packets for situations where the interface is receiving packets at a faster rate than the kernel can handle. On Amazon Linux, the default value of the queue size is 1000. To check the default value, run the following command:

cat /proc/sys/net/core/netdev_max_backlog

We recommend that you double the default value for your system and then test what difference this makes for your game server. To configure these parameters to persist across reboots, add the new backlog value to the /etc/sysctl.conf file:

net.core.netdev_max_backlog = New_Value

Run the following command to refresh the configuration after making changes to the sysctl.conf file:

sudo sysctl -p
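Because all four UDP optimizations end up in the same file, it can be convenient to review them together. The following /etc/sysctl.conf fragment is a sketch only: the numbers assume the Amazon Linux defaults quoted above for a c4.4xlarge instance and simply double them as a first experiment, so treat them as illustrative starting points to be tested (and, for udp_mem, optionally rounded to multiples of the 4096-byte page size) rather than recommended values:

# Example /etc/sysctl.conf fragment - UDP tuning starting points
# (values are doubled Amazon Linux defaults for a c4.4xlarge; test before use)
net.core.rmem_default = 425984              # 2 x 212992
net.core.rmem_max = 425984                  # 2 x 212992
net.core.busy_read = 50                     # first test value for busy polling
net.core.busy_poll = 50
net.ipv4.udp_mem = 1441320 1921764 2882640  # 2 x c4.4xlarge defaults
net.core.netdev_max_backlog = 2000          # 2 x default of 1000

Apply the changes with sudo sysctl -p and re-test your game server after each individual change so that any difference can be attributed to a specific setting.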
Transmit and Receive Queues

Many game servers put more pressure on the network through the number of packets per second being processed rather than on the overall bandwidth used. In addition, I/O wait can become a bottleneck if one of the vCPUs gets a large volume of interrupt requests (IRQs). Receive Side Scaling (RSS) is a common method used to address these networking performance issues.12 RSS is a hardware option that can provide multiple receive queues on a network interface controller (NIC). For Amazon Elastic Compute Cloud (Amazon EC2), the NIC is called an Elastic Network Interface (ENI).13

RSS is enabled on the C4 instance family, but changes to the configuration of RSS are not allowed. The C4 instance family provides two receive queues for all of the instance sizes when using Linux. Each of these queues has a separate IRQ number and is mapped to a separate vCPU. Running the command ls -l /sys/class/net/eth0/queues on a c4.8xlarge instance displays the following queues:

$ ls -l /sys/class/net/eth0/queues
total 0
drwxr-xr-x 2 root root 0 Aug 18 21:00 rx-0
drwxr-xr-x 2 root root 0 Aug 18 21:00 rx-1
drwxr-xr-x 3 root root 0 Aug 18 21:00 tx-0
drwxr-xr-x 3 root root 0 Aug 18 21:00 tx-1

To find out which IRQs are being used by the queues and how the CPU is handling those interrupts, run the following command:

cat /proc/interrupts

Alternatively, run this command to output the IRQs for the queues:

echo eth0; grep eth0-TxRx /proc/interrupts | awk '{printf " %s\n", $1}'

What follows is the reduced output when viewing the full contents of /proc/interrupts on a c4.8xlarge instance, showing just the eth0 interrupts. The first column is the IRQ for each queue. The last two columns are the process information. In this case you can see that TxRx-0 and TxRx-1 are using IRQs 267 and 268, respectively.

      CPU0  ...  CPU23  ...  CPU33
267:  634        2789        0      xen-pirq-msi-x  eth0-TxRx-0
268:  600        0           2587   xen-pirq-msi-x  eth0-TxRx-1

To verify which vCPU the queue is sending interrupts to, run the following commands (replacing IRQ_Number with the IRQ for each TxRx queue):

$ cat /proc/irq/267/smp_affinity
00000000000000000000000000800000
$ cat /proc/irq/268/smp_affinity
00000000000000000000000200000000

The previous output is from a c4.8xlarge instance. It is in hex and needs to be converted to binary to find the vCPU number. For example, the hex value 00800000 converted to binary is 00000000100000000000000000000000. Counting from the right and starting at 0, you get to vCPU 23. The other queue is using vCPU 33. Because vCPUs 23 and 33 are on different processor sockets, they are physically on different non-uniform memory access (NUMA) nodes. One issue here is that each vCPU is by default a hyperthread (but in this particular case they are each hyperthreads of the same core), so a performance boost could be seen by tying each queue to a physical core.

The IRQs for the two queues on Amazon Linux on the C4 instance family are already pinned to particular vCPUs that are on separate NUMA nodes on the c4.8xlarge instance. This default state may be ideal for your game servers. However, it is important to verify on your distribution of Linux that there are two queues that are configured for IRQs and vCPUs (which are on separate NUMA nodes). On C4 instance sizes other than the c4.8xlarge, NUMA is not an issue since the other sizes only have one NUMA node.

One option that could improve performance for RSS is to disable hyperthreading. If you disable hyperthreading on Amazon Linux, then by default the queues will be pinned to physical cores (which will also be on separate NUMA nodes on the c4.8xlarge instance). See the Hyperthreading section in this whitepaper for more information on how to disable hyperthreading.
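To avoid converting the smp_affinity bitmask by hand, most modern kernels (including the Amazon Linux kernels referenced in this paper) also expose smp_affinity_list, which reports the vCPU numbers directly. The short loop below is a sketch that assumes the interface is named eth0 and that its queues appear in /proc/interrupts with an eth0-TxRx prefix, as in the output shown above:

# Print each eth0 queue IRQ and the vCPU(s) it is allowed to interrupt
for irq in $(grep eth0-TxRx /proc/interrupts | awk -F: '{print $1}' | tr -d ' '); do
    printf "IRQ %s -> vCPU(s) %s\n" "$irq" "$(cat /proc/irq/$irq/smp_affinity_list)"
done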
If you don't pin game server processes to cores, you could prevent the Linux scheduler from assigning game server processes to the vCPUs (or cores) for the RSS queues. To do this you need to configure two options. First, in your text editor, edit the /boot/grub/grub.conf file. For the first entry that begins with "kernel" (there may be more than one kernel entry; you only need to edit the first one), add isolcpus=NUMBER at the end of the line, where NUMBER is the number of the vCPUs for the RSS queues. For example, if the queues are using vCPUs 3 and 4, replace NUMBER with "3-4".

# created by imagebuilder
default=0
timeout=1
hiddenmenu
title Amazon Linux 2014.09 (3.14.26-24.46.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-3.14.26-24.46.amzn1.x86_64 root=LABEL=/ console=ttyS0 isolcpus=NUMBER
initrd /boot/initramfs-3.14.26-24.46.amzn1.x86_64.img

Using isolcpus will prevent the scheduler from running the game server processes on the vCPUs you specify. The problem is that it will also prevent irqbalance from assigning IRQs to these vCPUs. To fix this, you need to use the IRQBALANCE_BANNED_CPUS option to ban all of the remaining CPUs. Version 1.1.0 or later of irqbalance on current versions of Amazon Linux prefers the IRQBALANCE_BANNED_CPUS option and will assign IRQs to the vCPUs specified in isolcpus in order to honor the vCPUs specified by IRQBALANCE_BANNED_CPUS. Therefore, for example, if you isolated vCPUs 3-4 using isolcpus, you would then need to ban the other vCPUs on the instance using IRQBALANCE_BANNED_CPUS.

To do this, you need to use the IRQBALANCE_BANNED_CPUS option in the /etc/sysconfig/irqbalance file. This is a 64-bit hexadecimal bit mask. The best way to find the value would be to write out the vCPUs you want to include in this value in binary format and then convert to hex. So, in the earlier example where we used isolcpus to isolate vCPUs 3-4, we would then want to use IRQBALANCE_BANNED_CPUS to exclude vCPUs 0-2 and 5-15 (assuming we are on a c4.4xlarge instance), which would be 1111111111100111 in binary and finally FFE7 when converted to hex. Add the following line to the /etc/sysconfig/irqbalance file using your favorite editor:

IRQBALANCE_BANNED_CPUS="FFE7"

The result is that vCPUs 3 and 4 will not be used by the game server processes but will be used by the RSS queues and a few other IRQs used by the system. Like everything else, all of these values should be tested with your game server to determine what the performance difference is.

Bandwidth

The C4 instance family offers plenty of bandwidth for a multiplayer game server. The c4.4xlarge instance provides high network performance, and up to 10 Gbps is achievable between two c4.8xlarge instances (or other large instance sizes like the m4.10xlarge) that are using enhanced networking and are in the same placement group.14 The bandwidth provided by both the c4.4xlarge and c4.8xlarge instances has been more than sufficient for every game server use case we have seen.

You can easily determine the networking performance for your workload on a C4 instance compared to other instances in the same Availability Zone, other instances in another Availability Zone, and most importantly to and from the Internet. Iperf is probably one of the best tools for determining network performance on Linux,15 while Nttcp is a good tool for Windows.16 The previous links also provide instructions on doing network performance testing. Outside of the placement group, you need to use a tool like Iperf or Nttcp to determine the exact network performance achievable for your game server.
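As one way to run such a test, the sketch below uses iperf3 between two instances. Note that iperf3 is not part of the base Amazon Linux AMI (it typically comes from the EPEL repository or the iperf3 project), and the 10.0.0.10 address, bitrate target, and duration are placeholders to adjust for your own environment:

# On the receiving instance (server side)
sudo yum install -y iperf3      # may require enabling the EPEL repository
iperf3 -s

# On the sending instance (client side); replace 10.0.0.10 with the server's private IP
# -u = UDP, -b 1G = target bitrate, -P 4 = four parallel streams, -t 30 = run for 30 seconds
iperf3 -c 10.0.0.10 -u -b 1G -P 4 -t 30

The UDP report at the end includes throughput, jitter, and lost datagrams, which map directly to the latency and packet-loss concerns discussed in the Networking section.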
CPU

CPU is one of the two most important performance-tuning areas for game servers. The following tuning options are summarized here and covered in detail in the rest of this section.

Clock Source – Using tsc as the clock source can improve performance for game servers. Xen is the default clock source on Amazon Linux. Add the following entry to the kernel line of the /boot/grub/grub.conf file: tsc=reliable clocksource=tsc

C-State and P-State – C-state and P-state options are optimized by default, except for the C-state on the c4.8xlarge. Setting the C-state to C1 on the c4.8xlarge should improve CPU performance; it can only be changed on the c4.8xlarge. The downside is that the 3.5 GHz max turbo will not be available; however, the 3.2 GHz all-core turbo will be. Add the following entry to the kernel line of the /boot/grub/grub.conf file: intel_idle.max_cstate=1

Irqbalance – When not pinning game servers to vCPUs, irqbalance can help improve CPU performance. It is installed and running by default on Amazon Linux; check your distribution to see if it is running.

Hyperthreading – Each vCPU is a hyperthread of a core. Performance may improve by disabling hyperthreading. Add the following entry to the kernel line of the /boot/grub/grub.conf file: maxcpus=X (where X is the number of actual cores in the instance)

CPU Pinning – Pinning the game server process to a vCPU can provide benefits in some situations. CPU pinning does not appear to be a common practice among game companies. Example: numactl --physcpubind $phys_cpu_core --membind $associated_numa_node /game_server_executable

Linux Scheduler – There are three particular Linux scheduler configuration options that can help with game servers: sudo sysctl -w 'kernel.sched_min_granularity_ns=New_Value' (recommend starting by doubling the current value set for your system), sudo sysctl -w 'kernel.sched_wakeup_granularity_ns=New_Value' (recommend starting by halving the current value), and sudo sysctl -w 'kernel.sched_migration_cost_ns=New_Value' (recommend starting by doubling the current value).

Clock Source

A clock source gives Linux access to a timeline so that a process can determine where it is in time. Time is extremely important when it comes to multiplayer game servers, given that the server is the authoritative source of events and yet each client has its own view of time and the flow of events. The kernel.org web site has a good introduction to clock sources.17 To find the current clock source:

cat /sys/devices/system/clocksource/clocksource0/current_clocksource

By default, on a C4 instance running Amazon Linux, this is set to xen. To view the available clock sources:

cat /sys/devices/system/clocksource/clocksource0/available_clocksource

This list should show xen, tsc, hpet, and acpi_pm by default on a C4 instance running Amazon Linux. For most game servers the best clock source option is TSC (Time Stamp Counter), which is a 64-bit register on each processor. In most cases TSC is the fastest, highest-precision measurement of the passage of time and is monotonic and invariant. See this xen.org article for a good discussion about TSC when it comes to Xen virtualization.18 Synchronization is provided across all processors in all power states, so TSC is considered synchronized and invariant. This means that TSC will increment at a constant rate.
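If you want to evaluate TSC before making the persistent grub change described later in this section, the clock source can also be switched at runtime. This is a sketch of a reversible test (the setting reverts to the default at the next reboot), assuming tsc shows up in your instance's available_clocksource list:

# Temporarily switch the clock source to tsc (reverts on reboot)
echo tsc | sudo tee /sys/devices/system/clocksource/clocksource0/current_clocksource

# Confirm the change took effect
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

Run your game server or benchmark before and after the switch so you can compare results, and make the change permanent in grub.conf only if it helps.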
TSC can be accessed using the rdtsc or rdtscp instructions Rdtscp is often a better option than rdtsc since rdtscp takes into account that Intel processors ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 16 sometimes use out oforder execution which can affect getting accurate time readings The recommendation for game servers is to change the clock source to TSC However you should test this thoroughly for your workloads To set the clock source to TSC edit the /boot/grub/grubconf file with your editor of choice For the first entry that begins with “kernel” (note that there may be more than one kernel entry you only need to edit the first one) add tsc=reliable clocksource=tsc at the end of the line # created by imagebuilder default=0 timeout=1 hiddenmenu title Amazon Linux 201409 (31426 2446amzn1x86_64) root (hd00) kernel /boot/vmlinuz 31426 2446amzn1x86_64 root=LABEL=/ console=ttyS0 tsc=reliable clocksource=tsc initrd /boot/initramfs 31426 2446amzn1x86_64img Processor State Control (CStates and PStates) Processor State Controls can only be modified on the c48xlarge instance (also configurable on the d28xlarge m410xlarge and x132xlarge instances )19 C states control the sleep levels that a core can enter when it is idle while Pstates control the desired performance (in CPU frequency) for a core Cstates are idle power saving states while Pstates are execution power saving states Cstates start at C0 which is the shallowest state where the core is actually executing functions and go to C6 which is the deepest state where the core is essentially powered off The default Cstate for the c48xlarge instance is C6 For all of the other instance sizes in the C4 family the default is C1 This is the reason that the 35 GHz max turbo frequency is only available on the c48xlarge instance Some vCPUs need to be in a deeper sleep state than C1 in order for the cores to hit 35 GHz An option on the c48xlarge instance is to set C1 as the deepest Cstate to prevent the cores from going to sleep That reduces the processor reaction latency but also prevents the cores from hitting the 35 GHz Turbo Boost if only a few cores are active; it would still allow the 32 GHz all core turbo Therefore ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 17 you would be trading the possibility of achieving 35 GHz when a few cores are running for the reduced reaction latency Your results will depend on your testing and application workloads If 32 GHz all core turbo is acceptable and you plan to utilize all or most of the cores on the C48xlarge instance the n change the Cstate to C1 Pstates start at P0 where Turbo mode is enabled and go to P15 which represents the lowest possible frequency P0 provides the maximum baseline frequency The default Pstate for all C4 instance sizes is P0 There is really no reason for changing this for gaming workloads Turbo Boost mode is the desirable state The following table describes the C and Pstates for the c44xlarge and c48xlarge Instance size Default Max C State Recommended setting Default PState Recommended setting c44xlarge and smaller 1 1 0 0 c48xlarge 6a 1 0 0 a) Running cat /sys/module/intel_idle/parameters/max_cstate will show the max Cstate as 9 It is actually set to 6 which is the maximum possible value Use turbostat to see the Cstate and max turbo frequency that can be achieved on the c48xlarge instance Again these instructions were tested using the Amazon Linux AMI and only work on the c48xlarge instance but not 
on any of the other instance sizes in the C4 family First run the following turbostat command to install stress on your system (If turbostat is not installed on your system then install that too) sudo yum install stress The following command stress es two cores (ie two hyperthreads of two different physical cores): sudo turbostat debug stress c 2 t 60 ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 18 Here is a truncated printout of the results of running the command: Definitions: AVG_MHz: number of cycles executed divided by time elapsed %Busy: percent of time in "C0" state Bzy_MHz: average clock rate while the CPU was busy (in "c0" state) TSC_MHz: average MHz that the TSC ran during the entire interval The output shows that vCPUs 9 and 20 spent most of the time in the C0 state (%Busy) and hit close to the maximum turbo of 35 GHz (Bzy_MHz) vCPUs 2 and 27 the other hyperthreads of these cores are sitting in C1 C state (CPU% c1) waiting for instructions A frequency close to 35 GHz was achievable because the default Cstate on the c48xlarge instance was C6 and so most of the cores were in the C6 state (CPU%c6) ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 19 Next try stressing all 36 vCPUs to see the 32 GHz All Core Turbo: sudo turbostat debug stress c 36 t 60 Here is a truncated printout of the results of running the command: You can see that all of the vCPUs are in C0 for over 99% of the time (%Busy) and that they are all hitting 32 GHz (Bzy_MHz) when in C0 To set the CState to C1 edit the /boot/grub/grubconf file with your editor of choice For the first entry that begins with “kernel” (there may be more than one kernel entry you only need to edit the first one) add intel_idlemax_cstate=1 at the end of the line to set C1 as the deepest C state for idle cores: ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 20 # created by imagebuilder default=0 timeout=1 hiddenmenu title Amazon Linux 201409 (31426 2446amzn1x86_64) root (hd00) kernel /boot/vmlinuz 31426 2446amzn1x86_64 root=LABEL=/ console=ttyS0 intel_idlemax_cstate=1 initrd /boot/initramfs 31426 2446amzn1x86_64img Save the file and exit your editor Reboot your instance to enable the new kernel option Now rerun the turbostat command to see what changed after setting the Cstate to C1: sudo turbostat debug stress c 2 t 10 Here is a truncated printout of the results of running the command: ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 21 The output in the table above shows that all of the cores are now at a Cstate of C1 The maximum average frequency of the two vCPUs that were stressed vCPUs 16 and 2 in the example above is 32 GHz (Bzy_MHz) The maximum turbo of 35 GHz is no longer available since all of the vCPUs are at C1 Another way to verify that the Cstate is set to C1 is to run the following command: cat /sys/module/intel_idle/parameters/max_cstate Finally you may be wondering what the performance cost is when a core switches from C6 to C1 You can query the cpuidle file to show the exit latency in microseconds for various Cstates There is a latency penalty each time the CPU transitions between Cstates ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 22 In the default Cstate cpuidle shows that to move from C6 to C0 requires 133 microseconds: $ find /sys/devices/system/cpu/cpu0/cpuidle name latency o name name | xargs cat POLL 0 C1HSW 2 
C1EHSW 10 C3HSW 33 C6HSW 133 After you change the Cstate default to C1 you can see the difference in CPU idle Now we see that to move from C1 to C0 takes only 2 microseconds We have cut the latency by 131 microseconds by setting the vCPUs to C1 $ find /sys/devices/system/cpu/cpu0/cpuidle name latency o name name | xargs cat POLL 0 C1HSW 2 The instructions above are only relevant for the c48xlarge instance For the c44xlarge instance (and smaller instance sizes in the C4 family) the Cstate is already at C1 and all core turbo 32 GHz is available by default Turbostat will not show that the processors are exceeding the base of 29 GHz One problem is that even when using the debug option for turbostat the c44xlarge instance does not show the Avg_MHz or the Bzy_MHz values like in the output shown above for the c48xlarge instance One way to verify that the vCPUs on the c44xlarge instance are hitting the 32 GHz all core turbo is to use the showboost script from Brendan Gregg20 ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 23 For this to work on Amazon Linux you need to install the msr tools To do this run these commands: sudo yum groupin stall "Development Tools" wget https://launchpadnet/ubuntu/+archive/primary/+files/msr tools_13origtargz tar –zxvf msr tools_13origtargz sudo make sudo make install cd msrtools_13 wget https://rawgithubusercontentcom/brendangregg/msr cloud tools/master/showboost chmod +x showboost sudo /showboost The output only shows vCPU 0 but you can modify the options section to change the vCPU that will be displayed To show the CPU frequency run your game server or use turbostat stress and then run the showboost command to view the frequency for a vCPU Irqbalance Irqbalance is a service that distributes interrupts over the cores in the system to improve performance Irqbalance is recommended for most use cases except where you are pinning game servers to specific vCPUs or cores In that case disabling irqbalance may make sense Please test this with your specific workloads to see if there is a difference By default irqbalance is running on the C4 instance family To check if irqbalance is running on your instance run the following command: sudo service irqbalance status Irqbalance can be configured in the /etc/sysconfig/irqbalance file You want to see a fairly even distribution of interrupts across all the vCPUs You can view the status of interrupts to see if they are properly being distributed across vCPUs by running the following command: ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 24 cat /proc/interrupts Hyperthreading Each vCPU on the C4 instance family is a hyperthread of a physical core Hyperthreading can be disabled if you determine that this has a detrimental impact on the performance of your application However many gaming customers do not find a need to disable hyperthreading The table below shows the number of physical cores in each C4 instance size Instance Name vCPU Count Physical Core Count c4large 2 1 c4xlarge 4 2 c42xlarge 8 4 c44xlarge 16 8 c48xlarge 36 18 All of the vCPUs can be viewed by running the following: cat /proc/cpuinfo To get more specific output you can use the following: egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo In this output the “processor” is the vCPU number The “physical id” shows the processor socket ID For any C4 instance other than the c48xlarge this will be 0 The “core id” is the physical core number Each 
entry that has the same “physical id” and “core id” will be hyperthreads of the same core Another way to view the vCPUs pairs (ie hyperthreads) of each core is to look at the thread_siblings_list for each core This will show two numbers that are ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 25 the vCPUs for each core Change the X in “cpuX” to the vCPU number that you want to view cat /sys/devices/system/cpu/cpu X/topology/thread_siblings_list To disable hyperthreading edit the /boot/grub/grubconf file with your editor of choice For the first entry that begins with “kernel” (there may be more than one kernel entry you only need to edit the first one) add maxcpus=NUMBER at the end of the line where NUMBER is the number of actual cores in the C4 instance size you are using Refer to the table above on the number of physical cores in each C4 instance size # created by imagebuilder default=0 timeout=1 hiddenmenu title Amazon Linux 201409 (31426 2446amzn1x86_64) root (hd00) kernel /boot/vmlinuz 31426 2446amzn1x86_64 root=LABEL=/ console=ttyS0 maxcpus=18 initrd /boot/initramfs 31426 2446amzn1x86_64img Save the file and exit your editor Reboot your instance to enable the new kernel option Again this is one of those settings that you should test to determine if it provides a performance boost for your game This setting would likely need to be combined with CPU pinning before it would provide any performance boost In fact disabling hyperthreading without using pinning may degrade performance Many major AAA games running on AWS do not actually disable hyperthreading If there is no performance boost you can avoid this setting to avoid the administrative overhead of having to maintain this on each of your game servers CPU Pinning Many of the game server processes we see usually have a main thread and then a few ancillary threads Pinning the process for each game server to a core ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 26 (either a vCPU or physical core) is definitely an option but not a configuration we often see Usually pinning is done in situations where the game engine truly needs exclusive access to a core Often game companies simply allow the Linux scheduler to handle this Again this is something that should be tested but if the performance is sufficient without pinning it can save you administrative overhead to not have to worry about pinning As will be discussed in the NUMA section you can pin a process to both a CPU core and a NUMA node by running the following command (replacing the values for $phys_cpu_core and $associated_numa_node in addition to the game_server_executable name ): “numactl – physcpubind $phys_cpu_core –membind $associated_numa_node /game_server_executable ” Linux Scheduler The default Linux scheduler is called the Completely Fair Scheduler (CFS) 21 and it is responsible for executing processes by taking care of the allocation of CPU resources The primary goal of CFS is to maximize utilization of the vCPUs and in turn provide the best overall performance If you don’t pin game server processes to a vCPU then the Linux scheduler assigns threads for these processes There are a few parameters for tuning the Linux scheduler that can help with game servers The primary goal of the three parameters documented below is to keep tasks on processors as long as reasonable given the activity of the task We focus on the scheduler minimum granularity the scheduler wakeup granularity and the scheduler 
migration cost values. To view the default value of all of the kernel.sched options, run the following command:

sudo sysctl -A | grep -v "kernel.sched_domain" | grep "kernel.sched"

The scheduler minimum granularity value configures the time a task is guaranteed to run on a CPU before being replaced by another task. By default, this is set to 3 ms on the C4 instance family when running Amazon Linux. This value can be increased to keep tasks on the processors longer. An option would be to double this setting to 6 ms. Like all other performance recommendations in this whitepaper, these settings should be tested thoroughly with your game server. This and the other two scheduler commands do not persist the setting across reboots, so it needs to be done in a startup script:

sudo sysctl -w 'kernel.sched_min_granularity_ns=New_Value'

The scheduler wakeup granularity value affects the ability of tasks being woken to replace the currently running task. The lower the value, the easier it will be for the task to force removal. By default, this is set to 4 ms on the C4 instance family when running Amazon Linux. You have the option of halving this value to 2 ms and testing the result. Further reductions may also improve the performance of your game server.

sudo sysctl -w 'kernel.sched_wakeup_granularity_ns=New_Value'

The scheduler migration cost value sets the duration of time after a task's last execution during which the task is still considered "cache hot" when the scheduler makes migration decisions. Tasks that are "cache hot" are less likely to be migrated, which helps reduce the possibility the task will be migrated. By default, this is set to 4 ms on the C4 instance family when running Amazon Linux. You have the option to double this value to 8 ms and test.

sudo sysctl -w 'kernel.sched_migration_cost_ns=New_Value'

Memory

It is important that any customers running game servers on the c4.8xlarge instance pay close attention to the NUMA information.

NUMA – On the c4.8xlarge, NUMA can become an issue since there are two NUMA nodes. None of the C4 instance sizes smaller than the c4.8xlarge will have NUMA issues, since they all have one NUMA node. There are three options to deal with NUMA: CPU pinning, NUMA balancing, and the numad process.

Virtual Memory – A few virtual memory tweaks can provide a performance boost for some game servers. Add the following to /etc/sysctl.conf: vm.swappiness = New_Value (recommend starting by halving the current value set for your system); vm.dirty_ratio = New_Value (recommend going with the default value of 20 on Amazon Linux); vm.dirty_background_ratio = New_Value (recommend going with the default value of 10 on Amazon Linux).

NUMA

All of the current generation EC2 instances support NUMA. NUMA is a memory architecture used in multiprocessing systems that allows threads to access both the local memory and memory local to other processors, or a shared memory platform. The key concern here is that remote memory access is much slower than local memory access. There is a performance penalty when a thread accesses remote memory, and there are issues with interconnect contention. For an application that is not able to take advantage of NUMA, you want to
ensure that the processor only uses the local memory as much as possible This is only an issue for the c48xlarge instance because you have access to two processor sockets that each represent a separate NUMA node NUMA is not a concern on the smaller instances in the C4 family since you are limited to a ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 29 single NUMA node In addition the NUMA topology will remain fixed for the lifetime of an instance The c48xlarge instance has two NUMA nodes To view details on these nodes and the vCPUs that are associated with each node run the following command: numactl hardware To view the NUMA policy settings run: numactl show You can also view this information in the following directory (just look in each of the NUMA node directories): /sys/devices/system/node Use the numastat tool to view perNUMAnode memory statistics for processes and the operating system The –p option allows you to view this for a single process while the –v option provides more verbose data numastat p process_name numastat – v CPU Pinning There are three recommended options to address potential NUMA performance issues The first is to use CPU pinning the second is automatic NUMA balancing and the last is to use numad These options should be tested to determine which provides the best performance for your game server First we will look at CPU pinning This involves binding the game server process both to a vCPU (or core) and to a NUMA node You can use numactl to do this Change the values for $phys_cpu_core and $associated_numa_node in addition to the game_server_executable name in the following command for ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 30 each game server running on the instance See the numactl man page for additional options22 numactl physcpubind= $phys_cpu_core membind=$associated_numa_node game_server _executable Automatic NUMA Balancing The next option is to use automatic NUMA balancing This feature attempts to keep the threads or processes in the processor socket where the memory that they are using is located It also tries to move application data to the processor socket for the tasks accessing it As of Amazon Linux Ami 201603 automatic NUMA balancing is disabled by default23 To check if automatic NUMA balancing is enabled on your instance run the following command: cat /proc/sys/kernel/numa_balancing To permanently enable or disable NUMA balancing set the Value parameter to 0 to disable or 1 to enable and run the following command: sudo sysctl w 'kernelnuma_balancing=Value ' echo 'kernelnuma_balancing = Value ' | sudo tee /etc/sysctld/50 numabalancingconf Again these instructions are for Amazon Linux Some distributions may set this in the /etc/sysctlconf file Numad Numad is the final option to look at Numad is a daemon that monitors the NUMA topology and works to keep processes on the NUMA node for the core It is able to adjust to changes in the system conditions The article Mysteries of NUMA Memory Management Revealed explains the performance differences between automatic NUMA balancing and numad24 ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 31 To use numad you need to disable automatic NUMA balancing first To install numad on Amazon Linux visit the Fedora numad site and then download the most recent stable commit25 From the numad directory run the following commands to install numad: sudo yum groupinstall "Development Tools" wget 
https://gitfedorahostedorg/cgit/numadgit/snapshot/numad 05targz tar –zxvf numad 05targz cd numad 05 make sudo make install The logs for numad can be found in /var/log/numadlog and there is a configuration file in /etc/numadconf There are a number of ways to run numad The numad –u option sets the maximum usage percentage of a node The default is 85% The recommended setting covered in the Mysteries of NUMA article is –u100 so this setting would configure the maximum to 100% This forces processes to stay on the local NUMA node up to 100% of their memory requirement sudo numad –u100 Numad can be terminated by using the following command: sudo /usr/bin/nu mad –i0 Finally disabling NUMA completely is not a good choice because you will still have the problem with remote memory access so it is better to work with the NUMA topology F or the c48xlarge instance we recommend taking some action for most game servers We recommend testing the available options that we discussed to determine which provides the best performance While none of these options may eliminate memory calls to the remote NUMA node for a process they each should provide a better experience for your game server ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 32 You can test how well an option is doing by running your game servers on the instance and using the following command to see if there are any numa_foreign (ie memory allocated to the other NUMA node but meant for this node) and numa_miss (ie memory allocated to this node but meant for the other NUMA node) entries: numastat v A more general way to test for NUMA issues is to use a tool like stress and then run numastat to see if there are foreign/miss entries: stress vm bytes $(awk '/MemFree/{printf "%d \n" $2 * 0097;}' < /proc/meminfo)k vmkeep m 10 Virtual Memory There are also a few virtual memory tweaks that we see customers use that may provide a performance boost Again these should be tested thoroughly to determine if they improve the performance of your game VM Swappiness VM Swappiness controls how the system favors anonymous memory or the page cache Low values reduce the occurrence of swapping processes out of memory which can decrease latency but reduce I/O performance Possible values are 0 to 100 The default value on Amazon Linux is 60 The recommendation is to start by halving that value and then testing Further reductions in the value may also help your game server performance To view the current value run the following command: cat /proc/sys/vm/swappiness To configure this parameter to persist across reboots add the following with the new value to the /etc/sysctlconf file: ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 33 vmswappiness = New_Value VM Dirty Ratio VM Dirty Ratio forces a process to block and write out dirty pages to disk when a certain percentage of the system memory becomes dirty The possible values are 0 to 100 The default on Amazon Linux is 20 and is the recommended value To view the current value run the following command: cat /proc/sys/vm/ dirty_ratio To configure this parameter to persist across reboots add the following with the new value to the /etc/sysctlconf file: vmdirty_ratio = New_Value VM Dirty Background Ratio VM Dirty Background Ratio forces the system to start writing data to disk when a certain percentage of the system memory becomes dirty Possible values are 0 to 100 The default value on Amazon Linux is 10 and is the recommended value To view the current 
value run the following command: cat /proc/sys/vm/dirty_background_ratio To configure this parameter to persist across reboots add the following with the recommended value to the /etc/sysctlconf file: dirty_background_ratio= 10 ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 34 Disk Performance tuning for disk is the least critical because disk is rarely a bottleneck for multiplayer game servers We have not seen customers experience any disk performance issues on the C4 instance family The C4 instance family only uses Amazon Elastic Block Store (EBS) for storage with no local instance storage; so C4 instances are EBSoptimized by default26 Amazon EBS can provide up to 48000 IOPS if needed You can take standard disk performance steps such as using a separate boot and OS/game EBS volume Performance Tuning Option Summary Notes Links or Commands EBS Performance C4 instances are EBSoptimized by default IOPS can be configured to fit the requirements of the game server NA Benchmarking and Testing Benchmarking There are many ways to benchmark Linux One option you may find useful is the Phoronix Test Suite 27 This open source Pythonbased suite provides a large number of benchmarking (and testing) options You can run tests against existing benchmarks to compare results after successive tests You can upload the results to OpenBenchmarkingorg for online viewing and comparison28 There are many benchmarks available and most can be found on the OpenBenchmarkingorg tests site 29 Some tests that can be useful for benchmarking in preparation for a game server are the cpu30 multicore 31 processor 32 and universe tests33 These tests usually involve multiple subtests Be aware that some of the subtests available may not be available for download or may not run properly To get started you need to install the prerequisites first: sudo yum groupi nstall "Development Tools" y sudo yum install php cli php xml –y ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 35 sudo yum install {libaiopcrepopt} devel glibc {develstatic} y Next download and install Phoronix: wget https://githubcom/phoronix testsuite/phoronix test suite/archive/masterzip unzip masterzip cd phoronix testsuitemaster /install sh ~/directory ofyourchoice/phoronix tester To install a test run the following from the bin subdirectory of the directory you specified when you ran the installsh command: phoronix testsuite install <test or suite name> To install and run a test use: phoronix testsuite benchmar k <test or suite name> You can choose to have the results uploaded to Openbenchmarkorg This option will be displayed at the beginning of the test If you choose “yes” you can name the test run At the end a URL will be provided to view all the test results Once the results are uploaded you can rerun a benchmark using the benchmark result number of previous tests so the results are displayed sidebyside with previous results You can repeat this process to display the results of many tests together Usually you would want to make small changes and the rerun the benchmark You can also choose not to upload the test results and instead view them in the command line output phoronix testsuite benchmark TEST RESULTNUMBER ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 36 The screenshot below shows an example of the output displayed on OpenBenchmarkingorg for a set of multicore benchmark tests run on the c48xlarge instance: CPU Performance Analysis 
One of the best tools for CPU performance analysis or profiling is the Linux perf command 34 Using this command you can record and then analyze performance data using perf record and perf report respectively Performance analysis is beyond the scope of this whitepaper but a couple of great resources are the kernelorg wiki and Brendan Gregg ’s perf resources 35 The next section describes how to produce flame graphs using perf to analyze CPU usage Visual CPU Profiling A common issue that comes up during game server testing is that while multiple game servers are running (often unpinned to vCPUs) one vCPU will hit near 100% utilization while the other vCPUs will show low utilization Troubleshooting this type of performance problem and other similar CPU issues can be a complex and timeconsuming process The process basically involves looking at the function running on the CPUs and finding the code paths that are the most CPU heavy Brendan Gregg’s flame graphs allow you to visually analyze and troubleshoot potential CPU performance issues36 Flame graphs ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 37 allow you to quickly and easily identify the functions used most frequentl y during the window visualized There are multiple types of flame graphs including graphs for memory leaks but we will focus on CPU flame graphs 37 We will use the perf command to generate the underlying data and then the flame graphs to create the visualization First install the prerequisites: # Install Perf sudo yum install perf # Remove the need to use root for running perf record sudo sh c 'echo 0 >/proc/sys/kernel/perf_event_paranoid' # Download Flame graph wget https://githubcom/brendangregg/FlameGraph/archive/masterzip # Finally you need to unzip the file that was dow nloaded This will create a directory called FlameGraph master where the flame graph executables are located unzip masterzip To see interesting data in the flame graph you either need to run your game server or a CPU stress tool Once that is running you run a perf profile recording You can run the perf record against all vCPUs against specific vCPUs or against particular PIDs Here is a table of the various options: Option Notes F Frequency for the perf record 99 Hz is usually sufficient for most use cases g Used to capture stack traces (as opposed to on CPU function or instructions) C Used to specify the vCPUs to trace a Used to specify that all vCPUs should be traced sleep Specified the number of seconds for the perf record to run ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 38 The following are the common commands for running a perf record for a flame graph depending on whether you are looking at all the vCPUs or just one Run these commands from the FlameGraphmaster directory: # Run perf record on all vCPUs perf record F 99 a g sleep 60 # Run perf record on specific vCPUs specified by number after the –C option perf record F 99 C CPU_NUMBER g sleep 60 When the perf record is complete run the following commands to produce the flame graph: # Create perf file When you run this you will get an error about “no symb ols found” This can be ignored since we are generating this for flame graphs perf script > outperf # Use the stackcollapse program to fold stack samples into single lines /stackcollapse perfpl outperf > outfolded # Use flamegraphpl to render a SVG /flamegraphpl outfolded > kernelsvg Finally use a tool like WinSCP to copy the SVG file to your desktop so you can 
view it Below are two examples of flame graphs The first was produced on a c48xlarge instance for 60 seconds while sysbench was running using the following options (for each in 1 2 4 8 16; do sysbench test=cpu cpumaxprime=20000 num threads=$each run; done) You can see how little of the total CPU processing on the instance was actually devoted to sysbench You can hover over various elements of the flame graphs to get additional details about the number of samples and percentage spent for each area ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 39 The second graph was produced on the same c48xlarge instance for 60 seconds while running the following script: (fulload() { dd if=/dev/zero of=/dev/null |dd if=/dev/zero of=/dev/null |dd if=/dev/zero of=/dev/null |dd if=/dev/zero of=/dev/null |dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/ze ro of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null & }; fulload; read; killall dd) The output presents a more interesting set of actions taking place under the hood: Conclusion The purpose of this whitepaper is to show you how to tune your EC2 instances to optimally run game servers on AWS It focuses on performance optimization of the network CPU and memory on the C4 instance family when running game servers on Linux Disk performance is a smaller concern because disk is rarely a bottleneck when it comes to running game servers This whitepaper is meant to be a central compendium of information on the compute instances to help you run your game servers on AWS We hope this guide saves you a lot of time by calling out key information performance ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 40 recommendations and caveats to get up and running quickly using AWS in order to make your game launch as successful as possible Contributors The following individuals and organizations contributed to this document:  Greg McConnel Solutions Architect Amazon Web Services  Todd Scott Solutions Architect Amazon Web Services  Dhruv Thukral Solutions Architect Amazon Web Services ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 41 1 https://awsamazoncom/ec2/ 2 http://docsawsamazoncom/AWSEC2/latest/UserGuide/c4instanceshtml 3 https://enwikipediaorg/wiki/Advanced_Vector_Extensions 4 https://awsamazoncom/vpc/ 5 http://docsawsamazoncom/AWSEC2/latest/UserGuide/AMIshtml 6 https://awsamazoncom/aboutaws/globalinfrastructure/ 7 https://awsamazoncom/ec2/faqs/#Enhanced_Networking 8 https://enwikipediaorg/wiki/Singleroot_input/output_virtualization 9 http://docsawsamazoncom/AWSEC2/latest/UserGuide/enhanced networkinghtml 10 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/enhanced networkingwindowshtml 11 https://downloadcenterintelcom/download/18700/NetworkAdapter VirtualFunctionDriverfor10GigabitNetworkConnections 12 https://wwwkernelorg/doc/Documentation/networking/scalingtxt 13 http://docsawsamazoncom/AWSEC2/latest/UserGuide/usingenihtml 14 http://docsawsamazoncom/AWSEC2/latest/UserGuide/placement groupshtml 15 https://awsamazoncom/premiumsupport/knowledgecenter/network throughputbenchmarklinuxec2/ 16 https://awsamazoncom/premiumsupport/knowledgecenter/network throughputbenchmarkwindows ec2/ 17 https://wwwkernelorg/doc/Documentation/timers/timekeepingtxt 18 https://xenbitsxenorg/docs/43testing/misc/tscmodetxt 19 
http://docsawsamazoncom/AWSEC2/latest/UserGuide/processor_state_co ntrolhtml 20 https://rawgithubusercontentcom/brendangregg/msrcloud tools/master/showboost 21 https://enwikipediaorg/wiki/Completely_Fair_Scheduler Notes ArchivedAmazon Web Services – Optimizing Multiplayer Game Server Performance on AWS Page 42 22 http://linuxdienet/man/8/numactl 23 https://awsamazoncom/amazonlinuxami/201603releasenotes/ 24 http://rhelblogredhatcom/2015/01/12/mysteries ofnumamemory managementrevealed/#more599 25 https://gitfedorahostedorg/git/numadgit 26 https://awsamazoncom/ebs/ 27 http://wwwphoronixtestsuitecom/ 28 http://openbenchmarkingorg/ 29 http://openbenchmarkingorg/tests/pts 30 http://openbenchmarkingorg/suite/pts/cpu 31 http://openbenchmarkingorg/suite/pts/multicore 32 http://openbenchmarkingorg/suite/pts/processor 33 http://openbenchmarkingorg/suite/pts/universe 34 https://perfwikikernelorg/indexphp/Main_Page 35 http://wwwbrendangreggcom/perfhtml 36 http://wwwbrendangreggcom/flamegraphshtml 37 http://wwwbrendangreggcom/FlameGraphs/cpuflamegraphshtml Archived
General
Optimizing_Enterprise_Economics_with_Serverless_Architectures
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures September 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All right s reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlContents Introduction 1 Understanding Serverless Architectures 2 Is Serverless Always Appropriate? 2 Serverless Use Cases 3 AWS Serverless Capabilities 6 Service Offerings 6 Developer Support 9 Security 11 Partners 12 Case Studies 13 Serverless Websites Web Apps and Mobile Backends 13 IoT Backends 14 Data Processing 15 Big Data 16 IT Automation 17 Machine Learning 17 Conclusion 18 Contributors 19 Further Reading 19 Reference Architectures 19 Document Revisions 20 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlAbstract This whitepaper is intended to help Chief Information Officers ( CIOs ) Chief Technology Officers ( CTOs ) and senior architects gain insight into serverless architectures and their impact on time to market team agility and IT economics By eliminating idle underutilized servers at the design level and dramatically simplifying cloud based software designs serverless approaches rapidly change the IT landscape This whitepaper covers the basics of serverless approaches and the AWS serverless portfolio It includes several case studies illustrating how existing companies are gaining significant agility and ec onomic benefits from adopting serverless strategi es In addition it describ es how organizations of all sizes can use serverless architectures to architect reactive event based systems and quickly deliver cloud native microservices at a fraction of conventional costs This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 1 Introduction Many companies are already gaining benefits from running applications in the public cloud including cost savings from pay asyougo billing and improved agility through the use of on demand 
IT r esources Multiple studies across application types and industries have demonstrated that migrating existing application architectures to the cloud lowers the T otal Cost of Ownership (TCO) and improves time to market 1 Relative to on premises and private cloud solutions the public cloud makes it significantly simpler to build deploy and manage fleets of servers and the applications that run on them The public cloud has established itself as the new normal with double digit year overyear growth since its inception2 However companies today have options beyond classic server or virtual machine (VM) based architectures to take advantage of the public cloud Although the cloud eliminates the need for companies to purchase and maintain their hardware any server based architecture still requires them to architect for scalability and reliability Plus companies need to own the challenges of patching and deploying to those server fleets as their applications evolve Moreover they must scale their server f leets to account for peak load and then attempt to scale them down when and where possible to lower costs —all while protecting the experience of end users and the integrity of internal systems Idle underutilized servers prove to be costly and wasteful R esearchers calculated the average server utilization to be around only 18 percent for enterprises3 Using serverless services developers and architects can design and develop complex application architectures focusing just on business logic without deali ng with the complexity of servers As a result product owners can achieve faster time to market with shorter development deployment and testing cycles In addition the r eduction of server management overheads reduces the TCO which ultimately results in competitive advantages for the companies With significan tly reduced infrastructure costs more agile and focused teams and faster time to market companies that have already adopted serverless approaches are gaining a key adv antage over their competitors This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 2 Understanding Serverless Architectures The advantages of the serverless approaches cited above are appealing but what are the considerations for practical implementation ? What separates a serverless application from its conv entional server based counterpart? 
Serverless uses managed services in which the cloud provider handles infrastructure management tasks like capacity provisioning and patching. This allows your workforce to focus on the business logic that serves your customers while minimizing infrastructure management, configuration, operations, and idle capacity. Serverless is also a way to describe the services, practices, and strategies that enable you to build more agile applications so you can innovate and respond to change faster. Serverless applications are designed to run all or parts of the application in the public cloud using serverless services. AWS offers many serverless services in domains like compute, storage, application integration, orchestration, and databases.

The serverless model provides the following advantages compared to a conventional server-based design:
• There is no need to provision, manage, and monitor the underlying infrastructure. All of the actual hardware and platform server software packages are managed by the cloud provider. You only need to deploy your application and its configuration.
• Serverless services have fault tolerance built in by default. Serverless applications require minimal configuration and management from the user to achieve high availability.
• Reduced operational overhead allows your teams to release quickly, get feedback, and iterate to get to market faster.
• With a pay-for-value billing model, you do not pay for over-provisioning, and your resource utilization is optimized on your behalf.
• Serverless applications have built-in service integrations, so you can focus on building your application instead of configuring it.

Is Serverless Always Appropriate?
Almost all modern applications can be modified to run successfully, and in most cases in a more economical and scalable fashion, on a serverless platform. The choice between serverless and the alternatives does not need to be an all-or-nothing proposition; individual components can run on servers, in containers, or on serverless architectures within the same application stack. However, here are a few scenarios when serverless may not be the best choice:
• When the goal is explicitly to avoid making any changes to the existing application architecture.
• When fine-grained control over the environment is required for the code to run correctly, such as specifying particular operating system patches or accessing low-level networking operations.
• Applications with ultra-low latency requirements for all incoming requests.
• When an on-premises application hasn't been migrated to the public cloud.
• When certain aspects of an application component don't fit within the limits of the serverless services, for example, if a function takes longer to execute than the AWS Lambda function execution timeout limit, or the backend application takes longer to run than the Amazon API Gateway timeout. A quick way to review the configured limits for an existing function is sketched below.
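As a minimal sketch of that last point, the following Python snippet uses the AWS SDK for Python (boto3) to read a Lambda function's configured timeout and memory, which you can then compare against your workload's actual runtime needs. The function name is a placeholder, and this is only one way to perform the check, not part of the whitepaper's prescribed tooling.

import boto3

lambda_client = boto3.client("lambda")

# "my-function" is a placeholder name for an existing Lambda function.
config = lambda_client.get_function_configuration(FunctionName="my-function")

# Timeout is reported in seconds (the service maximum is 900 seconds),
# MemorySize in MB.
print(f"Timeout:     {config['Timeout']} seconds")
print(f"Memory size: {config['MemorySize']} MB")

# If a component routinely needs more time than the configured (or maximum)
# timeout allows, that component may be a poor fit for Lambda, as noted above.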
Serverless Use Cases
The serverless application model is generic and applies to almost any application, from a startup's web app to a Fortune 100 company's stock trade analysis platform. Here are several examples:
• Data processing – Developers have discovered that it is much easier to parallelize with a serverless approach,4 mainly when triggered through events, leading them to increasingly apply serverless techniques to a wide range of big data problems without the need for infrastructure management. These include map-reduce problems, high-speed video transcoding, stock trade analysis, and compute-intensive Monte Carlo simulations for loan applications.
• Web applications – Eliminating servers makes it possible to create web applications that cost almost nothing when there is no traffic, while simultaneously scaling to handle peak loads, even unexpected ones.
• Batch processing – Serverless architectures can be used to run multi-step, flowchart-like use cases such as ETL jobs.
• IT automation – Serverless functions can be attached to alarms and monitors to provide customization when required. Cron jobs (used to schedule and automate tasks that need to be carried out periodically) and other IT infrastructure requirements are made substantially simpler to implement by removing the need to own and maintain servers for their use, especially when these jobs and conditions are infrequent or variable in nature.
• Mobile backends – Serverless mobile backends offer a way for developers who focus on client development to quickly create secure, highly available, and perfectly scaled backends without becoming experts in distributed systems design.
• Media and log processing – Serverless approaches offer natural parallelism, making it simpler to process compute-heavy workloads without the complexity of building multithreaded systems or manually scaling compute fleets.
• IoT backends – The ability to bring any code, including native libraries, simplifies the process of creating cloud-based systems that can implement device-specific algorithms.
• Chatbots (including voice-enabled assistants) and other webhook-based systems – Serverless approaches are perfect for any webhook-based system like a chatbot. In addition, their ability to perform actions (like running code) only when needed (such as when a user requests information from a chatbot) makes them a straightforward and typically lower-cost approach for these architectures. For example, the majority of Alexa Skills for Amazon Echo are implemented using AWS Lambda.
• Clickstream and other near real-time streaming data processes – Serverless solutions offer the flexibility to scale up and down with the flow of data, enabling them to match throughput requirements without the complexity of building a scalable compute system for each application. For example, when paired with technology like Amazon Kinesis, AWS Lambda can offer high-speed record processing for clickstream analysis, NoSQL data triggers, stock trade information, and more (see the sketch after this list).
• Machine learning inference – Machine learning models can be hosted on serverless functions to support inference requests, eliminating the need to own or maintain servers for intermittent inference traffic.
• Content delivery at the edge – By moving serverless event handling to the edge of the internet, developers can take advantage of lower latency and customize retrievals and content fetches quickly, enabling a new spectrum of use cases that are latency-optimized based on the client's location.
• IoT at the edge – Enabling serverless capabilities such as AWS Lambda functions to run inside commercial, residential, and handheld Internet of Things (IoT) devices enables these devices to respond to events in near real time. Devices can take actions such as aggregating and filtering data locally, performing machine learning inference, or sending alerts.
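To make the clickstream example above concrete, here is a minimal sketch of a Python Lambda handler that processes a batch of records delivered by an Amazon Kinesis data stream. The event structure is the standard Kinesis-to-Lambda integration; the field names inside each clickstream record (such as "page") are assumptions for illustration only, not part of the whitepaper.

import base64
import json

def lambda_handler(event, context):
    # Lambda invokes this handler with a batch of Kinesis records.
    # The Kinesis integration base64-encodes each record's payload.
    page_views = {}
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        click = json.loads(payload)          # assumed JSON clickstream event
        page = click.get("page", "unknown")  # "page" is an assumed field name
        page_views[page] = page_views.get(page, 0) + 1
    # In a real pipeline the aggregate would typically be written to DynamoDB
    # or S3; it is returned here so the sketch stays self-contained.
    return {"batch_size": len(event["Records"]), "page_views": page_views}

Because Lambda polls the stream and invokes the handler with batches of records per shard, the same handler works unchanged whether the stream carries a trickle of events or a flood, with no server capacity planning required.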
Typically, serverless applications are built using a microservices architecture, in which an application is separated into independent components that perform discrete jobs. These components, made up of a compute layer and APIs, message queues, databases, and other parts, can be independently deployed, tested, and scaled. The ability to scale individual components that need additional capacity, rather than entire applications, can save substantial infrastructure costs and allows an application to run lean, with minimal idle server capacity and without the need for rightsizing activities.5

Serverless applications are a natural fit for microservices because of their decoupled nature. Organizations can become more agile by avoiding monolithic designs and architectures, because developers can deploy incrementally and replace or upgrade individual components, such as the database tier, if needed. In many cases, not all layers of the architecture need to be moved to serverless services to reap the benefits. For instance, simply isolating the business logic of an application and deploying it onto the AWS Lambda serverless compute service is all that's required to immediately reduce server management tasks, idle compute capacity, and operational overhead.

Serverless architecture also has significant economic advantages over server-based architectures when considering disaster recovery scenarios. For most serverless architectures, the price of maintaining a disaster recovery site is near zero, even for warm or hot sites, because serverless architectures only incur a charge when traffic is present and resources are being consumed. Storage cost is one exception, as many applications require readily accessible data. Nonetheless, serverless architectures truly shine when planning disaster recovery sites, especially when compared to traditional data centers: running disaster recovery on premises often doubles infrastructure costs, with many servers sitting idle waiting for a disaster to happen. Serverless disaster recovery sites can also be set up quickly. Once a serverless architecture has been defined with infrastructure as code, using AWS-native services such as AWS CloudFormation, the entire architecture can be duplicated in a separate region by running a few commands.

AWS Serverless Capabilities
Like any traditional server and VM-based architecture, serverless provides core capabilities such as compute, storage, messaging, and more to its users. However, serverless capabilities are distributed across multiple managed services rather than spread across software installed on virtual machines. As a result, a complete serverless application requires a broad array of services, tools, and capabilities spanning storage, messaging, diagnostics, and more, all of which AWS provides. Each of these services is available in the developer's toolbox to
build a practical application Service Offerings Since the introduction of Lambda in 2014 AWS has introduced a wide variety of fullymanaged serverless services that enable organizations to create serverless apps that can integrate seamlessly with other AWS services and thirdparty services The launched serverless services include but are not limited to Amazon API Gateway (2015) Am azon EventBridge (2019) and Amazon Aurora Serverless v2 (2020) The pace of innovation has not stopped for individual services as Lambda has had more than 100 major feature releases since its launch 6 Figure 1 illustrates a subset of the components in the AWS serverless platform and their relationships This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 7 Figure 1: AWS serverless platform components AWS’ s serverless offering consists of services that span across all infrastr ucture layers including compute storage and orchestration In addition AWS provides tools needed to author build deploy and diagnose serverless architectures Running a serverless application in production requires a reliable flexible and trustwo rthy platform that can handle the demands of small startups to global worldwide corporations The platform must scale all of an application’s elements and provide end toend reliability Just as with conventional applications helping developers create a nd deliver serverless solutions is a multi dimensional challenge To meet the needs of large scale enterprises across various industries the AWS serverless platform offers the following capabilities through a diverse set of services This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 8 • A high performance scalable and reliable serverless compute layer The serverless compute layer is at the core of any serverless architecture such as AWS Lambda or AWS Fargate responsible for running the business logic Because these services are run in response to events simple integration with both first party and third party event sources is essential to making solutions simple to express and enabling them to scale automatically in response to varying workloads In addition serverless architectures eliminate all of the scaling and management code typically required to integrate such systems shifting that operational burden to AWS • Highly available durable and scalable storage layer – AWS offers fully managed storage layers that offload the overhead of ever increasing storage requirements to support the serverless compute layer Instead of manually adding more servers and storage services such as Amazon Aurora Serverless v2 Amazon DynamoDB and Amazon Simple Storage Service (Amazon S3) scal es based on usage and users are only billed for the consumed resources In addition AWS offers purpose built storage services to meet diverse customer needs from DynamoDB for keyvalue storage Amazon S3 for object storage and Aurora Serverless v2 for r elational data storage • Support for loosely coupled and scalable decoupled serverless workloads – As applications mature and grow they become more challenging to maintain or add new features 
and some transform into monolithic applications As a result they mak e it challenging to implement changes and slow down the development pace What is needed is individual components that are decoupled and can scale independently Amazon Simple Queue Service (Amazon SQS) Amazon Simple Notification Service (Amazon S NS) Amazon EventBridge and Amazon Kinesis enable developers to decouple individual components allowing developers to create and innovate without being dependent on one another In addition these components all being serverless implies that customers are only being billed for the resources that each component is consuming eliminating unnecessary resources being wasted This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 9 • Orchestration offer ing state and workflow management – Orchestration and state management are also critical to a serverless platform’s success As companies adopt serverless architectures there is an increased need to orchestrate complex workflows with decoupled components AWS Step Functions is a visual workflow service that satisfies this need It is used to orchestrate AWS services automate business processes and build serverless applications Step Functions manage failures retries parallelization service integration s and observability so developers can focus on higher value business logic Building applications from individual components that perform a discrete function lets you scale easily and change applications quickly Developers can change and add steps withou t writing code enabling your team to evolve your application and innovate faster • Native service integrations between serverless services mentioned above such as Amazon Simple Queue Service (SQS) Amazon Simple Notification Service (Amazon SNS) and Amaz on EventBridge act as application integration services enabling communication between decoupled components within microservices Another benefit of these services is that minimal code is needed to allow interoperability between them so you can focus on building your application instead of configuring it For instance integration between Amazon API Gateway a fully managed service for hosting APIs to a Lambda function can be done without writing any code and simply walking through the AWS console Deve loper Support Providing the right tool and support for developers and architects is essential to boosting productivity AWS Developer Tools are built to work with AWS making it easier for teams to set up and be productive In addition to popular and well known developer tools such as AWS Command Line Interface (AWS CLI) and AWS Software Development Kits (AWS SDKs) AWS also provides various AWS open source and third party web frameworks that simplify serverless application development and deployment This includes the AWS Serverless Application Model (AWS SAM) and AWS Cloud Development Kit (AWS CDK) that allows customers to onboard faster to serverless architectures offloading undifferentiated heavy lifting of managing the infrastructure for your appli cations This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 
10 This enable s developers to focus on writing code that creates value for their customers In addition AWS provides the following support for developers adopting serverless technologies • A collection of fit forpurpose application modeling framew orks – Application modeling frameworks such as the open specification AWS SAM or AWS CDK enable a developer to express the components that make up a serverless app lication and enable the tools and workflows required to build deploy and monitor those app lications Both frameworks work nicely with the AWS SAM Command Line Interface (AWS SAM CLI) making it easy for them to create and manage serverless applications It also allows developers to build test locally and debug serverless applications then deploy them on AWS It can also create secure continuous integration and deployment (CI/CD) pipelines that follow best practices and integrate with AWS ’ native and third party CI/CD systems • A vibrant developer ecosystem that helps developers discover and apply solutions in a variety of domains and for a broad set of third party systems and use cases Thriving on a serverless platform requires that a company be able to get started quick ly including finding ready made templates for everyday use cases whet her they involve firstparty or third party services These integration libraries are essential to convey successful patterns —such as processing streams of records or implementing webhooks —especially when developers are migrating from server based to serverless architectures7 A closely related need is a broad and diverse ecosystem that surrounds the core platform A large vibrant ecosystem helps developers discover and use solutions from the community an d makes it easy to contribute new ideas and approaches Given the variety of toolchains in use for application lifecycle management a healthy ecosystem is also necessary to ensure that every language Integrated Development Environment (IDE) and enterpri se build technology has the runtimes plugins and open source solutions essential to integrate the building and to deploy ment of serverless app lication s into existing approaches Finally a broad ecosystem provides signific ant acceleration across domains and enables developers to repurpose existing code more readily in a serverless architecture This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 11 Security All AWS customers benefit from a data center and network architecture built to satisfy the requirements of our most security sensitive customers This means that you get a resilient infrastructure designed for high security without a traditional data center’s capital outlay and operational overhead Serverless architecture is no exception To accomplish this AWS’ serverless services offer a broad array of security and access controls including support for virtual private networks role based and access based permissions robust integration with API based authentication and access control mechanisms and support for encrypting application elements such as environment variable settings These outofthebox offered features and services can help developers deploy and publish workloads confidently and reduce time to market Serverless systems by their design also provide s an additional level of sec urity and control for the following 
reasons: • First class fleet management including security patching – For managed serverless services such as Lambda API Gateway and Amazon SQS the servers that host the services are constantly monitored cycled and s ecurity scanned As a result t hey can be patched within hours of essential security update availability instead of many enterprises ’ compute fleets with much looser service level agreements (SLAs ) for patching and updating • Perrequest authentication access control and auditing – Every request between natively integrated services is individually authenticated authorized to access specified resources and can be fully audited Requests arriving from outside of AWS via Amazon API Gateway provide other internet facing defense systems For example AWS Web Application Firewall (AWS WAF) is a web application firewall that integrates natively with Amazon API Gateway It helps protect hosted APIs against common web exploits and bots that may affect availability compromise security or consume excessive resources including distributed denial ofservice (DDoS) attack defenses In addition c ompanies migrating to serverless architectures can use AWS CloudTrail to gain detailed insight into which users are accessing which systems with what privileges Finally t hey can use AWS tools to process the audit records programmatically This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 12 These security features of serverless help eliminate additional costs often overlooked when calculating the TCO of one’s infrastr ucture Such costs include security and monitoring software licenses installed on servers staffing of information security personnel to ensure that all servers are secure as well as costs associated with regulatory compliance and many others Serverless architecture s also have a smaller blast radius compared to monolithic applications running on virtual machines As AWS takes responsibility of the security of the servers behind the scenes customers can focus on implementing least privilege access between the services Once least privilege access is implemented the blast radius is dramatically reduced The decoupled nature of the architecture will limit the impact to a smaller set of services compared to a scenario where a malicious actor gains a ccess to a n internal server Considering the significant financial impact of a security breach this is also a n added benefit that help enterprises optimize on infrastructure costs Adopting serverless architectures help in reducing or eliminating such expense s that are no longer needed and capital can be repurposed and teams are freed to work on higher value activities Partners AWS has an expansive partner network that assists our customers with building solutions and services on AWS AWS works closely with validated AWS Lambda Partners for building serverless architecture s that help customers develop services and applications without provisioning or managing servers Lambda Partners provide developer tooling solutions validated by AWS serverless experts against the AWS Well Architected Framework Customers can simplify their technology evaluation process and increase purchasing confidence knowing these companies’ solutions have passed a strict AWS validation of security performance and reliability Customers can ultimately 
reduce time to market with the assistance of qualified partners leveraging serverless technologies For a complete list of AWS Lambda Ready Partners visit our AWS Partner Network page 8 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 13 Case Studies Companies have applied serverless architectures to use cases from stock trade validation to e commerce website construction to natural language processing AWS serverless portfolio offer s the flexibility to create a wi de array of applications including those requiring assurance programs such as PCI or HIPAA compliance The following sections illustrate some of the most common use cases but are not a comprehensive list For a complete list of customer references and us e case documentation see Serverless Computing 9 Serverless Websites Web Apps and Mobile Backends Serverless approaches are ideal for applications where the load can vary dynamically Using a serverless approach means no compute costs are incurred when there is no end user traffic while still offering instant sca le to meet high demand such as a flash sale on an e commerce site or a social media mention that drives a sudden wave of traffic Compared to traditional infrastructure approaches it is also often significantly less expensive to develop deliver and op erate a web or mobile backend when architected in a serverless fashion AWS provides the services developers need to construct these applications rapidly : • Amazon Simple Storage Service (Amazon S3) and AWS Amplify offer a simple hosting solution for static content • AWS Lambda in conjunction with Amazon API Gateway provides support for dynamic API requests using functions • Amazon DynamoDB offers a simple storage solution for the session and peruser state • Amazon Cognito provides an easy way to handle end user registration authentication and access control to resources • Developers can use AWS Serverless Application Model (SAM ) to describe the various elements of an application This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 14 • AWS CodeStar can set up a CI/CD toolchain with just a few clicks To learn more see the whitepaper AWS Serverless Multi Tier Architectures which provides a detailed examination of patterns for building serverless web applic ations10 For complete reference architectures see Serverless Reference Architecture for creating a Web Application11 and Serverless Reference Architecture for creating a Mobile Backend12 on GitHub Customer Example – Neiman Marcus A luxury household name Neiman Marcus has a reputation for delivering a first class personalized customer service experience To modernize and enhance that experience the company wanted to develop Connect an omnichannel digital selling application that would empower associates to view rich personalized customer information with the goal of making each customer interaction unforgettable Choos ing a serverless architecture with mobile development solutions on Amazon Web Services (AWS) enabled the development team to launch the app much faster than in the 4 months it had originally planned “Using AWS 
cloud native and serverless technologies we increased our speed to market by at least 50 percent and were able to accelerate the launch of Connect” says Sriram Vaidyanathan senior director of omni engineering at Neiman Marcus This approach also greatly reduced app building costs and provided dev elopers with more agility for the development and rapid deployment of updates The app elastically scales to support traffic at any volume for greater cost efficiency and it has increase d associate productivity For more information see the Neiman Marcus case study 13 IoT Backends The benefits that a serverless architecture brings to web and mobile apps make it easy to construct IoT backends and device based analytic processing systems that seamlessly scale with the number of devices For an example reference architecture see Serverless Reference Architecture for creating an IoT Backend on GitHub14 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 15 Customer Example – iRobot iRobot which makes robots such as the Roomba cleaning robot uses AWS Lambda in conjunction with the AWS IoT service to create a serverless backend for its IoT platform As a popular gift on any holiday iRobot experienc es increased traffic on these days While h uge traffic spikes could also mean huge headaches for the company and its customers alike iRobot’s engineering team doesn’t have to worry about managing infrastructure or manually writing code to handle availabi lity and scaling by running on serverless This enabl es them to innovate faster and stay focused on customers Watch the AWS re:Invent 2020 video Building the next generation of residential robots for more information 15 Data Processing The largest serverless applications process massive volumes of data much of it in real time Typical serverless data processing architectures use a combination of Amazon Kinesis and AWS Lambda to process streaming d ata or they combine Amazon S3 and AWS Lambda to trigger computation in response to object creation or update events When workloads require more complex orchestration than a simple trigger developers can use AWS Step Functions to create stateful or long running workflows that invoke one or more Lambda functions as they progress To learn more about serverless data processing architectures see the following on GitHub: • Serverless R eference Architecture for Real time Stream Processing16 • Serverless Reference Architecture for Real time File Processing17 • Image Recognition and Processing Backend reference architecture18 Customer Example – FINRA The Financial Industry Regulatory Authority (FINRA) u sed AWS Lambda to build a serverless data processing solution that enables them to perform half a trillion data validations on 37 billion stock market events daily In his talk at AWS re:Invent 2016 entitled The State of Serverless Computing (SVR311) 19 Tim Griesbach Senior Director at FINRA said “We found that Lambda was going to provide us with the best solution for this serverless cloud This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 16 solution With Lambda the system 
was faster cheaper and more scalable So at the end of the day we’ve reduced our costs by over 50 percent and we can track it daily even hourly ” Customer Example – Toyota Connected Toyota Connected is a subsidiary of Toyota and a technology company offering connected platform s big data mobility services and other automotive related services Toyota Connected chose server less computing architecture to build its Toyota Mobility Services Platform leveraging AWS Lambda Amazon Kinesis Data Streams (Amazon KDS) and Amazon S3 to offer personalized localized and predictive data to enhance the driving experience With its se rverless architecture Toyota Connected seamlessly scaled to 18 times its usual traffic volume with 18 billion transactions per month running through the platform reducing aggregation job times from 15+ hours to 1/40th of the time while reducing operatio nal burden Additionall y serverless enabled Toyota Connected to deploy the same pipeline in other geographies with smaller volumes and only pay for the resources consumed For more information read our Big Data Blog on Toyota Connected or watch the re:Invent 2020 video Reimagining mobility with Toyota Connected (AUT303) 20 21 Big Data AWS Lambda is a perfect match for many highvolume parallel processing workloads For an example of a reference architecture using MapReduce see Reference Architecture for running serverless MapReduce jobs 22 Customer Example – Fannie Mae Fannie Mae a leading source of financing for mortgage lenders uses AWS Lambda to run an “embarrassingly parallel ” workload for its financial modeling Fannie Mae uses Monte Carlo simulation processes to project future cash flows of mortgages that help manage mortgage risk The company found that its existing HPC grids were no longer meeting its growing busi ness needs So Fannie Mae built its new platform on Lambda and the system successfully scaled up to 15000 concurrent function executions This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 17 during testing The new system ran one simulation on 20 million mortgages completed in 2 hours which is three times faster than the old system Using a serverless architecture Fannie Mae can run large scale Monte Carlo simulations effectively because it doesn’t pay for idle compute resources It can also speed up its computations by running multiple Lambda functions concurrently Fannie Mae also experienced shorter than typical time tomarket because they were able to dispense with server management and monitoring along with the ability to eliminate much of the complex code previously required to manage application sc aling and reliability See the Fannie Mae AWS Summit 2017 presentation SMC303: Real time Data Processing Using AWS Lambda23 for more information IT Automation Serverless approaches eliminate the overhead of managing servers making most infrastructure tasks including provisioning configuration management alarms/monitors and timed cron jobs easier to create and manage Customer Example – Autodesk Autodesk which makes 3D design and engineering software uses AWS Lambda to automate its AWS account creation and management processes across its engineering organization Autodesk estimates that it realized cost savings of 98 percent (factoring in estimated savings in labor hours spent provisioning 
accounts) It can now provision accounts in just 10 minutes instead of the 10 hours it took to provision with the previous infrastructure based process The serverless solution enables Autodesk to a utomatically provision accounts configure and enforce standards and run audits with increased automation and fewer manual touchpoints For more information see the Autodesk AWS Summit 2017 presentation SMC301: The State of Serverless Computing 24 Visit GitHub to see the Autodesk Tailor service25 Machine Learning You can use serverless services to capture store and preprocess data before feeding it to your machine learning model After training the model you can also This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 18 serve the model for prediction at scale for inference without providing or managing any infrastr ucture Customer Example – Genworth Genworth Mortgage Insurance Australia Limited is a leading provider of lenders ’ mortgage insurance in Australia Genworth has more than 50 years of experience and data in this industry and wanted to use this historical information to train predictive analytics for loss mitigation machine learning models To achieve this task Genworth built a serverless machine learning pipeline at scale using services like AWS Glue a serverless managed ETL processing service to ingest and transform data and Amazon SageMaker to batch transform jobs and perform ML inference and process and publish the results of the analysis With the ML models Genworth could analyze recent repayment patterns for each insurance policy to prioritize t hem in likelihood and impact for each claim This process was automate d endtoend to help the business make data driven decisions and simplify high value manual work performed by the Loss Mitigation team Read the Machine Learning blog How Genworth built a serverless ML pipeline on AWS using Amazon SageMaker and AWS Glue for more information26 Conclusion Serverless approaches are designed to tackle two classic IT management problems: idle servers and operating fleets of servers that distract and detract from the business of creating differentiated customer value AWS serverless offerings solve these long standing problems with a pay for value billing model and by eliminating the need to manage the underlying infrastructure AWS constantly scans patches and monitors the underlying infrastructure making these applications more secure and provides built in fault tolerance with minimal configuration needed for high availability As a result developers can focus on writing business logic rather than managing infrastructure allowing enterprises to reduce time to market while paying for only the resources co nsumed This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 19 Existing companies are gaining significant agility and economic benefits from adopting serverless architectures and e nterprises should consider serverless first strategy for building cloud native microservices To learn more and read whitepapers on related topics see Serverless Computing and Applications 27 Contributors The following 
individuals and organizations contributed to this document: • Tim Wagner General Manager of AWS Serverless Applicatio ns Amazon Web Services • Paras Jain Technical Account Manager Amazon Web Services • John Lee Solutions Architect Amazon Web Services • Diego Magalh ães Principal Solutions Architect Amazon Web Services Further Reading For additional information see the following: • Architecture Best Practices for Serverless 28 • AWS Ramp Up Guide: Serverless29 Reference Architectures • Web Applications30 • Mobile Backends 31 • IoT Backends32 • File Processing33 • Stream Processing34 • Image Recognition Processing35 • MapReduce36 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 20 Document Revisions Date Description October 2017 First publication September 2021 Content refresh 1 https://wwwperlecom/articles/the costsavings ofcloud computing 40191237shtml 2 https://wwwgartnercom/en/newsroom/press releases/2021 0628gartner saysworldwide iaaspublic cloud services market grew 407percent in2020 3 https://d39w7f4ix9f5s9cloudfrontnet/e3/79/42bf75c94c279c67d777f002051f/ carbon reduction opportunity ofmoving toawspdf 4 Occupy the Clo ud: Eric Jonas et al Distributed Computing for the 99% https://arxivorg/abs/170204024 5 https://awsamazoncom/aws costmanagement/aws costoptimization/right sizing/ 6 https://docsawsamazoncom/lambda/latest/dg/lambda releaseshtml 7 https://serverlesslandcom/patterns 8 https://awsamazoncom/partners 9 https://awsamazoncom/serverless/ 10 https://d0awsstaticcom/whitepapers/AWS_Serverless_Multi Tier_Architecturespdf 11 https://githubcom/awslabs/lambda refarch webapp 12 https://githubcom/awslabs/lambda refarch mobilebackend 13 https://awsamazoncom/solutions/case studies/neimanmarcus case study 14 https://githubcom/awslabs/lambda refarch iotbackend Notes This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ optimizingenterpriseeconomicswithserverless/ optimizingenterpriseeconomicswith serverlesshtmlOptimizing Enterprise Economics with Serverless Architectures Page 21 15 https://wwwyoutubecom/watch?v= 1PDC6UOFtE 16 https://githubcom/awslabs/lambda refarch streamprocessing 17 https://githubcom/awslabs/lambda refarch fileprocessing 18 https://githubcom/awslabs/lambda refarch imagerecognition 19 https://wwwyoutubecom/watch?v=AcGv3qUrRC4&feature=youtube&t=1153 20 https://awsamazoncom/blogs/big data/enhancing customer safety by leveraging thescalable secure andcostoptimized toyota connected data lake/ 21 https://wwwyoutubecom/watch?v=IpuRyJY3B4k 22 https://githubcom/awslabs/lambda refarch mapreduce 23 https://wwwslidesharenet/AmazonWebServices/smc303 realtime data processing using awslambda/28 24 https:/ /wwwslidesharenet/AmazonWebServices/smc301 thestate of serverless computing 75290821/22 25 https://githubcom/alanwill/aws tailor 26 https://awsamazoncom/blogs/machine learning/how genworth builta serverless mlpipeline onawsusing amazon sagemaker andawsglue/ 27 https://awsamazoncom/serverless/ 28 https://awsamazoncom/architecture/serverless/ 29 https://d1awsstaticcom/training andcertification/ramp up_guides/Ramp Up_Guide_Serverlesspdf?svrd_rr1 30 https://githubcom/awslabs/lambda refarch webapp 31 https://githubcom/awslabs/lambda refarch mobilebackend 32 https://githubcom/awslabs/lambda 
refarch iotbackend 33 https://githubcom/awslabs/lambda refarch fileprocessing 34 https://githubcom/awslabs/lambda refarch streamprocessing 35 https://githubcom/awslabs/lambda refarch imagerecognition 36 https://githubcom/awslabs/lambda refarch mapreduce
General
Amazon_Aurora_MySQL_Database_Administrators_Handbook_Connection_Management
This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/amazonauroramysqldbadminhandbook/amazonauroramysqldbadminhandbook.html

Amazon Aurora MySQL Database Administrator's Handbook: Connection Management
First Published January 2018
Updated October 20, 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
DNS endpoints
Connection handling in Aurora MySQL and MySQL
Common misconceptions
Best practices
Using smart drivers
DNS caching
Connection management and pooling
Connection scaling
Transaction management and autocommit
Connection handshakes
Load balancing with the reader endpoint
Designing for fault tolerance and quick recovery
Server configuration
Conclusion
Contributors
Further reading
Document revisions

Abstract
This paper outlines the best practices for managing database connections, setting server connection parameters, and configuring client programs, drivers, and connectors. It's a recommended read for Amazon Aurora MySQL Database Administrators (DBAs) and application developers.

Introduction
Amazon Aurora MySQL (Aurora MySQL) is a managed relational database engine, wire-compatible with MySQL 5.6 and 5.7. Most of the drivers, connectors, and tools that you currently use with MySQL can be used with Aurora MySQL with little or no change.

Aurora MySQL database (DB) clusters provide advanced features such as:
• One primary instance that supports read/write operations and up to 15 Aurora Replicas that support read-only operations. Each of the Replicas can be automatically promoted to the primary role if the current primary instance fails.
• A cluster endpoint that automatically follows the primary instance in case of failover.
• A reader endpoint that includes all Aurora Replicas and is automatically updated when Aurora Replicas are added or removed.
• Ability to create custom DNS endpoints containing a user-configured group of database instances within a single cluster.
• Internal server connection pooling and thread multiplexing for improved scalability.
• Near-instantaneous database restarts and crash recovery.
• Access to near real-time cluster metadata that enables application developers to build smart drivers, connecting directly to individual instances based on their read/write or read-only role.

Client-side components (applications, drivers, connectors, and proxies) that use suboptimal configuration might not be able to react to recovery actions and DB cluster topology changes, or the reaction might be delayed. This can contribute to unexpected downtime and performance issues. To prevent that, and to make the most of Aurora MySQL features, AWS encourages Database Administrators (DBAs) and application developers to implement the best practices outlined in this whitepaper.

DNS endpoints
An Aurora DB cluster consists of one or more instances and a cluster volume that manages the data for those instances. There are two types of instances:
• Primary instance – Supports read and write statements. Currently, there can be one primary instance per DB cluster.
• Aurora Replica – Supports read-only statements. A DB cluster can have up to 15 Aurora Replicas. The Aurora Replicas can be used for read scaling and are automatically used as failover targets in case of a primary instance failure.

Amazon Aurora supports the following types of Domain Name System (DNS) endpoints:
• Cluster endpoint – Connects you to the primary instance and automatically follows the primary instance in case of failover, that is, when the current primary instance is demoted and one of the Aurora Replicas is promoted in its place.
• Reader endpoint – Includes all Aurora Replicas in the DB cluster under a single DNS CNAME. You can use the reader endpoint to implement DNS round-robin load balancing for read-only connections.
• Instance endpoint – Each instance in the DB cluster has its own individual endpoint. You can use this endpoint to connect directly to a specific instance.
• Custom endpoints – User-defined DNS endpoints containing a selected group of instances from a given cluster.

For more information, refer to the Overview of Amazon Aurora page.
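As a simple illustration of how these endpoints are typically used from application code, the following minimal sketch sends writes to the cluster endpoint and reads to the reader endpoint. It assumes the mysql-connector-python driver (one of several MySQL drivers that work unchanged with Aurora MySQL); the endpoint host names, credentials, database, and table are placeholders.

import mysql.connector

# Placeholder endpoint names; substitute the values shown for your cluster
# in the RDS console.
CLUSTER_ENDPOINT = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def get_connection(host):
    return mysql.connector.connect(
        host=host, user="app_user", password="app_password", database="appdb"
    )

# Writes go to the cluster endpoint, which always points at the primary.
conn = get_connection(CLUSTER_ENDPOINT)
cur = conn.cursor()
cur.execute("INSERT INTO events (payload) VALUES (%s)", ("hello",))  # placeholder table
conn.commit()
cur.close()
conn.close()

# Reads can go to the reader endpoint, which round-robins across replicas.
conn = get_connection(READER_ENDPOINT)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM events")
print(cur.fetchone())
cur.close()
conn.close()

Connecting through these DNS names, rather than hard-coding individual instance addresses, is what allows the application to follow the cluster through failovers and replica additions or removals.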
Connection handling in Aurora MySQL and MySQL
MySQL Community Edition manages connections in a one-thread-per-connection fashion. This means that each individual user connection receives a dedicated operating system thread in the mysqld process. Issues with this type of connection handling include:
• Relatively high memory use when there is a large number of user connections, even if the connections are completely idle.
• Higher internal server contention and context-switching overhead when working with thousands of user connections.

Aurora MySQL supports a thread pool approach that addresses these issues. You can characterize the thread pool approach as follows:
• It uses thread multiplexing, where a number of worker threads can switch between user sessions (connections). A worker thread is not fixed or dedicated to a single user session. Whenever a connection is not active (for example, it is idle, waiting for user input, waiting for I/O, and so on), the worker thread can switch to another connection and do useful work. You can think of worker threads as CPU cores in a multi-core system: even though you only have a few cores, you can easily run hundreds of programs simultaneously because they're not all active at the same time. This highly efficient approach means that Aurora MySQL can handle thousands of concurrent clients with just a handful of worker threads.
• The thread pool automatically scales itself. The Aurora MySQL database process continuously monitors its thread pool state and launches new workers or destroys existing ones as needed. This is transparent to the user and doesn't need any manual configuration.

Server thread pooling reduces the server-side cost of maintaining connections. However, it doesn't eliminate the cost of setting up these connections in the first place. Opening and closing connections isn't as simple as sending a single TCP packet. For busy workloads with short-lived connections (for example, key-value or online transaction processing (OLTP) workloads), consider using an application-side connection pool.

The following is a network packet trace for a MySQL connection handshake taking place between a client and a MySQL-compatible server located in the same Availability Zone:

04:23:29.547316 IP client.32918 > server.mysql: tcp 0
04:23:29.547478 IP server.mysql > client.32918: tcp 0
04:23:29.547496 IP client.32918 > server.mysql: tcp 0
04:23:29.547823 IP server.mysql > client.32918: tcp 78
04:23:29.547839 IP client.32918 > server.mysql: tcp 0
04:23:29.547865 IP client.32918 > server.mysql: tcp 191
04:23:29.547993 IP server.mysql > client.32918: tcp 0
04:23:29.548047 IP server.mysql > client.32918: tcp 11
04:23:29.548091 IP client.32918 > server.mysql: tcp 37
04:23:29.548361 IP server.mysql > client.32918: tcp 99
04:23:29.587272 IP client.32918 > server.mysql: tcp 0

This is a packet trace for closing the connection:

04:23:37.117523 IP client.32918 > server.mysql: tcp 13
04:23:37.117818 IP server.mysql > client.32918: tcp 56
04:23:37.117842 IP client.32918 > server.mysql: tcp 0

As you can see, even the simple act of opening and closing a single connection involves an exchange of several network packets. The connection overhead becomes more pronounced when you consider the SQL statements issued by drivers as part of connection setup (for example, SET variable_name = value commands used to set session-level configuration). Server-side thread pooling doesn't eliminate this type of overhead.
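To show what an application-side connection pool looks like in practice, here is a minimal sketch using the pooling support built into the mysql-connector-python driver. Any driver- or framework-level pool accomplishes the same goal; the host name, credentials, database, and pool size below are placeholder values, not recommendations from this whitepaper.

import mysql.connector.pooling

# A small pool of persistent connections, created once at application startup.
# The pool size is a placeholder; size it to your workload's concurrency needs.
pool = mysql.connector.pooling.MySQLConnectionPool(
    pool_name="app_pool",
    pool_size=10,
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder
    user="app_user",
    password="app_password",
    database="appdb",
)

def run_query(sql, params=None):
    # Borrow an already-open connection instead of paying the handshake cost.
    conn = pool.get_connection()
    try:
        cur = conn.cursor()
        cur.execute(sql, params or ())
        rows = cur.fetchall()
        cur.close()
        return rows
    finally:
        # close() returns the connection to the pool rather than tearing it down.
        conn.close()

print(run_query("SELECT NOW()"))

Because each request reuses one of the long-lived pooled connections, the multi-packet handshake and session-setup statements shown in the traces above are paid only once per pooled connection rather than once per query.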
queries within those connections Even if the connections are long lived you can still benefit from using a connection pool to protect the database against connection surges that is large bursts of new connection attempts • Idle connections don’t use memory This isn’t true because the operating system and the database process both allocate an in memory descriptor for each user connection What is typically true is that Auror a MySQL uses less memory than MySQL Community Edition to maintain the same number of connections However memory usage for idle connections is still not zero even with Aurora MySQL The general best practice is to avoid opening significantly more connect ions than you need This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 5 • Downtime depends entirely on database stability and database features This isn’t true because the application design and configuration play an important role in determining how fast user traffic can recover following a database event For more details refer to the Best practices section of this whitepaper Best practices The following are best practices for managing database connections and configuring connection drivers and pools Using smart drivers The cluster and reader endpoints abstract the role changes (primary instance promotion and demotion) and topology changes (addition and removal of instances) occurring in the DB cluster However DNS updates are not instantaneous In addition they can sometimes contribute to a slightly longer delay between the time a database event occurs and the time it’s noticed and handled by the application Aurora MySQL exposes near realtime metadata about DB instances in the INFORMATION_SCHEMAREPLICA_HOST_STATUS table Here is an example of a query against the metadata table: mysql> select server_id if(session_id = 'MASTER_SESSION_ID' 'writer' 'reader' ) as role replica_lag_in_milliseconds from information_schemareplica_host_status; + + + + | server_id | role | replica_lag_in_milliseconds | + + + + | aurora nodeusw2a | writer | 0 | | aurora nodeusw2b | reader | 19253999710083008 | + + + + 2 rows in set (000 sec) Notice that the table contains cluster wide metadata You can query the table on any instance in the DB cluster For the purpose of this whitepaper a smart driver is a database driver or connector with the ability to read DB cluster topology from the metadata table It can rou te new connections to individual instance endpoints without relying on high level cluster This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 6 endpoints A smart driver is also typically capable of load balancing read only connections across the available Aurora Replicas in a round robin fashion The MariaDB Connector/J is an example of a third party Java Database Connectivity (JDBC) smart driver with native support for Aurora MySQL DB clusters Application developers can draw inspiration from the MariaDB driver to build drivers and connectors for languages other than Java Refer to the MariaDB Connector/J page for details The AWS JDBC Driver for MySQL (preview) is a client driver designed for the high availability 
of Aurora MySQL The AWS JDBC Driver for MySQL is drop in compatible with the MySQL Connector/J driver The AWS JDBC Driver for MySQL takes full advantage of the failover capabilities of Aurora MySQL The AWS JDBC Driver for MySQL fully maintains a cache of the DB cluster topology and each DB in stance's role either primary DB instance or Aurora Replica It uses this topology to bypass the delays caused by DNS resolution so that a connection to the new primary DB instance is established as fast as possible Refer to the AWS JDBC Driver for MySQL GitHub repository for details If you’re using a smart driver the recommendations listed in the following sections still apply A smart driver can automate and abstract certain layers of database connectivity However it doesn’t automatically configure itself with optimal settings or automatically make the application resilient to failures For example when using a smart driver you still need to ensure that the connection val idation and recycling functions are configured correctly there’s no excessive DNS caching in the underlying system and network layers transactions are managed correctly and so on It’s a good idea to evaluate the use of smart drivers in your setup Note that if a third party driver contains Aurora MySQL –specific functionality it doesn’t mean that it has been officially tested validated or certified by AWS Also note that due to the advanced builtin features and higher overall complexity smart driver s are likely to receive updates and bug fixes more frequently than traditional (bare bones) drivers You should regularly review the driver’s release notes and use the latest available version whenever possible This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 7 DNS caching Unless you use a smart databas e driver you depend on DNS record updates and DNS propagation for failovers instance scaling and load balancing across Aurora Replicas Currently Aurora DNS zones use a short Time ToLive (TTL) of five seconds Ensure that your network and client confi gurations don’t further increase the DNS cache TTL Remember that DNS caching can occur anywhere from your network layer through the operating system to the application container For example Java virtual machines (JVMs) are notorious for caching DNS in definitely unless configured otherwise Here are some examples of issues that can occur if you don’t follow DNS caching best practices: • After a new primary instance is promoted during a failover applications continue to send write traffic to the old insta nce Data modifying statements will fail because that instance is no longer the primary instance • After a DB instance is scaled up or down applications are unable to connect to it Due to DNS caching applications continue to use the old IP address of tha t instance which is no longer valid • Aurora Replicas can experience unequal utilization for example one DB instance receiving significantly more traffic than the others Connection management and pooling Always close database connections explicitly inst ead of relying on the development framework or language destructors to do it There are situations especially in container based or code asaservice scenarios when the underlying code container isn’t immediately destroyed after the code completes In su ch cases you might experience 
database connection leaks where connections are left open and continue to hold resources (for example memory and locks) If you can’t rely on client applications (or interactive clients) to close idle connections use the server’s wait_timeout and interactive_timeout parameters to configure idle connection timeout The default timeout value is fairly high at 28800 seconds ( 8 hours) You should tune it down to a value that’s acceptable in your environment Refer to the MySQL Reference Manual for details Consider using connection pooling to protect the database against connection surges Also consider connection pooling if the appli cation opens large numbers of connections (for example thousands or more per second) and the connections are short lived that is the time required for connection setup and teardown is significant compared to the This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 8 total connection lifetime If your develo pment framework or language doesn’t support connection pooling you can use a connection proxy instead Amazon RDS Proxy is a fully managed highly available database proxy for Amazon Relational Database Service (Amazon RDS) that makes applications more scalable more resilient to database failures and more secure ProxySQL MaxScale and ScaleArc are examples of third party proxies compatible with the MySQL protocol Refer to the Connection scaling section of this document for more notes on connection pools versus proxies By using Amazon RDS Proxy you can allow your applications to pool and share database connections to improve their ability to scale Amazon RDS Proxy make s applications more resilient to database failures by automatically connecting to a standby DB instance while preserving application connections AWS recommend s the following for configuring connection pools and proxies: • Check and validate connection healt h when the connection is borrowed from the pool The validation query can be as simple as SELECT 1 However in Amazon Aurora you can also use connection checks that return a different value depending on whether the instance is a primary instance (read/wri te) or an Aurora Replica (read only) For example you can use the @@innodb_read_only variable to determine the instance role If the variable value is TRUE you're on an Aurora Replica • Check and validate connections periodically even when they're not borrowed It helps detect and clean up broken or unhealthy connections before an application thread attempts to use them • Don't let connections remain in the pool indefinitely Recycle connections by closing and reopening them periodically (for example ev ery 15 minutes) which frees the resources associated with these connections It also helps prevent dangerous situations such as runaway queries or zombie connections that clients have abandoned This recommendation applies to all connections not just idl e ones This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 9 Connection scaling The most common technique for scaling web service capacity is to add or remove application servers (instances) in response to changes in user 
traffic Each application server can use a database connection pool This approach ca uses the total number of database connections to grow proportionally with the number of application instances For example 20 application servers configured with 200 database connections each would require a total of 4000 database connections If the app lication pool scales up to 200 instances (for example during peak hours) the total connection count will reach 40000 Under a typical web application workload most of these connections are likely idle In extreme cases this can limit database scalabil ity: idle connections do take server resources and you’re opening significantly more of them than you need Also the total number of connections is not easy to control because it’s not something you configure directly but rather depends on the number of application servers You have two options in this situation: • Tune the connection pools on application instances Reduce the number of connections in the pool to the acceptable minimum This can be a stop gap solution but it might not be a long term solut ion as your application server fleet continues to grow • Introduce a connection proxy between the database and the application On one side the proxy connects to the database with a fixed number of connections On the other side the proxy accepts applicat ion connections and can provide additional features such as query caching connection buffering query rewriting/routing and load balancing Connection proxies • Amazon RDS Proxy is a fully managed highly available database proxy for Amazon RDS that makes applications more scalable more resilient to database failures and more secure Amazon RDS Proxy reduces the memory and CPU overhead for connection management on the database • Using Amazon RDS Proxy you can handle unpredictable surges in database traffic that otherwise might cause issues due to oversubscribing connections or creating new connections at a fast rate To protect the database against oversubscription you can control the number of database connections that are created This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 10 • Each RDS proxy performs connection pooling for the writer instance of its associated Amazon RDS or Aurora database Connection pooling is an optimization that reduces the overhead associated with opening and closing connections and with keeping many connections ope n simultaneously This overhead includes memory needed to handle each new connection It also involves CPU overhead to close each connection and open a new one such as Transport Layer Security/Secure Sockets Layer (TLS/SSL) handshaking authentication ne gotiating capabilities and so on Connection pooling simplifies your application logic You don't need to write application code to minimize the number of simultaneous open connections Connection pooling also cuts down on the amount of time a user must w ait to establish a connection to the database • To perform load balancing for read intensive workloads you can create a read only endpoint for RDS proxy That endpoint passes connections to the reader endpoint of the cluster That way your proxy connectio ns can take advantage of Aurora read scalability • ProxySQL MaxScale and ScaleArc are examples of third party proxies compatible with the MySQL protocol For even 
greater scalability and availability, you can use multiple proxy instances behind a single DNS endpoint.

Transaction management and autocommit

With autocommit enabled, each SQL statement runs within its own transaction. When the statement ends, the transaction ends as well. Between statements, the client connection is not in transaction. If you need a transaction to remain open for more than one statement, you explicitly begin the transaction, run the statements, and then commit or roll back the transaction. With autocommit disabled, the connection is always in transaction. You can commit or roll back the current transaction, at which point the server immediately opens a new one. Refer to the MySQL Reference Manual for details.

Running with autocommit disabled is not recommended because it encourages long-running transactions where they're not needed. Open transactions block a server's internal garbage collection mechanisms, which are essential to maintaining optimal performance. In extreme cases, garbage collection backlog leads to excessive storage consumption, elevated CPU utilization, and query slowness.

Recommendations:
• Always run with autocommit mode enabled. Set the autocommit parameter to 1 on the database side (which is the default) and on the application side (which might not be the default).
• Always double-check the autocommit settings on the application side. For example, Python drivers such as MySQLdb and PyMySQL disable autocommit by default.
• Manage transactions explicitly by using BEGIN/START TRANSACTION and COMMIT/ROLLBACK statements. You should start transactions when you need them and commit as soon as the transactional work is done.

Note that these recommendations are not specific to Aurora MySQL. They apply to MySQL and other databases that use the InnoDB storage engine.

Long transactions and garbage collection backlog are easy to monitor:
• You can obtain the metadata of currently running transactions from the INFORMATION_SCHEMA.INNODB_TRX table. The TRX_STARTED column contains the transaction start time, and you can use it to calculate transaction age. A transaction is worth investigating if it has been running for several minutes or more. Refer to the MySQL Reference Manual for details about the table.
• You can read the size of the garbage collection backlog from InnoDB's trx_rseg_history_len counter in the INFORMATION_SCHEMA.INNODB_METRICS table. Refer to the MySQL Reference Manual for details about the table. The larger the counter value is, the more severe the impact might be in terms of query performance, CPU usage, and storage consumption. Values in the range of tens of thousands indicate that the garbage collection is somewhat delayed. Values in the range of millions or tens of millions might be dangerous and should be investigated.

Note – In Amazon Aurora, all DB instances use the same storage volume, which means that the garbage collection is cluster-wide and not specific to each instance. Consequently, a runaway transaction on one instance can impact all instances. Therefore, you should monitor long transactions on all DB instances.
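The two checks just described can be combined into a small monitoring script. The following is a minimal sketch using PyMySQL (one of the Python drivers mentioned earlier in this handbook); the cluster endpoint, credentials, and the two alerting thresholds are illustrative assumptions, not values recommended by the handbook.

```python
import pymysql

# Illustrative assumptions: replace the endpoint, credentials, and thresholds
# with values appropriate for your environment.
HOST = "my-cluster.cluster-example.us-west-2.rds.amazonaws.com"
LONG_TRX_SECONDS = 300        # flag transactions older than five minutes
HISTORY_LEN_WARN = 50000      # "somewhat delayed" garbage collection

conn = pymysql.connect(host=HOST, user="admin", password="...", autocommit=True)
try:
    with conn.cursor() as cur:
        # Age of currently open transactions (INFORMATION_SCHEMA.INNODB_TRX).
        cur.execute(
            "SELECT trx_id, trx_started,"
            " TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_seconds"
            " FROM information_schema.innodb_trx"
            " WHERE TIMESTAMPDIFF(SECOND, trx_started, NOW()) > %s",
            (LONG_TRX_SECONDS,),
        )
        for trx_id, started, age in cur.fetchall():
            print(f"Long-running transaction {trx_id}: started {started}, {age}s old")

        # Garbage collection backlog (InnoDB history list length).
        cur.execute(
            "SELECT `count` FROM information_schema.innodb_metrics"
            " WHERE name = 'trx_rseg_history_len'"
        )
        row = cur.fetchone()
        if row and row[0] > HISTORY_LEN_WARN:
            print(f"History list length {row[0]} exceeds {HISTORY_LEN_WARN}")
finally:
    conn.close()
```

Because garbage collection is cluster-wide in Aurora, you could run the same script against each instance endpoint on a schedule (for example, from a cron job or a Lambda function) rather than against the cluster endpoint only.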
amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 12 Connection handshakes A lot of work can happen behind the scenes when an application connector or a graphical user interface (GUI) tool opens a new database session Drivers and client tools commonly run series of statements to set up session configuration (for example SET SESSION variable = value ) This increases the cost of creating new connections and delays when your application can start issuing queries The cost of connection handshakes becomes even more important if your applications are very sensitive to latency OLTP or keyvalue workloads that expect single digit millisecond latency can be visibly impacted if each connection is expensive to open For example if the driver runs six statements to set up a connection and each statement takes just one millisecond to run your application will be delayed by six milliseconds before it issues its first query Recommendations : • Use the Aurora MySQL Advanced Au dit the General Query Log or network level packet traces (for example with tcpdump ) to obtain a record of statements run during a connection handshake Whether or not you’re experiencing connection or latency issues you should be familiar with the inte rnal operations of your database driver • For each handshake statement you should be able to explain its purpose and describe its impact on queries you'll subsequently run on that connection • Each handshake statement requires at least one network roundtrip and will contribute to higher overall se ssion latency If the number of handshake statements appears to be significant relative to the number of statements doing actual work determine if you can disable any of the handshake statements Consider using connection pooling to reduce the number of c onnection handshakes Load balancing with the reader endpoint Because the reader endpoint contains all Aurora Replicas it can provide DNS based round robin load balancing for new connections Every time you resolve the reader endpoint you'll get an inst ance IP that you can connect to chosen in round robin fashion DNS load balancing works at the connection level (not the individual query level) You must keep resolving the endpoint without caching DNS to get a different instance IP on each resolution I f you only resolve the endpoint once and then keep the connection in This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 13 your pool every query on that connection goes to the same instance If you cache DNS you receive the same instance IP each time you resolve the endpoint You can use Amazon RDS Proxy to create additional read only endpoints for an Aurora cluster These endpoints perform the same kind of load balancing as the Aurora reader endpoint Applications can reconnect more quickly to the proxy endpoints than the Aurora reader endpoint if reader in stances become unavailable If you don’t follow best practices these are examples of issues that can occur: • Unequal use of Aurora Replicas for example one of the Aurora Replicas is receiving most or all of the traffic while the other Aurora Replicas sit idle • After you add or scale an Aurora Replica it doesn’t receive traffic or it begins to receive traffic after an unexpectedly long delay • After you remove an 
Aurora Replica applications continue to send traffic to that instance For more information refer to the DNS endpoints and DNS caching sections of this document Designing for fault tolerance and quick recovery In large scale database operations you’re statistically more likely to experience issues such as connection interruptions or hardware failures You must also take operational actions more frequently such as scaling adding or removing DB instances and performing software upgrades The only scalable way of addressi ng this challenge is to assume that issues and changes will occur and design your applications accordingly Examples : • If Aurora MySQL detects that the primary instance has failed it can promote a new primary instance and fail over to it which typically h appens within 30 seconds Your application should be designed to recognize the change quickly and without manual intervention • If you create additional Aurora Replicas in an Aurora DB cluster your application should automatically recognize the new Aurora Replicas and send traffic to them • If you remove instances from a DB cluster your application should not try to connect to them This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 14 Test your applications extensively and prepare a list of assumptions about how the application should react to database events Then experimentally validate the assumptions If you don’t follow best practices database events (for example failovers scaling and software upgrades) might result in longer than expected downtime For example you might notice that a failover took 30 seconds (per the DB cluster’s event notifications) but the application remained down for much longer Server configuration There are two major server configuration variables worth mentioning in the context of this whitepaper : max_connections and max_connect_errors Configuration variable max_connections The configuration variable max_connections limits the number of database connections per Aurora DB instance The best practice is to set it slightly higher than the maximum number of connections you expect to open on each instance If you also enabled performance_schema be extra careful with the setting The Performance Schema memory structures are sized automatically based on server configuration variables including max_connections The higher you set the variable the more memory Performance Schema uses In extreme cases this can lead to out of memory issues on smaller instance types Note for T2 and T3 instance families Using Performance Schema on T2 and T3 Aurora DB instances with less than 8 GB of memory isn’t recommended To reduce the risk of out ofmemory issues on T2 and T3 instances: • Don’t enable Performance Schema • If you must use Performance Schema leave max_connections at the default value • Disable Performance Schema if you plan to increase max_connections to a value significantly greater than the default value Refer to the MySQL Reference Manual for details about the max_connections variable This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 15 Configuration variable max_connect_errors 
The configuration variable max_connect_errors determines how many successive interrupted connection requests are permitted from a given client host. If the client host exceeds the number of successive failed connection attempts, the server blocks it. Further connection attempts from that client yield an error:

Host 'host_name' is blocked because of many connection errors
Unblock with 'mysqladmin flush-hosts'

A common (but incorrect) practice is to set the parameter to a very high value to avoid client connectivity issues. This practice isn't recommended because it:
• Allows application owners to tolerate connection problems rather than identify and resolve the underlying cause. Connection issues can impact your application health, so they should be resolved rather than ignored.
• Can hide real threats, for example, someone actively trying to break into the server.

If you experience "host is blocked" errors, increasing the value of the max_connect_errors variable isn't the correct response. Instead, investigate the server's diagnostic counters in the aborted_connects status variable and the host_cache table. Then use the information to identify and fix clients that run into connection issues. Also note that this parameter has no effect if skip_name_resolve is set to 1 (default).

Refer to the MySQL Reference Manual for details on the following:
• max_connect_errors variable
• "Host is blocked" error
• aborted_connects status variable
• host_cache table

Conclusion

Understanding and implementing connection management best practices is critical to achieve scalability, reduce downtime, and ensure smooth integration between the application and database layers. You can apply most of the recommendations provided in this whitepaper with little to no engineering effort. The guidance provided in this whitepaper should help you introduce improvements in your current and future application deployments using Aurora MySQL DB clusters.

Contributors

Contributors to this document include:
• Szymon Komendera, Database Engineer, Amazon Aurora
• Samuel Selvan, Database Specialist Solutions Architect, Amazon Web Services

Further reading

For additional information, refer to:
• Aurora on Amazon RDS User Guide
• Communication Errors and Aborted Connections in MySQL Reference Manual

Document revisions
• October 20, 2021: Minor content updates to follow new style guide and hyperlinks
• July 2021: Minor content updates to the following topics: Smart Drivers, Connection Management and Pooling, and Connection Scaling
• March 2019: Minor content updates to the following topics: Introduction, DNS Endpoints, and Server Configuration
• January 2018: First publication
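As a closing illustration, the following sketch ties together several recommendations made throughout this handbook: an application-side pool, validation on borrow using @@innodb_read_only, periodic recycling, and autocommit left enabled. It assumes PyMySQL, and the pool size and recycle interval shown are arbitrary example values; production applications would normally rely on a mature pooling library or Amazon RDS Proxy rather than hand-rolled code like this.

```python
import queue
import time

import pymysql


class SimplePool:
    """Tiny illustration of validate-on-borrow and periodic recycling."""

    def __init__(self, host, user, password, size=5, recycle_seconds=900):
        # Keep autocommit enabled, as recommended in this handbook.
        self._conn_args = dict(host=host, user=user, password=password,
                               autocommit=True)
        self._recycle_seconds = recycle_seconds
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(self._open())

    def _open(self):
        return {"conn": pymysql.connect(**self._conn_args),
                "opened": time.time(), "is_reader": None}

    def borrow(self):
        entry = self._idle.get()
        # Recycle connections that have stayed open too long; this frees
        # server-side resources and avoids abandoned "zombie" connections.
        if time.time() - entry["opened"] > self._recycle_seconds:
            entry["conn"].close()
            entry = self._open()
        # Validate on borrow. @@innodb_read_only is 0 on the writer and 1 on
        # an Aurora Replica, so the caller can also check the instance role.
        try:
            with entry["conn"].cursor() as cur:
                cur.execute("SELECT @@innodb_read_only")
                entry["is_reader"] = bool(cur.fetchone()[0])
        except pymysql.MySQLError:
            entry["conn"].close()
            entry = self._open()
        return entry

    def give_back(self, entry):
        self._idle.put(entry)
```

A caller would borrow() an entry, run its statements (using explicit BEGIN/COMMIT only when a multi-statement transaction is needed), and give_back() the entry promptly so that connections are not held longer than necessary.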
General
Secure_Content_Delivery_with_CloudFront
Secure Content Delivery with Amazon CloudFront Improve the Security and Performance of Your Applications While Lowering Your Content Delivery Costs November 2016 This paper has been archived For the latest technical content about secure content delivery with Amazon CloudFront see https://docsawsamazoncom/whitepapers/latest/secure contentdeliveryamazoncloudfront/securecontentdelivery withamazoncloudfronthtml Archived © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Enabling Easy SSL/TLS Adoption 2 Using Custom SSL Certificates with SNI Custom SSL 3 Meeting Requirements for PCI Compliance and Industry Standard Apple iOS ATS 4 Improving Performance of SSL/TLS Connections 5 Terminating SSL Connections at the Edge 6 Supporting Session Tickets and OCSP Stapling 6 Balancing Security and Performance with Half Bridge and Full Bridge TLS Termination 7 Ensuring Asset Availability 8 Making SSL/TLS Adoption Economical 8 Conclusion 9 Further Reading 9 Notes 11 Archived Abstract As companies respond to cybercrime compliance requirements and a commitment to securing customer data their adoption of Secure Sockets Layer/Transport Layer Security (SSL/TLS) protocols increases This whitepaper explains how Amazon CloudFront improves the security and performance of your APIs and applications while helping you lower your content delivery costs It focuses on three specific benefits of using CloudFront: easy SSL adoption with AWS Certificate Manager (ACM) and Server Name Indication (SNI) Custom SSL support improved SSL performance with SSL termination available at all CloudFront edge locations globally and economical adoption of SSL thanks to free custom SSL certificates with ACM and SNI support at no additional charge ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 1 of 11 Introduction The adoption of Secure Sockets Layer/Transport Layer Security (SSL/TLS) protocols to encrypt Internet traffic has increased in response to more cybercrime compliance requirements (PCI v32) and a commitment to secure customer data A survey of the top 140000 websites revealed that more than 40 percent were secured by SSL 1 As measured by Alexa (an amazoncom company) 32 percent of the top million URLs were encrypted using HTTPS (also called HTTP over TLS HTTP over SSL and HTTP Secure) in September 20162 an increase of 45 percent from the same month in 2015 Amazon CloudFront is moving in this direction with a rapidly increasing share of global content traffic on CloudFront delivered over SSL/TLS CloudFront integrates with AWS Certificate Manager (ACM) for SSL/TLSlevel support to ensure secure data transmission using the most modern ciphers and handshakes Figure 1 shows 
how this secure content delivery works Figure 1: Secure content delivery with CloudFront and the AWS Certificate Manager SSL/TLS on CloudFront offers these key benefits (summarized in Table 1) :  Ease of use  Improved performance ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 2 of 11  Lower costs The integration of CloudFront with ACM reduces the time to s et up and deploy SSL/TLS certificates and translates to improved HTTPS availability and performance Finally certificates and encrypted data rates are offered at very low charge These benefits are discussed in detail in the following sections Table 1: Summary of the key benefits of SSL/TLS on CloudFront Ease of Use Improved Performance Lower Costs Integrated with ACM  Procurement of new certificate directly from CloudFront console  Automatic certificate distribution globally  Automatic certificate renewal Revocation management SNI Custom SSL support Support for standards (eg Apple iOS ATS and PCI) SSL management in AWS environment HTTPS capability at all global edge locations SSL/TLS termination close to viewers Latency reduction with Session Tickets and OCSP stapling Free custom SSL/TLS certificate with ACM SNI Custom SSL/TLS at no additional charge No setup fees no hosting fees and no extra charges for the HTTPS bytes transferred Standard (or discounted with a signed contract) CloudFront rates for data transfer and HTTPS requests Enabling Easy SSL/TLS Adoption All browsers have the capability to interact with secured web servers using the SSL/TLS protocol However both browser and server need an SSL certificate to establish a secure connection Support for SSL certificate management requires working with a Certificate Authority (CA) which is a thirdparty that is trusted by both the subject of the certificate (eg the content owner) and the party that relies on the certificate (eg the content viewer) The entire manual process of purchasing uploading and renewing valid certificates through thirdparty CAs can be quite lengthy AWS provides seamless integration between CloudFront and ACM to reduce the creation and deployment time of a new free custom SSL certificate and make certificate management a simpler more automatic process as shown in Figure 2 ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 3 of 11 Custom SSL certificates allow you to deliver secure content using your own domain name (eg www examplecom) Although it typically takes a couple of minutes for a certificate to be issued after receiving approval it could take longer3 Once a certificate is issued or imported into ACM it is immediately available for use via the CloudFront console and automatically propagated to the global network of CloudFront edge locations when it is associated with distributions ACM automatically handles certificate renewal which makes configuring and maintaining SSL/TLS for your secure website or application easier and less error prone than by using a manual process In turn this help s you avoid downtime due to misconfigured revoked or expired certificates ACMprovided certificates are valid for 13 months and renewal starts 60 days prior to expiration If a certificate is compromised it can be revoked and replaced via ACM at no additional charge AWS ensures that private keys are never exported which removes the need to secure and track them Figure 2: CloudFront integration with ACM Using SSL Certificates with SNI Custom SSL You can use your own SSL certificates with CloudFront at no 
additional charge with Server Name Indication (SNI) Custom SSL SNI is an extension of the TLS protocol that provides an efficient way to deliver content over HTTPS using your ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 4 of 11 own domain and SSL certificate SNI identifies the domain without the server having to examine the request body so it can offer the correct certificate during the TLS handshake SNI is supported by most modern browsers including Chrome 60 and later Safari 30 and later Firefox 20 and later and Internet Explorer 7 and later4 (If you need to support older browsers and operating systems you can use the CloudFront dedicated IPbased custom SSL for an additional charge) Meeting Requirements for PCI Compliance and Industry Standard Apple iOS ATS You can leverage the combination of ACM SNI and CloudFront security features to help meet the requirements of many compliance and regulatory standards such as PCI Additionally CloudFront has “out ofthe box” support f or the industry standard Apple iOS App Transport Security (ATS) For more information on CloudFront security capabilities see Table 2 and Table 3 Table 2: Overview of CloudFront security capabilities Vulnerability CloudFront Security Capabilit ies Cryptographic attacks CloudFront frequently reviews the latest security standards and supports only viewer requests using SSL v3 and TLS v10 11 and 12 When available TLS v13 will also be supported CloudFront supports the strongest ciphers (ECDHE RSA AES128 GCM SHA256) and offers them to the clie nt in preferential sequence Export ciphers are not supported Patching Dedicated teams are responsible for monitoring the threat landscape handling security events and patching software Under t he shared security model AWS will take the necessary meas ures to remediate vulnerabilities with methods such as patching deprecation and revocation DDoS attacks CloudFront has extensive mitigation techniques for standard flood type attacks against SSL To thwart SSL renegotiation type attacks CloudFront dis ables renegotiation Table 3 : Amazon CloudFront support of Apple iOS ATS requirements Apple iOS ATS Requirement CloudFront Support TLS/SSL version must be TLS 12 CloudFront supports TLS 12 ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 5 of 11 Apple iOS ATS Requirement CloudFront Support TLS Cipher Suite must be from the following with Perfect Forward Secrecy : CloudFront supports Perfect Forward Secr ecy with the following ciphers: ECDSA Certificates: RSA Certificates: TLS_ECDHE_ECDSA_WITH_AES_ 256_GCM_ SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDH E_ECDSA_WITH_AES_128_GCM_ SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDH E_ECDSA_WITH_AES_256_CBC_SHA384 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 TLS_E CDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDH E_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES _128_CBC_SHA TLS_E CDHE_ECDSA_WITH_AES_128_CBC_SHA RSA Certificates: TLS_ECDHE_RSA_WITH_AES_256_G CM_SHA384 TLS_EC DHE_RSA_WITH_AES_128_GCM_SHA256 TLS_EC DHE_RSA_WITH_AES_256_CBC_SHA384 TLS_EC DHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA Leaf server certs must be signed with the following : Server certificates signed with the following type of key: Rivest Shamir Adleman (RSA) key with a length of at least 2048 bits Rivest Shamir Adleman (RSA) key with a length of 2048 bits Elliptic Curve Cryptography (ECC) key with a size of at least 256 bits Improving Performance of 
SSL/TLS Connections You may see a degradation in the performance of your API or application when clients connect directly to your origin servers using SSL Setting up an SSL/TLS connection adds up to three round trips between the client and server introducing additional latency in the connection setup Once the connection is established additional CPU resources are required to encrypt the data that is transmitted ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 6 of 11 Terminating SSL Connections at the Edge When you enable SSL with CloudFront all global edge locations are used for handling your SSL traffic Clients terminate SSL connections at a nearby CloudFront edge location thus reducing network latency in setting up an SSL connection In addition moving the SSL termination to CloudFront helps you offload encryption to CloudFront servers that are specifically designed to be highly scalable and performance optimized These factors boost the performance of not only static content but also dynamic content For example Slack improved its performance when it migrated the delivery of its dynamic content to HTTPS with CloudFront The worldwide average response time to slackcom dropped from 488 milliseconds to 199 milliseconds (see Figure 3) A large portion of these performance benefits came from the decreased SSL negotiation time as the worldwide average for SSL connection times decreased from 215 milliseconds to 52 milliseconds Figure 3: Slack improved its performance by delivering its dynamic content via HTTPS with CloudFront Supporting Session Tickets and OCSP Stapling CloudFront further improves the performance of SSL connections with the support of Session Tickets and Online Certificate Status Protocol (OCSP) stapling (see Figure 4) Session Tickets help decrease the time spent restarting or resuming an SSL session CloudFront encrypts SSL session information and ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 7 of 11 stores it in a ticket that the client can use to resume a secure connection instead of repeating the SSL handshake process OCSP stapling improves the time taken for individual SSL handshakes by moving the OSCP check (a call used to obtain the revocation status of an SSL certificate) from the client to a periodic secure check by the CloudFront servers With OCSP stapling the CloudFront engineering team measured up to a 30 percent performance improvement in the initial connection between the client and the server Figure 4: Session Tickets decrease the time spent restarting or resuming an SSL session Balancing Security and Performance with Half Bridge and Full Bridge TLS Termination With CloudFront you can strike a balance between security and performance by choosing between half bridge and full bridge TLS termination (see Figure 5) By defining different cache behaviors in the same distribution you can define which connections to the origin use HTTPS and which use HTTP You can configure objects that need secure connections to the origin to use HTTPS (eg login pages sensitive data) and configure objects that do not need secure connections to use HTTP (eg logos images) Thus everything can be securely transmitted to the client and origin fetches can be optimized to use HTTP to reduce the overall latency of the transaction ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 8 of 11 Figure 5: Balancing security and performance on the same distribution For full secure delivery you can configure 
CloudFront to require HTTPS for communication between viewers and CloudFront and optionally between CloudFront and your origin5 Also you can configure CloudFront to require viewers to interact with your content over an HTTPS connection using the HTTP to HTTPS Redirect feature When you enable HTTP to HTTPS Redirect CloudFront will respond to an HTTP request with a 301 redirect response that requires the viewer to resend the request over HTTPS Ensuring Asset Availability CloudFront puts significant focus on and dedication to maintaining the availability of your assets Availability is calculated based on how often an attempt was made to download a single object and how often the download failed As shown in Table 4 CloudFront SSL availability (as measured from real clients) across multiple regions is consistently high when compared to other top CDNs6 Table 4 : SSL /TLS traffic – availability by geography for July 2016 to August 2016 # CDN United States Europe Japan Korea 1 CloudFront SSL 9914 9935 9935 9922 2 CDN A 9870 9753 9864 9898 3 CDA B 9677 9444 9167 9819 Making SSL/TLS Adoption Economical CloudFront enables you to generate custom SSL/TLS certificates with ACM and support them with SNI at no additional charge These features are offered with ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 9 of 11 no setup fees no hosting fees and no extra charges for the HTTPS bytes transferred You simply pay standard (or discounted with a signed contract) CloudFront rates for data transfer and HTTPS requests For more information see the Amazon C loudFront pricing page 7 For dedicated IP custom SSL there is an additional charge per month This additional charge is associated with dedicating multiple IP v4 addresses (a finite resource) for each SSL certificate at each CloudFront edge location Conclusion You can deliver your secure APIs or applications via SSL/TLS with Amazon CloudFront in an easy way at no additional charge and with improved SSL performance You can create free custom SSL/TLS certificates with AWS ACM in minutes and immediately add them to your CloudFront distributions at no additional charge with automatic SNI support You don’t have to manage certificate renewal because ACM takes care of it automatically and if any certificate is compromised you can revoke it and replace it via ACM You can do all of this while benefiting from improved SSL/TLS performance because of SSL/TLS terminations near your end user and CloudFront support of Session Tickets and OCSP stapling This also applies if you want to deliver dynamic content as CloudFront provides a way to increase performance and security at no additional charge Further Reading There is a wealth of information available in the following whitepapers blog posts user guides presentations and slides to help customers get a deeper understanding of CloudFront ACM and how SSL is used Amazon CloudFront Custom SSL  Amazon CloudFront Custom SSL  List of browsers supported by SNI Custom SSL AWS Certificate Manager ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 10 of 11  Getting started  Managed certificate renewal  FAQs Blogs  Amazon CloudFront What’s New  HTTP and TLS v11 v12 to the origin  AWS Certificate Manager – Deploy SSL/TLSBased Apps on AWS Developers Guide  Introduction to Amazon CloudFront  Using an HTTPS Connection to Access Your Objects Slack Performance Improvement with Amazon CloudFront  Video  Slides re:Invent Presentations  SSL with Amazon Web Services 
(SEC316) 11/2014  Using Amazon CloudFront For Your Websites & Apps STG206 10/2015  Secure Content delivery Using Amazon CloudFront STG205 10/2015 re:Invent Slides  Secure Content Delivery Using Amazon CloudFront and AWS WAF ArchivedAmazon Web Services – Secure Content Delivery with Amazon CloudFront Page 11 of 11 Notes 1 https://wwwtrustworthyinternetorg/sslpulse/ 2 http://httparchiveorg/trendsphp#perHttps 3 https://awsamazoncom/certificatemanager/faqs/ 4 https://enwikipediaorg/wiki/Server_Name_Indication 5 http://docsawsamazoncom/AmazonCloudFront/latest/DeveloperGuide/Secu reConnectionshtml#SecureConnectionsHowToRequireCustomProcedure 6 http://wwwcedexiscom/getthedata/country report/?report=secure_object_delivery_response_time 7 https://awsamazoncom/cloudfront/pricing/ Archived
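To show how the ACM certificate, SNI, and HTTP-to-HTTPS redirect capabilities described in this paper come together in practice, the following is a minimal sketch that updates an existing distribution with boto3. The distribution ID and certificate ARN are placeholders, and the minimum protocol version shown is an assumption; check the current CloudFront documentation for the values appropriate to your clients.

```python
import boto3

cloudfront = boto3.client("cloudfront")

DISTRIBUTION_ID = "EXXXXXXXXXXXXX"  # placeholder
# ACM certificates used with CloudFront must be in the US East (N. Virginia) Region.
ACM_CERT_ARN = "arn:aws:acm:us-east-1:111122223333:certificate/example"  # placeholder

# Fetch the current configuration and its ETag (required for updates).
resp = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Serve the custom domain with an ACM certificate over SNI.
config["ViewerCertificate"] = {
    "ACMCertificateArn": ACM_CERT_ARN,
    "SSLSupportMethod": "sni-only",
    "MinimumProtocolVersion": "TLSv1.2_2021",  # assumption; choose per your client base
    "CloudFrontDefaultCertificate": False,
}

# Respond to plain HTTP viewers with a 301 redirect to HTTPS.
config["DefaultCacheBehavior"]["ViewerProtocolPolicy"] = "redirect-to-https"

cloudfront.update_distribution(
    Id=DISTRIBUTION_ID,
    IfMatch=etag,
    DistributionConfig=config,
)
```

Because SNI-based custom SSL carries no additional charge, this kind of configuration change affects security posture and protocol support without changing the pricing model described above.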
General
The_Total_Cost_of_Non_Ownership_of_a_NoSQL_Database_Cloud_Service
The Total Cost of (Non) Ownership of a NoSQL Database Cloud Service
Jinesh Varia and Jose Papo
March 2012

This paper has been archived. To find the latest technical content about the AWS Cloud, go to the AWS Whitepapers & Guides page on the AWS website: https://aws.amazon.com/whitepapers/
General
File_Gateway_for_Hybrid_Cloud_Storage_Architectures
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers File Gateway for Hybrid Cloud Storage Architectures Overview and Best Practices for the File Gateway Configuration of the AWS Storage Gateway Service March 2019 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents AWS’s current product offerings and practices which are subject to change without notice and (c) does not create any commitments or assu rances from AWS and its affiliates suppliers or licensors AWS’s products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied AWS’s responsibilities and liabilities to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 File Gateway Architecture 1 File to Object Mapping 2 Read/Write Operations and Local Cache 4 Choosing the Right Cache Resources 6 Security and Access Controls Within a Local Area Network 6 Monitoring Cache and Traffic 7 File Gateway Bucket Inventory 7 Amazon S3 and the File Gateway 10 File Gateway Use Cases 12 Cloud Tiering 13 Hybrid Cloud Backup 13 Conclusion 15 Contributors 15 Further Reading 15 Document Revisions 15 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract Organizations are looking for ways to reduce their physical data center footprints particularly for storage arrays used as secondary file backup or on demand workloads However providing data services that bridge private data centers and the cloud comes with a unique set of challenges Traditional data center storage services rely on low latency network attached storage (NAS) and storage area network (SAN) protocols to access storage locally Cloud native applications are generally optimized for API acces s to data in scalable and durable cloud object storage such as Amazon Simple Storage Service (Amazon S3) This paper outlines the basic architecture and best practices for building hybrid cloud storage environments using the AWS Storage Gateway in a file gateway configuration to address key use cases such as cloud tiering hybrid cloud backup distribution and cloud processing of data generated by on premises applications This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 1 Introduction Organizations are looking for ways to reduce their physical data center infrastructure A great way to start is by moving secondary or tertiary workloads such as long term file retention and backup and re covery operations to the cloud In addition organ izations want to take advantage of the elasticity of cloud architectures and features to access and use their data in new on demand ways that a traditional data center 
infrastructure can’t support AWS Storage Gateway has multiple gateway types including a file gateway that provides lowlatency Network File System (NFS) and Server Message Block (SMB) access to Amazon Simple Storage Service (Amazon S3) objects from on premises applications At the same time customers can access that data from any Amazon S 3 APIenabled application Configuring AWS Storage Gateway as a file gateway enables hybrid cloud storage architectures in use cases such as archiving on demand bursting of workloads and backup to the AWS Cloud Individual files that are written to Amazo n S3 using the file gateway are stored as independent objects This provides high durability lowcost flexible storage with virtually infinite capacity Files are stored as objects in Amazon S3 in their original format without any proprietary modificatio n This means that data is readily available to data analytics and machine learning applications and services that natively integrate with Amazon S3 buckets such as Amazon EMR Amazon Athena or Amazon Trans cribe It also allows for storage management through native Amazon S3 features such as lifecycle policies analytics and crossregion replication (CRR) A file gateway communicates efficiently between private data centers and AWS Traditional NAS protocols (SMB and NFS) are trans lated to object storage API calls This makes file gateway an ideal component for organizations looking for tiered storage of file or backup data with lowlatency local access and durable storage in the cloud File Gateway Architecture A file gateway provides a simple solution for presenting one or more Amazon S3 buckets and their objects as a mountable NFS or SMB file share to one or more clients onpremises The file gateway is deployed as a virtual machine in VMware ESXi or Microsoft Hyper V environments on premises or in an Amazon Elastic Compute Cloud (Amazon EC2) instance in AWS File gateway can also be deployed in data center and remote office locations on a Stora ge Gateway hardware appliance When deployed file gateway provides a seamless connection between onpremises NFS (v30 or v41) or SMB (v1 or v2) client s—typically application s—and Amazon S3 buckets hosted in a given AWS Region The file gateway employs a local read/write cache to provide a lowlatency This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 2 access to data for file share clients in the same local area network (LAN) as the file gateway A bucket share consists of a file share hosted from a file gateway across a single Amazon S3 bucket The file gateway virtual machine appliance currently supports up to 10 bucket shares Figure 1: Basic file gateway architecture Here are the components of the fi le gateway architecture shown in Figure 1 : 1 Clients access objects as files using an NFS or SMB file share exported through an AWS Storage Gateway in the file gateway configuration 2 Expandable read/write cache for the file gateway 3 File gateway virtual appliance 4 Amazon S3 which provides persistent object storage for all files that are written using the file gateway File to Object Mapping After deploy ing activat ing and configur ing the file gateway one or more bucket shares can be presented to clients that support NFS v3 or v41 protocols or mapped to a share via SMB v1 or v2 protocols on the local LAN Each share (or mount point) on the gateway is paired to a 
single bucket and the contents of the bucket are available as files and folders in the share Writing an individual file to a share on the file gateway creates an identically named object in the associated bucket All newly created objects are written to Amazon S3 Standard Amazon S3 Standard – Infrequent Access ( S3 Standard – IA) or Amazon S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cl oud Storage Architectures Page 3 One Zone – Infrequent Access ( S3 One Zone – IA) storage classes depending on the configuration of the share The Amazon S3 key name of a newly created object is identical to the full path of the file that is written to the moun t point in AWS Storage Gateway Figure 2: Files stored over NFS on the file gateway mapping to Amazon S3 objects One difference between storing data in Amazon S3 versus a traditional file system is the way in which granular permi ssions and metadata are implemented and stored Access to files stored directly in Amazon S3 is secured by policies stored in Amazon S3 and AWS Identity and Access Management (IAM) All other attributes such as storage class and creation date are stored in a given object’s metadata When a file is accessed over NFS or SMB the file permissions folder permissions and attributes are stored in the file system To reliably persist file permissions and attributes the file gateway stores this information as part of Amazon S3 object metadata If the permissions are changed on a file over NFS or SMB the gateway modifies the metadata of the associated objects that are stored in Amazon S3 to reflect the changes Custom default UNIX permissions are defined for all existing S3 objects within a bucket when a share is created from the AWS Management Console or using the file gateway API This feature lets you create NFS or SMB enabled shares from buckets with existing content without having to manually assign permissions after you create the share The following is an example of a file that is stored in a share bucket and is listed from a Linux based client that is mounting the share bucket over NFS The example shows that the file “file1txt” has a mod ification date and standard UNIX file permissions [e2user@host]$ ls l /media/filegateway1/ total 1 rwrwr 1 ec2user ec2 user 36 Mar 15 22:49 file1txt [e2user@host]$ This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 4 The following example shows the output from the head object on Amazon S3 It shows the same file from the perspective of the object that is stored in Amazon S3 Note that the permissions and time stamp in the previous example are stored durably as metadata for the object [e2user@host]$ aws s3api head object bucket filegateway1 key file1txt { "AcceptRanges": "bytes" "ContentType": "application/octet stream" "LastModified": "Wed 15 Mar 2017 22:49:02 GMT" "ContentLength": 36 "VersionId": "93XCzHcBUHBSg2yP8yKMHzxUumhovEC" "ETag": " \"0a7fb5dbb1a e1f6a13c6b4e4dcf54977 1\"" "ServerSideEncryption": "AES256" "Metadata": { "filegroup": "500" "useragentid": "sgw 7619FB1F" "fileowner": "500" "awssgw": "57c3c3e92a7781f868cb10020b33aa6b2859d58c86819066 1bcceae87f7b96f1" "filemtime": "1489618141421" "filectime": "1489618141421" "useragent": "aws storagegateway" "filepermissions": "0664" } } 
Read/Write Operations and Local Cache

As part of a file gateway deployment, dedicated local storage is allocated to provide a read/write cache for all hosted share buckets. The read/write cache greatly improves response times for on-premises file (NFS/SMB) operations. The local cache holds both recently written and recently read content, and it does not proactively evict data while the cache disk has free space. However, when the cache is full, AWS Storage Gateway evicts data based on a least recently used (LRU) algorithm. Recently accessed data remains available for reads, and write operations are not impeded.

Read Operations (Read-Through Cache)

When an NFS client performs a read request, the file gateway first checks the local cache for the requested data. If the data is not in the cache, the gateway retrieves the data from Amazon S3 using range GET requests to minimize the data transferred over the internet, while repopulating the read cache on behalf of the client.

1. The NFS/SMB client performs a read request on part of a given file.
2. The file gateway first checks whether the required bytes are cached locally.
3. If the bytes are not in the local cache, the file gateway performs a byte-range GET on the associated S3 object.

Figure 3: File gateway read operations

Write Operations (Write-Back Cache)

When a file is written to the file gateway over NFS/SMB, the gateway first commits the write to the local cache. At this point, the write success is acknowledged to the local NFS/SMB client, taking full advantage of the low latency of the local area network. After the write cache is populated, the file is transferred to the associated Amazon S3 bucket asynchronously, so that local performance is not limited by internet transfers. When an existing file is modified, the file gateway transfers only the newly written bytes to the associated Amazon S3 bucket. It uses Amazon S3 API calls to construct a new object from the previous version combined with the newly uploaded bytes. This reduces the amount of data that must be transferred when clients modify existing files on the file gateway.

1. The file share client performs many parallel writes to a given file.
2. The file gateway appliance acknowledges writes synchronously and aggregates the writes locally.
3. The file gateway appliance uses S3 multipart upload to send the new writes (bytes) to S3.
4. A new object is constructed in S3 from a combination of the new uploads and byte ranges from the previous version of the object.

Figure 4: File gateway write operations

Choosing the Right Cache Resources

When configuring a file gateway VM on a host machine, you can allocate disks for the local cache. Selecting a cache size that can sufficiently hold the active working set (for example, a database backup file) provides optimal performance for file share clients. Additionally, splitting the cache across multiple disks maximizes throughput by parallelizing access to storage, resulting in faster reads and writes. When available for your on-premises gateway, we also recommend using SSD or ephemeral disks, which can provide write and read (cache hit) throughputs of up to 500 MB/s.
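Disks added to the gateway VM for cache must be allocated to the gateway's cache storage before they are used. The following is a minimal AWS CLI sketch of that step; the gateway ARN and the disk ID are placeholders for the values returned for your own gateway.

# List the local disks attached to the gateway (hypothetical gateway ARN)
aws storagegateway list-local-disks --gateway-arn arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B

# Allocate one of the returned disk IDs as cache storage
aws storagegateway add-cache --gateway-arn arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B --disk-ids pci-0000:03:00.0-scsi-0:0:1:0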
Security and Access Controls Within a Local Area Network When you creat e a mount point (share) on a deployed gateway you select a single Amazon S3 bucket to be the persistent object storage for files and associated metadata Default UNIX permissions are defined a s part of the configuration of the mount point These permissions are applied to all existing objects in the Amazon S3 bucket This process ensures that clients that access the mount point adhere to file and directory level security for existing content In addition an entire mount point and its associated Amazon S3 content can be protected on the LAN by limiting mount access to individual hosts or a range of hosts This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 7 For NFS file shares this limitation is defined by using a Classless Inter Domain Routing (CIDR) block or individual IP addresses For SMB file shares you can control access using Active Directory (AD) domains or authenticated guest access You can further limit a ccess to selected AD users and groups allowing only specified users (or users in the specified groups) to map the file share as a drive on their Microsoft Windows machines Monitoring Cache and Traffic As workloads or architectures evolve t he cache and Internet requirements that are associated with a given file gateway deployment can change over time To give visibility into resource use the file gateway provides statistical information in the form of Amazon CloudWatch metrics The metrics cover cache consumptio n cache hits/misses data transfer and read/write metrics For more information see Monitoring Your File Share File Gateway Bucket Inventory To re duce both latency and the number of Amazon S3 operations when performing list operations the file gateway stores a local bucket inventory that contains a record of all recently listed objects The bucket inventory is populated on demand as the file share clients list parts of the file share for the first time The file gateway updates inventory records only when the gateway itself modifies deletes or creates new objects on behalf of the clients The file gateway cannot detect changes to objects in an NFS or SMB file share’s bucket by a secondary gateway that is associated with the same Amazon S3 bucket or by any other Amazon S3 API call outside of the file gateway When Amazon S3 objects have to be modified outside of the file share and recognized by the file gateway (such as changes made by Amazon EMR or other AWS services ) the bucket inventory must be refreshed using either the RefreshCache API call or RefreshCache AWS Command Line Interface (CLI) command RefreshCache can be manually invoked automate d using a CloudWatch Event or triggered through the use of the NotifyWhenUploaded API call once the files have been written to the file share using a secondary gateway A CloudWatch notification named Storage Gatew ay Upload Notification Event is triggered once the files written by the secondary gateway have been uploaded to S3 The target of this event could be a Lambda function invoking RefreshCache to inform the primary gateway of this change RefreshCache reinventories the existing records in a file gateway’s bucket inventory This communicates changes of known objects to the file share clients that access a given share This paper has been archived For the latest technical content refer t o the AWS Wh i t 
epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Clou d Storage Architectures Page 8 1 Object created by secondary gateway or external source 2 RefreshCache API called on file g ateway appliance share 3 Foreign object is reflected in file gateway bucket inventory and accessible by clients Figure 5: RefreshCache API called to re inventory Amazon S3 bucket Bucket Shares with Multiple Contributors When deploying more c omplex architectures such as when more than one file gateway share is associated with a single Amazon S3 bucket or in scenarios where a single bucket is modified by one or more file gateways in conjunction with other Amazon S3 enabled app lications note that file gateway does not support object locking or file coherency across file gateways Since file gateways cannot detect other file gateways be cautious when designing and deploy ing solutions that use more than one file gateway share wi th the same Amazon S3 bucket File gateways associated with the same Amazon S3 bucket detect new changes to the content in the bucket only in the following circumstances: 1 A file gateway recognizes changes it makes to the associated Amazon S3 bucket and ca n notify other gateways and applications by invoking the NotifyWhenUploaded API after it is done writing files to the share 2 A file gateway recognize s changes made to objects by other file gateways when the affected objects are located in folders (or prefixes) that have not been queried by that particular file gateway 3 A file gateway recognizes changes in an associated Amazon S3 bucket (bucket share) m ade by other contributors after the RefreshCache API is executed We recommend that you use the read only mount option on a file gateway share when you dep loy multiple gateways that have a common Amazon S3 bucket Designing architectures with only one writer and many readers is the simplest way to avoid write conflicts If multiple writers are required the clients accessing each gateway must be This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 9 tightly cont rolled to ensure that they don’t write to the same objects in the shared Amazon S3 bucket When multiple file gateways are accessing the same objects in the same Amazon S3 bucket make sure to call the RefreshCache API on file gateway shares that have to recognize changes made by other file gateways To fu rther optimize this operation and reduce the time it takes to run you can invoke the RefreshCache API on specific folders (recursively or not) in your share 1 Client creates a new file and file gateway #1 uploads object to S3 2 Customer invokes NotifyWhenUploaded API on file share of file gateway #1 3 CloudWatch Event (generated upon completion of Step 1 ) initiate s the RefreshCache API call to initiate a re inventory on file gateway #2 4 File gateway #2 presents newly created objects to clients Figure 6: RefreshCache API makes objects created by file gateway #1 visible to file gateway #2 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 10 Amazon S3 and the Fi le Gateway The file gateway uses Amazon S3 buckets to provide storage for each mount point (share) that is created on an individual gateway When 
you use Amazon S3 buckets, mount points provide limitless capacity, 99.999999999% durability for stored objects, and a consumption-based pricing model. Costs for data stored in Amazon S3 via AWS Storage Gateway are based on the Region where the gateway is located and on the storage class. A given mount point writes data directly to Amazon S3 Standard, Amazon S3 Standard-IA, or Amazon S3 One Zone-IA storage, depending on the initial configuration selected when creating the mount point. All of these storage classes provide equal durability. However, Amazon S3 Standard-IA and Amazon S3 One Zone-IA have a different pricing model and lower availability (that is, 99.9% compared with 99.99%), which makes them good solutions for less frequently accessed objects. The pricing for Amazon S3 Standard-IA and Amazon S3 One Zone-IA is ideal for objects that exist for more than 30 days and are larger than 128 KB. For details about price differences between Amazon S3 storage classes, see the Amazon S3 Pricing page.

Using Amazon S3 Object Lifecycle Management for Cost Optimization

Amazon S3 offers many storage classes. Today, the AWS Storage Gateway file gateway natively supports S3 Standard, S3 Standard - Infrequent Access, and S3 One Zone-IA. Amazon S3 lifecycle policies automate the management of data across storage tiers, and they can also expire objects based on an object's age. To transition data between storage classes, lifecycle policies are applied to an entire Amazon S3 bucket, which reflects a single mount point on a storage gateway. Lifecycle policies can also be applied to a specific prefix that reflects a folder within a hosted mount point on a file gateway. The lifecycle policy transition condition is based on the creation date or, optionally, on an object tag key-value pair. For more information about tagging, see Object Tagging in the Amazon S3 Developer Guide.

As an example, a lifecycle policy in its simplest implementation moves all objects in a given Amazon S3 bucket from Amazon S3 Standard to Amazon S3 Standard-IA and finally to Amazon S3 Glacier as the data ages. This means that files created by the file gateway are stored as objects in Amazon S3 buckets and can then be automatically transitioned to more economical storage classes as the content ages.
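As a concrete illustration of such a policy, the following AWS CLI sketch applies a 30-day transition to S3 Standard-IA and a 60-day transition to Amazon S3 Glacier to an entire bucket. The bucket name (filegateway1) and the day thresholds are example values only, not recommendations from this paper.

aws s3api put-bucket-lifecycle-configuration --bucket filegateway1 --lifecycle-configuration '{
  "Rules": [{
    "ID": "age-out-file-gateway-objects",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [
      {"Days": 30, "StorageClass": "STANDARD_IA"},
      {"Days": 60, "StorageClass": "GLACIER"}
    ]
  }]
}'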
Figure 7: Example of file gateway storing files as objects in Amazon S3 Standard and transitioning to Amazon S3 Standard-IA and Amazon S3 Glacier

If you use the file gateway to store data in S3 Standard-IA or S3 One Zone-IA, or to access data from either of the infrequent access storage classes, see Using Storage Classes in the AWS Storage Gateway User Guide to learn how the gateway mediates NFS/SMB (file-based) uploads that update or access the object.

Transitioning Objects to Amazon S3 Glacier

Files migrated using lifecycle policies are immediately available for NFS file read/write operations. Objects transitioned to Amazon S3 Glacier are visible when NFS files are listed on the file gateway; however, they are not readable unless they are restored to an S3 storage class using an API or the Amazon S3 console. If you try to read files that are stored as objects in Amazon S3 Glacier, you encounter a read I/O error on the client that attempts the read operation. For this reason, we recommend using lifecycle policies to transition files to Amazon S3 Glacier only for file content that does not require immediate access from an NFS/SMB client in an AWS Storage Gateway environment.

Amazon S3 Object Replication Across AWS Regions

Amazon S3 cross-region replication (CRR) can be combined with a file gateway architecture to store objects in two Amazon S3 buckets across two separate AWS Regions. CRR is used for a variety of use cases, such as protection against human error, protection against malicious destruction, or minimizing latency to clients in a remote AWS Region. Adding CRR to the file gateway architecture is just one example of how native Amazon S3 tools and features can be used in conjunction with the file gateway.

Figure 8: File gateway in a private data center with CRR to duplicate objects across AWS Regions

Using Amazon S3 Object Versioning

You can use the file gateway with Amazon S3 Object Versioning to store multiple versions of files as they are modified. If you require access to a previous version of an object through the gateway, the object first must be restored to that previous version in S3. You must also use the RefreshCache operation for the gateway to be notified of this restore. See Object Versioning Might Affect What You See in Your File System in the AWS Storage Gateway User Guide to learn more about using Amazon S3 versioned buckets for your file share.

Using the File Gateway for Write Once Read Many (WORM) Data

You can also use the file gateway to store and access data in environments with regulatory requirements that call for WORM storage. In this case, select a bucket with S3 Object Lock enabled as the storage for the file share. If files are modified or renamed through the file share clients, the file gateway creates a new version of the object without affecting prior versions, so the original locked version remains unchanged. See also Using the file gateway with Amazon S3 Object Lock in the AWS Storage Gateway User Guide.

File Gateway Use Cases

The following scenarios demonstrate how a file gateway can be used in both cloud tiering and backup architectures.

Cloud Tiering

In on-premises environments where storage resources are reaching capacity, migrating colder data to the file gateway can extend the life span of existing on-premises storage and reduce the need for capital expenditures on additional storage hardware and data center resources. When the file gateway is added to an existing storage environment, on-premises applications can take advantage of Amazon S3 storage durability, consumption-based pricing, and virtually infinite scale, while ensuring low-latency access to recently accessed data over NFS or SMB. Data can be tiered using either native host OS tools or third-party tools that integrate with standard file protocols such as NFS or SMB.

Figure 9: File gateway in a private data center providing Amazon S3 Standard or Amazon S3 Standard-IA as a complement to existing storage deployments

Hybrid Cloud Backup

The file gateway provides a low-latency NFS/SMB interface that creates Amazon S3 objects of up to 5 TiB in size, stored in a supported AWS Region. This makes it an ideal hybrid target for backup
solutions that can use NFS or SMB By using a mixture of Amazon S3 storage classes data is stored on low cost highly durable cloud storage and automaticall y tiered to progressively lower cost storage as the likelihood of restoration diminishes Figure 10 shows an example architecture that assumes backups must retained for one year After 30 days the likelihood of restoration beco mes infrequent and after 60 days it becomes extremely rare In this solution you use Amazon S3 Standard as the initial location for backups for the first 30 days The backup software or scripts write backups to the file share preferably in the form of multi megabyte or larger size files Larger files offer better cost This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 14 optimization in the end toend solution including colder storage costs and lifecycle transition costs because fewer transitions are required After anoth er 30 days the backups are transitioned to Amazon S3 Glacier Here they are held until a full year has passed since they were first created at which point they are deleted 1 Client writes backups to file gateway over NFS or SMB 2 File gateway cache siz ed greater than expected backup 3 Initial backups stored in S3 Standard 4 Backups are transitioned to S3 Standard IA after 30 days 5 Backups are transitioned to S3 Glacier after 60 days Figure 10: Example of file gateway storing file s as objects in Amazon S3 Standard and transitioning to Amazon S3 Standard IA and Amazon S3 Glacier When sizing the file gateway cache in this type of solution understand the backup process itself One approach is to size the cache to be large enough to contain a complete full backup which allows restores from that backup to come directly from the cache —much more quickly than over a wide area network (WAN) link If the backup solution uses software that consolidates backup files by reading existing back ups before writing ongoing backups factor this configuration into the sizing of cache also This is because reading from the local cache during these types of operations reduces cost and increases overall performance of ongoing backup operations For both cases specified above you can use AWS DataSync to transfer data to the cloud from an onpremises data store From there the access to the data can be retain ed using a file gateway This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 15 Conclusion The file gateway configuration of AWS Storage Gateway provides a simple way to bridge data between private data centers and Amazon S3 storage The file gateway can enable hybrid architectures for cloud migration cloud tiering and hybrid cloud backup The file gat eway’s ability to provide a translation layer between the standard file storage protocol s and Amazon S3 APIs without obfuscation makes it ideal for architectures in which data must remain in its native format and be available both on premises and in the AWS Cloud For more information about the AWS Storage Gateway service see AWS Storage Gateway Contributors The following individuals and organizations contributed to this document: • Peter Levett Solut ions Architect AWS • David Green Solutions Architect AWS • Smitha Sriram Senior Product Manager AWS • 
Chris Rogers, Business Development Manager, AWS

Further Reading

For additional information, see the following:

• AWS Storage Services Overview Whitepaper
• AWS Whitepapers Web page
• AWS Storage Gateway Documentation
• AWS Documentation Web page

Document Revisions

Date          Description
March 2019    Updated for the S3 One Zone-IA storage class
April 2017    Initial document creation
General
Overview_of_AWS_Security__Compute_Services
ArchivedOverview of AWS Security Compute Services June 2016 (Please c onsult http://awsamazoncom/security/ forthelatest versi onofthispaper) THIS PAPER HAS BEEN ARCHIVED For the latest technical content see https://docsawsamazoncom/security/Archived Page 2 of 8 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’ current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’ products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Page 3 of 8 AWS ServiceSpecific Security Not only is security built into every layer of the AWS infrastructure but also into each of the services available on that infrastructure AWS services are architected to work efficiently and securely with all AWS networks and platforms Each service provides extensive security features to enable you to protect sensitive data and applications Compute Services Amazon Web Services provides a variety of cloudbased computing services that include a wide selection of compute instances that can scale up and down automatically to meet the needs of your application or enterprise Amazon Elastic Compute Cloud (Amazon EC2) Security Amazon Elastic Compute Cloud (EC2) is a key component in Amazon’s Infrastructure as a Service (IaaS) providing resizable computing capacity using server instances in AWS’ data centers Amazon EC2 is designed to make web scale computing easier by enabling you to obtain and configure capacity with minimal friction You create and launch instances which are collections of platform hardware and softwa re Multiple Levels of Security Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host platform the virtual instance OS or guest OS a firewall and signed API calls Each of these items builds on the capabilities of the others The goal is to prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users and to provide Amazon EC2 instances themselves that are as secure as possible without sacrificing the flexibility in configuration that customers demand The Hypervisor Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor taking advantage of paravirtualization (in the case of Linux guests) Because paravirtualized guests rely on the hypervisor to provide support for operations that normally require privileged access the guest OS has no elevated access to the CPU The CPU provides four separate privilege modes: 03 called rings Ring 0 is the most privileged and 3 the least The host OS executes in Ring 0 However rather than executing in Ring 0 as most operating systems do the guest OS runs in a lesser privileged Ring 1 and applications in the least privileged Ring 3 This explicit Archived Page 4 of 8 virtualization of the physical resources leads to a clear separation between guest and hypervisor resulting in additional security separation 
between the two Instance Isolation Different instances running on the same physical machine are isolated from each other via the Xen hypervisor AWS is active in the Xen community which provides awareness of the latest developments In addition the AWS firewall resides within the hypervisor layer between the physical network interface and the instance's virtual interface All packets must pass through this layer thus an instance ’s neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts The physical RAM is separated using similar mechanisms Customer instances have no access to raw disk devices but instead are presented with virtualized disks In addition memory allocated to guests is scrubbed (set to zero) by the hypervisor when it is unallocated to a guest The memory is not returned to the pool of free memory available for new allocations until the memory scrubbing is complete AWS recommends customers further protect their data using appropriate means One common solution is to run an encrypted file system on top of the virtualized disk device: Figure 3: Amazon EC2 Multiple Layers of Security Host Operating System : Administrators with a business need to access the management plane are required to use multi factor authentication to gain access to purposebuilt administration hosts These administrative hosts are systems that are specifically designed built configured and hardened to protect the management plane of the cloud All such access is logged and audited When an employee no longer has a business need to access the management plane the privileges and access to these hosts and relevant systems can be revoked Archived Page 5 of 8 Guest Operating System : Virtual instances are completely controlled by you the customer You have full root access or administrative control over accounts services and applications AWS does not have any access rights to your instances or the guest OS AWS recommends a base set of security best practices to include disabling passwordonly access to your guests and utilizing some form of multifactor authentication to gain access to your instances (or at a minimum certificatebased SSH Version 2 access) Additionally you should employ a privilege escalation mechanism with logging on a peruser basis For example if the guest OS is Linux after hardening your instance you should utilize certificate based SSHv2 to access the virtual instance disable remote root login use commandline logging and use ‘sudo’ for privilege escalation You should generate your own key pairs in order to guarantee that they are unique and not shared with other customers or with AWS AWS also supports the use of the Secure Shell (SSH) network protocol to enable you to log in securely to your UNIX/Linux EC2 instances Authentication for SSH used with AWS is via a public/private key pair to reduce the risk of unauthorized access to your instance You can also connect remotely to your Windows instances using Remote Desktop Protocol (RDP) by utilizing an RDP certificate generated for your instance You also control the updating and patching of your guest OS including security updates AWSprovided Windows and Linuxbased AMIs are updated regularly with the latest patches so if you do not need to preserve data or customizations on your running Amazon AMI instances you can simply relaunch new instances with the latest updated AMI In addition updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories 
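As a concrete illustration of these guest OS recommendations, the following is a minimal hardening sketch for a Linux instance. The configuration file location, service name (sshd), and the wheel group convention are typical for Amazon Linux and similar distributions; your own baseline and administrative user names may differ.

# Disable password-only SSH logins and remote root login (key-based SSHv2 access only)
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo service sshd restart

# Grant an administrative user privilege escalation through sudo rather than direct root access
sudo usermod -aG wheel ec2-user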
Firewall : Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default denyall mode and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic The traffic may be restricted by protocol by service port as well as by source IP address (individual IP or Classless InterDomain Routing (CIDR) block) The firewall can be configured in groups permitting different classes of instances to have different rules Consider for example the case of a traditional threetiered web application The group for the web servers would have port 80 (HTTP) and/or port 443 (HTTPS) open to the Internet The group for the application servers would have port 8000 (application specific) accessible only to the web server group The group for the database servers would have port 3306 (MySQL) open only to the application server group All three groups would permit adm inistrative access on port 22 (SSH) but only from the customer’s corporate network Highly secure applications can be deployed using this expressive mechanism See diagram below: Archived Page 6 of 8 Figure 4: Amazon EC2 Securi ty Group Firewall The firewall isn’t controlled through the guest OS; rather it requires your X509 certificate and key to authorize changes thus adding an extra layer of security AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall therefore enabling you to implement additional security through separation of duties The level of security afforded by the firewall is a function of which ports you open and for what duration and purpose The default state is to deny all incoming traffic and you should plan carefully what you will open when building and securing your applications Wellinformed traffic management and security design are still required on a per instance basis AWS further encourages you to apply additional perinstance filters with hostbased firewalls such as IPtables or the Windows Firewall and VPNs This can restrict both inbound and outbound traffic API Access: API calls to launch and terminate instances change firewall parameters and perform other functions are all signed by your Amazon Secret Access Key which could be either the AWS Accounts Secret Access Key or the Secret Access key of a user created with AWS IAM Without access to your Secret Access Key Amazon EC2 API calls cannot be made on your behalf In addition API calls can be encrypted with SSL to maintain confidentiality AWS recommends always using SSLprotected API endpoints Permissions: AWS IAM also enables you to further control what APIs a user has permissions to call Elastic Block Storage (Amazon EBS) Security: Amazon Elastic Block Storage (EBS) allows you to create storage volumes from 1 GB to 16 TB that can be mounted as devices by Archived Page 7 of 8 Amazon EC2 instances Storage volumes behave like raw unformatted block devices with user supplied device names and a block device interface You can create a file system on top of Amazon EBS volumes or use them in any other way you would use a block device (like a hard drive) Amazon EBS volume access is restricted to the AWS Account that created the volume and to the users under the AWS Account created with AWS IAM if the user has been granted access to the EBS operations thus denying all other AWS Accounts and users the permission to view or access the volume Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of normal operation of 
those services and at no additional charge However Amazon EBS replication is stored within the same availability zone not across multiple zones; therefore it is highly recommended that you conduct regular snapshots to Amazon S3 for longterm data durability For customers who have architected complex transactional databases using EBS it is recommended that backups to Amazon S3 be performed through the database management system so that distributed transactions and logs can be checkpointed AWS does not perform backups of data that are maintained on virtual disks attached to running instances on Amazon EC2 You can make Amazon EBS volume snapshots publicly available to other AWS Accounts to use as the basis for creating your own volumes Sharing Amazon EBS volume snapshots does not provide other AWS Accounts with the permission to alter or delete the original snapshot as that right is explicitly reserved for the AWS Account that created the volume An EBS snapshot is a blocklevel view of an entire EBS volume Note that data that is not visible through the file system on the volume such as files that have been deleted may be present in the EBS snapshot If you want to create shared snapshots you should do so carefully If a volume has held sensitive data or has had files deleted from it a new EBS volume should be created The data to be contained in the shared snapshot should be copied to the new volume and the snapshot created from the new volume Amazon EBS volumes are presented to you as raw unformatted block devices that have been wiped prior to being made available for use Wiping occurs immediately before reuse so that you can be assured that the wipe process completed If you have procedures requiring that all data be wiped via a specific method such as those detailed in DoD 522022 M (“National Industrial Security Program Operating Manual “) or NIST 800 88 (“Guidelines for Media Sanitization”) you have the ability to do so on Amazon EBS Encrypti on of sensitive data is general ly a good securi ty practice and AWS pro vides the ability to encry pt EBS vo lumes and their snapshots with AES256 The encryption o ccurs on the servers that host the EC2 instances providing encryption of data as it moves between EC2 instances and EBS storage In order to be able to do this efficiently and with low laten cy the EBS encryption feature is only available on EC2 's more powerful instance types (eg M 3 C3 R3 G2) Auto Scaling Security Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define so that the number of Amazon EC2 instances you are using scales up Archived Page 8 of 8 seamlessly during demand spikes to maintain performance and scales down automatically during demand lulls to minimize costs Like all AWS services Auto Scaling requires that every request made to its control API be authenticated so only authenticated users can access and manage Auto Scaling Requests are signed with an HMAC SHA1 signature calculated from the request and the user’s private key However getting credentials out to new EC2 instances launched with Auto Scaling can be challenging for large or elastically scaling fleets To simplify this process you can use roles within IAM so that any new instances launched with a role will be given credentials automatically When you launch an EC2 instance with an IAM role temporary AWS security credentials with permissions specified by the role will be securely provisioned to the instance and will be made available to your application via 
the Amazon EC2 Instance Metadata Service The Metadata Service will make new temporary security credentials available prior to the expiration of the current active credentials so that valid credentials are always available on the instance In addition the temporary security credentials are automatically rotated multiple times per day providing enhanced security You can further control access to Auto Scaling by creating users under your AWS Account using AWS IAM and controlling what Auto Scaling APIs these users have permission to call Further Reading https://awsamazoncom/security/securityresources/ Introduction to AWS Security Processes Overview of AWS Security Storage Services Overview of AWS Security Database Services Overview of AWS Security Compute Services Overview of AWS Security Application Services Overview of AWS Security Analytics Mobile and Application Services Overview of AWS Security – Network Services
General
Best_Practices_for_Migrating_MySQL_Databases_to_Amazon_Aurora
ArchivedBest Practices for Migrating MySQL Databases to Amazon Aurora October 2016 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Basic Performance Considerations 1 Client Location 1 Client Capacity 3 Client Configuration 4 Server Capacity 4 Tools and Procedures 5 Advanced Performance Concepts 6 Client Topics 6 Server Topics 7 Tools 8 Procedure Optimizations 12 Conclusion 18 Contributors 18 Archived Abstract This whitepaper discusses some of the important factors affecting the performance of selfmanaged export/import operations in Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon Aurora Although many of the topics are discussed in the context of Amazon RDS performance principles presented here also apply to the MySQL Community Edition found in selfmanaged MySQL installations Target Audience The target audience of this paper includes:  Database and system administrators performing migrations from MySQL compatible databases into Aurora where AWSmanaged migration tools cannot be used  Software developers working on bulk data import tools for MySQL compatible databases ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 1 Introduction Migrations are among the most timeconsuming tasks handled by database administrators (DBAs) Although the task becomes easier with the advent of managed migration services such as the AWS Database Migration Service (AWS DMS) many largescale database migrations still require a custom approach due to performance manageability and compatibility requirements The total time required to export data from the source repository and import it into the target database is one of the most important factors determining the success of all migration projects This paper discuss es the following major contributors to migration performance:  Client and server performance characteristics  The choice of migration tools; without the right tools even the most powerful client and server machines cannot reach their full potential  Optimized migration procedures to fully utilize the available client/server resources and leverage performanceoptimized tooling Basic Performance Considerations The following are basic considerations for client and server performance Tooling and procedure optimizations are described in more detail in “Tools and Procedures " later in this document Client Location Perform export/import operations from a client machine that is launched in the same location as the database server:  For onpremises database servers the client 
machine should be in the same onpremises network  For Amazon RDS or Amazon Elastic Compute Cloud (Amazon EC2) database instances the client instance should exist in the same Amazon Virtual Private Cloud (Amazon VPC) and Availability Zone as the server ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 2 For EC2Classic (nonVPC) servers the client should be located in the same AWS Region and Availability Zone Figure 1: Logical migration between AWS Cloud databases To follow the preceding recommendations during migrations between distant databases you might need to use two client machines:  One in the source network so that it’s close to the server you’re migrating from  Another in the target network so that it’s close to the server you’re migrating to In this case you can move dump files between client machines using file transfer protocols (such as FTP or SFTP) or upload them to Amazon Simple Storage Service (Amazon S3) To further reduce the total migration time you can compress files prior to transferring them ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 3 Figure 2: Data flow in a selfmanaged migration from onpremises to an AWS Cloud database Client Capacity Regardless of its location the client machine should have adequate CPU I/O and network capacity to perform the requested operations Although the definition of adequate varies depending on use cases the general recommendations are as follows:  If the export or import involves realtime processing of data for example compression or decompression choose an instance class with at least one CPU core per export/import thread  Ensure that there is enough network bandwidth available to the client instance We recommend using instance types that support enhanced networking For more information see the Enhanced Networking section in the Amazon EC2 User Guide 1  Ensure that the client’s storage layer provides the expecte d read/write capacity For example if you expect to dump data at 100 megabytes per second the instance and its underlying Amazon Elastic Block Store ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 4 (Amazon EBS) volume must be capable of sustaining at least 100 MB/s (800 Mbps) of I/O throughput Client Configuration For best performance on Linux client instances we recommend that you enable the receive packet steering (RPS) and receive flow steering (RFS) features To enable RPS use the following code sudo sh c 'for x in /sys/class/net/eth0/queues/r x*; do echo ffffffff > $x/rps_cpus; done' sudo sh c "echo 4096 > /sys/class/net/eth0/queues/rx 0/rps_flow_cnt" sudo sh c "echo 4096 > /sys/class/net/eth0/queues/rx 1/rps_flow_cnt To enable RFS use the following code sudo sh c "echo 32768 > /proc/sys/ net/core/rps_sock_flow_entries" Server Capacity To dump or ingest data at optimal speed the database server should have enough I/O and CPU capacity In traditional databases I/O performance often becomes the ultimate bottleneck during migrations Aurora addresses this challenge by using a custom distributed storage layer designed to provide low latency and high throughput under multithreaded workloads In Aurora you don’t have to choose between storage types or provision storage specifically for export/import purposes We recommend using Aurora for instances with one CPU core per thread for exports and two CPU cores per thread for imports If you’ve chosen an instance class with enough CPU cores to 
handle your export/import the instance should already offer adequate network bandwidth ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 5 For more information see “Server Topics ” later in this document Tools and Procedures Whenever possible perform export and import operations in multithreaded fashion On modern systems equipped with multicore CPUs and distributed storage this approach ensures that all available client/server resources are consumed efficiently Engineer export/import procedures to avoid unnecessary overhead The following table lists common export/import performance challenges and provides sample solutions You can use it do drive your tooling and procedure choices Import Technique Challenge Potential Solution Examples Single row INSERT statements Storage and SQL processing overhead Use multi row SQL statements Use non SQL format (eg CSV flat files) Import 1 MB of data per statement Use a set of flat files (chunks) 1 GB each Single row or multi row statements with small transaction size Transactional overhead each statement is committed separately Increase transaction size Commit once per 1000 statements Flat file imports with very large transaction size Undo management overhead Reduce transaction size Commit once per 1 GB of data imported Single threaded export/import Under utilization of server resources only one table is accessed at a time Export/import multiple tables in parallel Export from or load into 8 tables in parallel If you are exporting data from an active production database you have to find a balance between the performance of production queries and that of the export itself Execute export operations carefully so that you don ’t compromise the performance of the production workload This information is discussed i n more detail in the following section ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 6 Advanced Performance Concepts Client Topics Contrary to the popular opinion that total migration time depends exclusively on server performance data migrations can often be constrained by clientside factors It is important that you identify understand and finally address client side bottlenecks; otherwise you may not achieve the goal of reaching optimal import/export performance Client Location The location of the client machine is an important factor affecting data migrations performance benchmarks and day today database operations alike Remote clients can experience network latency ranging from dozens to hundreds of milliseconds Communication latency introduces unnecessary overhead to every database operation and can result in substantial performance degradation The performance impact of network latency is particularly visible during single threaded operations involving large amounts of short database statements With all statements executed on a single thread the statement throughput becomes the inverse of network latency yielding very low overall performance We strongly recommend that you perform all types of database activities from an Amazon EC2 instance located in the same VPC and Availability Zone as the database server For EC2Classic (non VPC) servers the client should be located in the same AWS Region and Availability Zone The reason we recommend that you launch client instances not only in the same AWS Region but also in the same VPC is that crossVPC traffic is treated as public and thus uses publicly routable IP addresses Because the traffic must travel 
through a public network segment the network path becomes longer resulting in higher communication latency ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 7 Client Capacity It is a common misconception that the specifications of client machines have little or no impact on export/import operations Although it is often true that resource utilization is higher on the server side it is still important to remember the following:  On small client instances multithreaded exports and imports can become CPUbound especially if data is compressed or decompressed on the fly eg when the data stream is piped through a compression tool like gzip  Multithreaded data migrations can consume substantial network and I/O bandwidth Choose the instance class and size and type of the underlying Amazon EBS storage volume carefully For more information see the Amazon EBS Volume Performance section in the Amazon EC2 User Guide 2 All operating systems provide diagnostic tools that can help you detect CPU network and I/O bottlenecks When investigating export/import performance issues we recommend that you use these tools and rule out clientside problems before digging deeper into server configuration Server Topics Serverside storage performance CPU power and network throughput are among the most important server characteristics affecting batch export/import operations Aurora supports pointandclick instance scaling that enables you to modify the compute and network capacity of your database cluster for the duration of the batch operations Storage Performance Aurora leverages a purposebuilt distributed storage layer designed to provide low latency and high throughput under multithreaded workloads You don't need to choose between storage volume types or provision storage specifically for export/import purposes ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 8 CPU Power Multithreaded exports/imports can become CPU bound when executed against smaller instance types We recommend using a server instance class with one CPU core per thread for exports and two CPU cores per thread for imports CPU capacity can be consumed efficiently only if the export/import is realized in multithreaded fashion Using an instance type with more CPU cores is unlikely to improve performance dump or import that is executed in a single thread Network Throughput Aurora does not use Amazon EBS volumes for storage As a result it is not constrained by the bandwidth of EBS network links or throughput limits of the EBS volumes However the theoretical peak I/O throughput of Aurora instances still depends on the instance class As a rule of thumb if you choose an instance class with enough CPU cores to handle the export/import (as discussed earlier) the instance should already offer adequate network performance Temporary Scaling In many cases export/import tasks can require significantly more compute capacity than day today database operations Thanks to the pointandclick compute scaling feature of Amazon RDS for MySQL and Aurora you can temporarily overprovision your instance and then scale it back down when you no longer need the additional capacity Note : Due to the benefits of the Aurora custom storage layer storage scaling is not needed before during or after exporting/imp orting data Tools With client and server machines located close to each other and sized adequately let ’s look at the different methods and tools you can use to actually move the data 
ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 9 Percona XtraBackup Aurora supports migration from Percona XtraBackup files stored in Amazon S3 Migrating from backup files can be significantly faster than migrating from logical schema and data dumps using tools such as mysqldump Logical imports work by executing SQL commands to recreate the schema and data from your source database which carries considerable processing overhead However Percona XtraBackup files can be ingested directly into an Aurora storage volume which removes the additional SQL execution cost A migration from Percona XtraBackup files involves three main steps: 1 Using the innobackupex tool to create a backup of the source database 2 Copying the backup to Amazon S3 3 Restoring the backup through the AWS RDS console You can use this migration method for source servers using MySQL versions 55 and 56 For more information and stepbystep instructions for migrating from Percona XtraBackup files see the Amazon Relational Database Service User Guide 3 mysqldump The mysqldump tool is perhaps the most popular export/import tool for MySQLcompatible database engines The tool produces dumps in the form of SQL files that contain data definition language (DDL) data control language (DCL) and data manipulation language (DML) statements The statements carry information about data structures data access rules and the actual data respectively In the context of this whitepaper two types of statements are of interest:  CREATE TABLE statements to create relevant table structures before data can be inserted ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 10  INSERT statements to populate tables with data Each INSERT typically contains data from multiple rows but the dataset for each table is essentially represented as a series of INSERT statements The mysqldump based approach introduces certain issues related to performance:  When used against managed database servers such as Amazon RDS instances the tool’s functionality is limited Due to privilege restrictions it cannot dump data in multiple threads or produce flatfile dumps suitable for parallel loading  The SQL files do not include any transaction control statements by default Consequently you have very little control over the size of database transactions used to import data This lack of control can lead to poor performance for example: o With autocommit mode enabled (default) each individual INSERT statement runs inside its own transaction The database must COMMIT frequently which increases the overall execution overhead o With autocommit mode disabled each table is populated using one massive transaction The approach removes COMMIT overhead but leads to side effects such as tablespace bloat and long rollback times if the import operation is interrupted Note: Work is in progress to provide a modern replacement for the legacy mysqldump tool The new tool called mysqlpump is expected to check most of the boxes on MySQL DBA’s performance checklist For more information see the MySQL Reference Manual 4 Flat Files As opposed to SQLformat dumps that contain data encapsulated in SQL statements flatfile dumps come with very little overhead The only control ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 11 characters are the delimiters used to separate individual rows and columns Files in commaseparated value (CSV) or tabseparated value (TSV) format 
are both examples of the flatfile approach Flat files are most commonly produced using:  The SELECT … INTO OUTFILE statement which dumps table contents (but not table structure) into a file located in the server’s local file system  mysqldump command with the tab parameter which also dumps table contents to a file and creates the relevant metadata files with CREATE TABLE statements The command uses SELECT … INTO OUTFILE internally so it also creates dump files on the server’s local file system Note : Due to privilege restrictions you cannot use the methods mentioned previously with managed database servers such as Amazon RDS However you can import flat files dumped from self managed servers into managed instances with no issues Flat files have two major benefits:  The lack of SQL encapsulation results in much smaller dump files and removes SQL processing overhead during import  Flat files are always created in fileper table fashion which makes it easy to import them in parallel Flat files also have their disadvantages For example the server would use a single transaction to import data from each dump file To have more control over the size of import transactions you need to manually split very large dump files into chunks and then import one chunk at a time ThirdParty Tools and Alternative Solutions The mydumper and myloader tools are two popular opensource MySQL export/import tools designed to address performance issues that are associated ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 12 with the legacy mysqldump program They operate on SQLformat dumps and offer advanced features such as:  Dumping and loading data in multiple threads  Creating dump files in fileper table fashion  Creating chunked dumps that is multiple files per table  Dumping data and metadata into separate files  Ability to configure transaction size during import  Ability to schedule dumps in regular intervals For more information about mydumper and myloader see the project home page5 Efficient exports and imports are possible even without the help of thirdparty tools With enough effort you can solve issues associated with SQLformat or flat file dumps manually as follows:  Solve singlethreaded mode of operations in legacy tools by running multiple instances of the tool in parallel However this does not allow you to create consistent databasewide dumps without temporarily suspending database writes  Control transaction size by manually splitting large dump files into smaller chunks Procedure Optimizations This section describes ways that you can handle some of the common export/import challenges ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 13 Choosing the Right Number of Threads for Multithreaded Operations As mentioned earlier a rule of thumb is to use one thread per server CPU core for exports and one thread per two CPU cores for imports For example you should use 16 concurrent threads to dump data from a 16core dbr34xlarge instance and 8 concurrent threads to import data into the same instance type Exporting and Importing Multiple Large Tables If the dataset is spread fairly evenly across multiple tables export/import operations are relatively easy to parallelize To achieve optimal performance follow these guidelines:  Perform export and import operations using multiple parallel threads To achieve this use a modern export tool such as mydumper described in “ThirdParty Tools and Alternative Solutions ”  
Exporting and Importing Multiple Large Tables

If the dataset is spread fairly evenly across multiple tables, export/import operations are relatively easy to parallelize. To achieve optimal performance, follow these guidelines:

• Perform export and import operations using multiple parallel threads. To achieve this, use a modern export tool such as mydumper, described in "Third-Party Tools and Alternative Solutions."
• Never use single-row INSERT statements for batch imports. Instead, use multi-row INSERT statements or import data from flat files.
• Avoid using small transactions, but also don't let each transaction become too heavy. As a rule of thumb, split large dumps into 500 MB chunks and import one chunk per transaction.

Exporting and Importing Individual Large Tables

In many databases, data is not distributed equally across tables. It is not uncommon for the majority of the data set to be stored in just a few tables, or even a single table. In this case, the common approach of one export/import thread per table can result in suboptimal performance, because the total export/import time depends on the slowest thread, which is the thread processing the largest table. To mitigate this, you must leverage multi-threading at the table level. The following ideas can help you achieve better performance in similar situations.

Large Table Approach for Exports

On the source server, you can perform a multi-threaded dump of table data using a custom export script or a modern third-party export tool such as mydumper, described in "Third-Party Tools and Alternative Solutions."

When using custom scripts, you can leverage multi-threading by exporting multiple ranges (segments) of rows in parallel. For best results, produce segments by dumping ranges of values in an indexed table column, preferably the primary key. For performance reasons, you should not produce segments using pagination (the LIMIT … OFFSET clause). A range-based export sketch follows below.

When using mydumper, know that the tool uses multiple threads across multiple tables, but it does not parallelize operations against individual tables by default. To use multiple threads per table, you must explicitly provide the --rows parameter when invoking the mydumper tool:

--rows: Split tables into chunks of this many rows (default: unlimited)

You can choose the parameter value so that the total size of each chunk doesn't exceed 100 MB. For example, if the average row length in the table is 1 KB, you can choose a chunk size of 100,000 rows for a total chunk size of ~100 MB.
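Here is a minimal sketch of a range-based, multi-threaded export of a single large table using SELECT … INTO OUTFILE. It assumes a numeric auto-incremented primary key named id, a table named orders, a self-managed source server where the MySQL user has the FILE privilege, and a secure_file_priv directory of /var/lib/mysql-files; all names and sizes are illustrative.

#!/bin/bash
# Export table `orders` in primary-key ranges of 100,000 rows each,
# running up to 16 SELECT ... INTO OUTFILE statements in parallel.
# Note: each chunk is dumped in its own transaction; pause writes or
# dump from a replica if a consistent snapshot is required.
DB=appdb
TABLE=orders
CHUNK=100000
MAX_ID=$(mysql -N -e "SELECT MAX(id) FROM ${DB}.${TABLE}")

for ((start=0; start<=MAX_ID; start+=CHUNK)); do
  end=$((start + CHUNK))
  mysql -e "SELECT * FROM ${DB}.${TABLE}
            WHERE id >= ${start} AND id < ${end}
            INTO OUTFILE '/var/lib/mysql-files/${TABLE}.${start}.txt'
            FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'" &
  # Keep at most 16 concurrent export threads (one per CPU core).
  while (( $(jobs -r | wc -l) >= 16 )); do wait -n; done
done
wait

Each chunk lands as its own flat file that can later be loaded with LOAD DATA LOCAL INFILE in a separate transaction. mydumper achieves a similar result without custom scripting when invoked with --rows 100000.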
Large Table Approach for Imports

Once the dump is completed, you can import it into the target server using custom scripts or the myloader tool.

Note: Both mydumper and myloader default to using four parallel threads, which may not be enough to achieve optimal performance on Aurora db.r3.2xlarge instances or larger. You can change the default level of parallelism using the --threads parameter.

Splitting Dump Files into Chunks

You can import data from flat files using a single data chunk (for small tables) or a contiguous sequence of data chunks (for larger tables). Use the following guidelines to decide how to split table dumps into multiple chunks:

• Avoid generating very small chunks (<1 MB) so that you can avoid protocol and transactional overhead. Conversely, very large chunks can put unnecessary pressure on server resources without bringing performance benefits. As a rule of thumb, you might use a 500 MB chunk size for large batch imports.
• For partitioned InnoDB tables, use one chunk per partition and don't mix data from different partitions in one chunk. If individual partitions are very large, split partition data further using one of the following solutions.
• For tables or table partitions with an auto-incremented PRIMARY key:
  o If PRIMARY key values are provided in the dump, it is good practice not to split data in a random fashion. Instead, use range-based splitting so that each chunk contains monotonically increasing primary key values. For example, if a table has a PRIMARY key column called id, data can be sorted by id in ascending order and then sliced into chunks. This approach reduces page fragmentation and lock contention during import.
  o If PRIMARY key values are not provided in the dump, the engine generates them automatically for each inserted row. In such cases, you don't need to chunk the data in any particular way, and you can choose the method that's easiest for you to implement.
• If the table or table partition has a PRIMARY or NOT NULL UNIQUE key that is not auto-incremented, split the data so that each chunk contains monotonically increasing key values for that PRIMARY or NOT NULL UNIQUE key, as described previously.
• If the table does not have a PRIMARY or NOT NULL UNIQUE key, the engine creates an implicit internal clustered index and fills it with monotonically increasing values regardless of how the input data is split. For more information about InnoDB index types, see the MySQL Reference Manual.6

Avoiding Secondary Index Maintenance Overhead

CREATE TABLE statements found in a typical SQL-format dump include the definition of the table's primary key and all secondary keys. Consequently, the cost of secondary index management has to be paid for every row inserted during the import. You can observe the index management cost as a gradual decrease in import performance as the table grows. The negative effects of index management overhead are particularly visible if the table is large or if there are multiple secondary indexes defined on it. In extreme cases, importing data into a table with secondary indexes can be several times slower than importing the same data into a table with no secondary indexes.

Unfortunately, none of the tools mentioned in this document support built-in secondary index optimization. You can, however, implement the optimization using this simple technique (a minimal example follows this list):

• Modify the dump files so that CREATE TABLE statements do not include secondary key or foreign key definitions.
• Import the data.
• Re-create secondary and foreign keys using ALTER TABLE statements or third-party online schema manipulation tools such as pt-online-schema-change from Percona Toolkit. When using ALTER TABLE:
  o Avoid using separate ALTER TABLE statements for each index. Instead, use one ALTER TABLE statement per table to re-create all indexes for that table in a single operation.
  o You may run multiple ALTER TABLE statements in parallel (one per table) to reduce the total time required to process all tables.
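The following is a minimal illustration of the technique, using a hypothetical orders table whose column and index names are made up for the example. The secondary index and foreign key are removed from the CREATE TABLE statement used during the import and added back afterward in a single ALTER TABLE per table.

-- Stripped-down CREATE TABLE used for the import:
-- only the primary key is kept; secondary and foreign keys are removed.
CREATE TABLE orders (
  id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  customer_id BIGINT UNSIGNED NOT NULL,
  created_at  DATETIME NOT NULL,
  status      VARCHAR(20) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

-- ... bulk import runs here (LOAD DATA INFILE, myloader, and so on) ...

-- After the import, re-create all secondary keys for the table
-- in one ALTER TABLE statement rather than one statement per index.
ALTER TABLE orders
  ADD INDEX idx_orders_customer (customer_id),
  ADD INDEX idx_orders_created (created_at),
  ADD CONSTRAINT fk_orders_customer
      FOREIGN KEY (customer_id) REFERENCES customers (id);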
ALTER TABLE operations can consume a significant amount of temporary storage space, depending on the table size and the number and type of indexes defined on the table. Aurora instances use local (per-instance) temporary storage volumes. If you observe that ALTER TABLE operations on large tables are failing to complete, it can be due to a lack of free space on the instance's temporary volume. If this occurs, you can apply one of the following solutions:

• Scale the Aurora instance to a larger type.
• If altering multiple tables in parallel, reduce the number of ALTER statements running concurrently, or try running only one ALTER at a time.
• Consider using a third-party online schema manipulation tool such as pt-online-schema-change from Percona Toolkit.

To learn more about monitoring the local temporary storage on Aurora instances, see the Amazon Relational Database Service User Guide.7

Reducing the Impact of Long-Running Data Dumps

Data dumps are often performed from active database servers that are part of a mission-critical production environment. If the severe performance impact of a massive dump is not acceptable in your environment, consider one of the following ideas:

• If the source server has replicas, you can dump data from one of the replicas.
• If the source server is covered by regular backup procedures:
  o Use backup data as input for the import process, if the backup format allows for that.
  o If the backup format is not suitable for direct importing into the target database, use the backup to provision a temporary database and dump data from it.
• If neither replicas nor backups are available:
  o Perform dumps during off-peak hours, when production traffic is at its lowest.
  o Reduce the concurrency of dump operations so that the server has enough spare capacity to handle production traffic.

Conclusion

This paper discussed important factors affecting the performance of self-managed export/import operations in Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon Aurora:

• The location and sizing of client and server machines
• The ability to consume client and server resources efficiently, which is mostly achieved through multi-threading
• The ability to identify and avoid unnecessary overhead at all stages of the migration process

We hope that the ideas and observations we provide will contribute to creating a better overall experience for data migrations in your MySQL-compatible database environments.

Contributors

The following individuals and organizations contributed to this document:

• Szymon Komendera, Database Engineer, Amazon Web Services

Notes

1. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
2. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html
3. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMigrateMySQL.html#AuroraMigrateMySQL.S3
4. https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html
5. https://launchpad.net/mydumper/
6. https://dev.mysql.com/doc/refman/5.6/en/innodb-index-types.html
7. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Monitoring.html
General
U.S._Securities_and_Exchange_Commissions_SEC_Office_of_Compliance_Inspections_and_Examinations_OCIE_Cybersecurity_Initiative_Audit_Guide
ArchivedUS Securities and Exchange Commissi on’s (SEC) Office of Compliance Insp ections and Examinations (OCIE) Cybersecurity Initi ative Audit Guide October 2015 This paper has been archived For the latest technical guidance on Security and Compliance refer to https://awsamazoncom/architecture/security identitycompliance/ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 2 of 21 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 3 of 21 Contents Executive Summary 4 Approaches for using AWS Audit Guides 4 Examiners 4 AWS Provided Evidence 4 OCIE Cybersecurity Audit Checklist for AWS 6 1 Governance 6 2 Network Configuration and Management 8 3 Asset Configuration and Management 9 4 Logical Access Control 10 5 Data Encryption 12 6 Security Logging and Monitoring 13 7 Security Incident Response 14 8 Disaster Recovery 15 9 Inherited Controls 16 Appendix A: References and Further Reading 18 Appendix B: Glossary of Terms 19 Appendix C: API Calls 20 ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 4 of 21 Executive Summary This AWS US Securities and Exchange Commission’s (SEC) Office of Compliance Inspections and Examinations (OCIE) Cybersecurity Initiative audit guide has been designed by AWS to guide financial institutions which are subject to SEC audits on the use and security architecture of AWS services This document is intended for use by AWS financial institution customers their examiners and audit advisors to understand the scope of the AWS services provide guidance for implementation and discuss examination when using AWS services as part of the financial institutions environment for customer data Approaches for using AWS Audit Guides Examiners When assessing organizations that use AWS services it is critical to understand the “ Shared Responsibility” model between AWS and the customer The audi t guide organizes the requirements into common security program controls and control areas Each control references the applicable audit requirements In general AWS services should be treated similar to onpremise infrastructure services that have been traditionally used by customers for their operating services and applications Policies and processes that apply to devices and servers should also apply when those functions are supplied by AWS services Controls pertaining solely to policy or pr ocedure generally are entirely the responsibility of the customer Similarly management of access to AWS services either via the AWS Console or Command Line API should be treated like other privileged administrator access See the appendix and referenced points for more 
information AWS Provided Evidence AWS services are regularly assessed against industry standards and requirements In an attempt to support a variety of industries including federal agencies retailers international organizations health care providers and financial institutions AWS elects to have a variety of assessments performed ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 5 of 21 against the services and infrastructure For a complete list and information on assessment performed by third parties please refer to AWS Compliance web site Archived Amazon Web Services – OCIE Cybersecurity Audit Guide September 2015 Page 6 of 21 OCIE Cybersecurity Audit Checklist for AWS The AWS compliance program ensures that AWS services are regularly audited against applicable standards Some control statements may be satisfied by the customer’s use of AWS (for instance Physical access to sensitive data) However most controls have either shared responsibilities between AWS and the customer or are entirely the customer’s responsibility This audit checklist describes the customer responsibilities specific to the OCIE Cybersecurity Initiative when utilizing AWS services 1 Governance Definition: Governance includes the elements required to provide senior management assurance that its direction and intent are reflected in the security posture of the customer This is achieved by utilizing a structured approach to implementing an information security program For the purposes of this audit plan it means understanding which AWS services the customer has purchased what kinds of systems and information the customer plan s to use with the AWS service and what policies procedures and plans apply to these services Major audit focus: Un derstand what AWS services and resources are being used by the customer and ensure that the customer ’s security or risk management program has taken into account the ir use of the public cloud environment Audit approach: As part of this audit determine who within the customer’s organization is an AWS account owner and resource owner and what kinds of AWS services and resources they are using Verify that the customer’s policies plans and procedures include cloud concepts and that cloud is included in t he scope of the customers audit program Governance Checklist Checklist Item Documentation and Inventory Verify that the customer ’s AWS network is fully documented and all AWS critical systems are included in their inventory docume ntation with limited access to this documentation  Review AWS Config for AWS resource inventory and configuration history of resources (Example API Call 1)  Ensure that resources are appropriately tagged with a customer’s application and/or customer data ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 7 of 21 Checklist Item  Review application architecture to identify data flows planned connectivity between application components and resources that contain customer data  Review all connectivity between the custome r’s network and AWS Platform by reviewing the following:  VPN connections where the customers on premise Public IPs are mapped to customer gateways in any VPCs owned by the Customer (Example API Call 2 & 3)  Dire ct Connect Private Connections which may be mapped to 1 or more VPCs owned by the customer (Example API Call 4 ) Risk Assessment Ensure the customer’s risk assessment for AWS services includes potential cybersecurity threats vulnerabilities and business consequences  Verify 
that AWS services were included in the customer’s risk assessment and privacy impact assessment  Verify that system characterization was documented for AWS services as part of the risk assessment to identify and rank information assets IT Security Program and Policy Verify that the customer includes AWS services in its security policies and procedures including AWS account level best practices as highlighted within the AWS service Trusted Advisor which provides best practice and guidance across 4 topics – Security Cost Performance and Fault Tolerance  Review the customer’s information securit y policies and ensure that it includes AWS servic es and reflects the Identify Theft Red Flag Rules (17 CFR § 248 — Subpart C —Regulation S ID)  Confirm that the customer has assigned an employee (s) as an authority for the use and security of AWS services and there are defined roles for those noted key roles including a Chief Information Security Officer  Note any published cybersecurity risk management process standards the customer has used to model their information security architecture and processes  Ensure the customer maintains documentation to supp ort the audits conducted for their AWS services including its review of AWS third party certifications  Verify that the customer’s internal training records includes AWS security such as Amazon IAM usage Amazon EC2 Security Groups and remote access to Amazon EC2 instances  Confirm that the customer maintains a cybersecurity response policy and training for AWS services  Note any insurance specifically related to the customers use of AWS services and any claims related to losses and expenses attributed to cybersecurity events as a result ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 8 of 21 Checklist Item Service Provider Oversight Verify that the customer’s contract with AWS includes a requirement to implement and maintain privacy and security safeguards for cybersecurity requirements 2 Network Configuration and Management Definition: Network management in AWS is very similar to network management onpremises except that network components such as firewalls and routers are virtual Customers must ensure that their network architecture follows the security requirements of their organization including the use of DMZs to separate public and private (untrusted and trusted) resources the segregation of resources using subnets and routing tables the secure configuration of DNS whether additional transmission protection is needed in the form of a VPN and whether to limit inbound and outbound traffic Customers who must perform monitoring of their network can do so using host based intrusion detection and monitoring systems Major audit focus: Missing or inappropriately configured security controls related to external access/network security that could result in a security exposure Audit approach: Understand the network architecture of the customer’s AWS resources and how the resources are configured to allow external access from the public Internet and the customer ’s private networks Note: AWS Trusted Advisor can be leveraged to validate and verify AWS configurations settings Network Configuration and Management Checklist Checklist Item Network Controls Identify how network seg mentation is applied within the customers AWS environment  Review AWS Security Group implementation AWS Direct Connect and Amazon VPN configuration for proper implementation of network segmentation and ACL and firewall setting s on AWS 
services (Example API Call 5 8)  Verify that the customer has a procedure for granting remote internet or VPN access to employees for AWS Console access and remote access to Amazon EC2 networks and sy stems ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 9 of 21 Checklist Item  Review the following to ensure the customer maintains an environment for testing and development of software and applications that is separate from its business environment:  VPC isolation is in place between business environment and environments us ed for test and development  VPC peering connectivity is between VPCs This ensure s network isolation is in place between VPCs  Subnet isolation is in place between business environment and environments used for test and development  NACLs are associated with Subnets in which Business and Test/Development environments are located to ensure network isolation is in place subnets  Amazon EC2 instance isolation is in place between the business environment and environments used for test and development  Security Groups associated to 1 or more Instances within the Business Test or Development environments ensure network isolation between Amazon EC2 instances Review the customer’ s DDoS layered defense solution running that operates directly on AWS which are leveraged as part of a DDoS solution such as:  Amazon CloudF ront configuration  Amazon S3 configuration  Amazon Route 53  ELB configuration  The above serv ices do not use Customer owned Public IP addresses and offer DoS AWS inherited DoS mitigation features  Usage of Amazon EC2 for Proxy or WAF Further guidance can be found within the “ AWS Best Practices for DDoS Resiliency Whitepaper ” Malicious Code Controls Assess the implementation and management of anti malware for Amazon EC2 instances in a similar manner as with physical systems 3 Asset Configuration and Management Definition: AWS customers are responsible for maintaining the security of anything they install on or connect to their AWS resources Secure management of the customers ’ AWS resources means knowing what resources the customer is using (asset inventory) securely configuring the guest OS and applications on the customers resources (secure configuration settings patching and antimalware) and controlling changes to the customers resources (change management) ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 10 of 21 Major audit focus: Customers must manage their operating system and application security vulnerabilities to protect the security stability and integrity of the asset Audit approach: Validate the customers OS and applications are designed configured patched and hardened in accordance to the customer’s policies procedures and standards All OS and application management practices can be common between onpremise and AWS systems and services Asset Configuration and Management Checklist Checklist Item Assess configuration management Verify the use of the customer’s configuration management practices for all AWS system components and validate that these standards meet the customer baseline configurations  Review the customer’s procedu re for conducting a specialized wipe procedure prior to deleting the volume for compliance with their established requirements  Review the customers Identity Access Management system which may be used to allow authenticated access to the customer’s applica tions hosted on top of AWS services  Confirm the customer completed penetration testing 
including the scope for the tests Change Management Controls Ensure the customer’s use of AWS services follows the same change c ontrol processes as internal series  Verify that AWS services are included within the customer’s internal patch management process Review documented process es for c onfiguration and patching of Amazon EC2 instances:  Amazon Machine Images (AMIs) (Example API Call 9 10)  Operating systems  Applications  Review the customer’s API Calls for in scope services for delete calls to ensure the customer has properly disposed of IT assets  4 Logical Access Control Definition: Logical access controls determine not only who or what can have access to a specific system resource but the type of actions that can be performed on the resource (read write etc) As part of controlling access to AWS ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 11 of 21 resources users and processes must present credentials to confirm that they are authorized to perform specific functions or have access to specific resources The credentials required by AWS vary depending on the type of service and the access method and include passwords cryptographic keys and certificates Access to AWS resources can be enabled through the AWS account individual AWS Identify and Access Management (IAM) user accounts created under the AWS account or identity federation with the customer’s corporate directory (single sign on) AWS IAM enables a customer ’s users to securely control access to AWS servi ces and resources Using IAM a customer can create and manage AWS users and groups and use permissions to allow and deny their permissions to AWS resources Major audit focus: This portion of the audit focuses on identifying how users and permissions are set up in AWS for the services being used by the customer It is also important to ensure that the credentials associated with all of the customer’s AWS accounts are being managed securely by the customer Audit approach: Validate that permissions for AWS assets are being managed in accordance with organizational policies procedures and processes Note: AWS Trusted Advisor can be leveraged to validate and verify IAM Users Groups and Role configurations Logical Access Control Checklist Checklist Item Access Management Authentication and Authorization Ensure there are internal policies and procedures for managing access to AWS services and Amazon EC2 instances Ensur e the customer documents their use and configuration of AWS access controls examples and options outlined below :  Description of how Amazon IAM is used for access management  List of controls that Amazon IAM is used to manage – Resource management Securi ty Groups VPN object permissions etc  Use of native AWS access controls or if access is managed through federated authentication which leverages the open standard Security Assertion Markup Language (SAML) 20  List of AWS Accounts Roles Groups and Us ers Policies and policy attachments to users groups and roles (Example API Call 11)  A description of Am azon IAM accounts and roles and monitoring methods  A description and configuration of systems within EC2 ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 12 of 21 Checklist Item Remote Access Ensure there is an approval process logging process or controls to prevent unauthorized remote access Note: All access to AWS and Amazon EC2 instances is “remote access” by definition unless Direct Connect has been co nfigured Review the customer’s process 
for preventing unauthorized access which may include:  AWS CloudT rail for logging of Service level API calls  AWS CloudW atch logs to meet logging objectives  IAM Policies S3 Bucket Policies Security Groups for con trols to prevent unauthorized access Review the customer’s connectivity between the customer’s network and AWS:  VPN Connection between VPC and Firms network  Direct Connect (cross connect and private interfaces) between customer and AWS  Defined Secu rity Groups Network Access Control Lists and Routing tables in order to control access between AWS and the customer’s network Personnel Control Ensure that the customer restricts users to those AWS services strictly required for thei r business function (Example API Call 12)  Review the type of access control the customer has in place as it relates to AWS services  AWS access control at an AWS level – using IAM with Tagging to control management of Amazon EC2 instances (start/stop/terminate) within networks  Customer Access Control – using the customer IAM (LDAP solution) to manage access to resources which exist in networks at the Operating System / Application layers  Network Access control – using AWS Security Groups(SGs) Network Access Control Lists (NACLs) Routing Tables VPN Connections VPC Peering to control network access to resources within customer owned VPCs 5 Data Encryption Definition: Data stored in AWS is secure by default; only AWS owners have access to the AWS resources they create However some customers who have sensitive data may require additional protection by encrypting the data when it is stored on AWS Only Amazon S3 service currently provides an automated server side encryption function in addition to allowing customers to encrypt on the customer side before the data is stored For other AWS data storage options the customer must perform encryption of the data ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 13 of 21 Major audit focus: Data at rest should be encrypted in the same way as the customer protects onpremise data Also many security policies consider the Internet an insecure communications medium and would require the encryption of data in transit Improper protection of customers ’ data could create a security exposure for the customer Audit approach: Understand where the data resides and validate the methods used to protect the data at rest and in transit (also referred to as “data in flight”) Note: AWS Trusted Advisor can be leveraged to validate and verify permissions and access to data assets Data Encryption Checklist Checklist Item Encryption Controls Ensure there are appropriate controls in place to protect confidential customer information in transport while using AWS services  Review methods for connection to AWS Console management A PI S3 RDS and Amazon EC2 VPN for enforcement of encryption  Review internal policies and procedures for key management including AWS services and Amazon EC2 instances  Review encryption methods used if any to protect customer PINs at Rest – AWS offer s a number of key management services such as KMS AWS CloudHSM and Server Side Encryption for S3 which could be used to assist with data at rest encryption (Example API Call 13 15) 6 Security Logging and Monitoring Definition: Audit logs record a variety of events occurring within a customer ’s information systems and networks Audit logs are used to identify activity that may impact the security of those systems whether in realtime or after the fact so the pro per 
configuration and protection of the logs is important Major audit focus: Systems must be logged and monitored just as they are for onpremise systems If AWS systems are not included in the overall company security plan critical systems may be omitted from scope for monitoring efforts Audit approach: Validate that audit logging is being performed on the guest OS and critical applications installed on the customers Amazon EC2 instances and that implementation is in alignment with the customer’s policies and procedures especially as it relates to the storage protection and analysis of the logs ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 14 of 21 Security Logging and Monitoring Checklist: Checklist Item Logging Assessment Trails and Monitoring Review logging and monitoring policies and procedures for adequacy retention defined thresholds and secure maintenance specifically for detecting unauthorized activity within AWS services  Review the customer’s logging and monitoring policies and procedures and ensure their inc lusion of AWS services including Amazon EC2 instances for security related events  Verify that logging mechanisms are configured to send logs to a centralized server and ensure that for Amazon EC2 instances the proper type and format of logs are retain ed in a similar manner as with physical systems  For customers usi ng AWS CloudWatch review the customer’s process and record of their use of network monitoring  Ensure the customer utilizes analytics of events to improve their de fensive measures and pol icies  Review AWS IAM Credential report for unau thorized users AWS Config and resource tagging for unauthorized devices (Example API Call 16)  Confirm the customer aggregates and correlates event data from multipl e sources The customer may use AWS services such as: a) VPC Flow logs to identify accepted/rejected network packets entering VPC b) AWS CloudT rail to identify authenticated and unauthenticated API calls to AWS services c) ELB Logging – Load balancer logging d) AWS CloudF ront Logging – Logging of CDN distributions Intrusion Detection and Response Review host based IDS on Amazon EC2 instances in a similar manner as with physical systems  Review AWS provided evidence on where information on intru sion detection processes can be reviewed 7 Security Incident Response Definition: Under a Shared Responsibility Model security events may be monitored by the interaction of both AWS and AWS customers AWS detects and responds to events impacting the hypervisor and the underlying infrastructure Customers manage events from the guest operating system up through the application The customer should understand incident response responsibilities and adapt existing security monitoring/alerting/audit tools and processes for their AWS resources ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 15 of 21 Major audit focus: Security events should be monitored regardless of where the assets reside The auditor can assess consistency of deploying incident management controls across all environments and validate full coverage through testing Audit approach: Assess existence and operational effectiveness of the incident management controls for systems in the AWS environment Security Incident Response Checklist: Checklist Item Incident Reporti ng Ensure that the customer’s incident response plan and policy for cybersecurity incidents includes AWS services and addresses controls that mitigate cybersecurity incidents and recovery  
Ensure the customer is leveraging existing incident monitoring to ols as well as AWS available tools to monitor the use of AWS services  Verify that the Incident Response Plan undergoes a periodic review and that changes related to AWS are made as needed  Note if the Incident Response Plan has customer notification pro cedures and how the customer addresses responsibility for losses associated with attacks or instructions impacting customers 8 Disaster Recovery Definition: AWS provides a highly available infrastructure that allows customers to architect resilient applications and quickly respond to major incidents or disaster scenarios However customers must ensure that they configure systems that require high availability or quick recovery times to take advantage of the multiple Regions and Availability Zones that AWS offers Major audit focus: An unidentified single point of failure and/or inadequate planning to address disaster recovery scenarios could result in a significant impact to the customer While AWS provides service level agreements (SLAs) at the individual instance/service level these should not be confused with a customer’s business continuity (BC) and disaster recovery (DR) objectives such as Recovery Time Objective (RTO) Recovery Point Objective (RPO) The BC/DR parameters are associated with solution design A more resilient design would often utilize multiple components in different AWS availability zones and involve data replication ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 16 of 21 Audit approach: Understand the DR strategy for the customer’s environment and determine the faulttolerant architecture employed for the customer ’s critical assets Note: AWS Trusted Advisor can be leveraged to validate and verify some aspects of the customer’s resiliency capabilities Disaster Recovery Checklist : Checklist Item Business Continuity Plan (BCP) Ensure there is a comprehensive BCP for A WS services utilized that addresses mitigation of the effects of a cybersecurity incident and/or recover y from such an incident  Within the Plan ensure that AWS is included in the customer’s emergency preparedness and crisis management elements senior m anager oversight responsibilities and the testing plan Backup and Storage Controls Review the customer’s periodic test of their backup system for AWS services (Example API Call 17 18)  Review i nventory of data backed up to AWS services as off site backup 9 Inherited Controls Definition: Amazon has many years of experience in designing constructing and operating largescale datacenters This experience has been applied to the AWS platform and infrastructure AWS datacenters are housed in nondescript facilities Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance intrusion detection systems and other electronic means Authorized staff must pass twofactor authentication a minimum of two times to access datacenter floors All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff AWS only provides datacenter access and information to employees and contractors who have a legitimate business need for such privileges When an employee no longer has a business need for these privileges his or her access is immediately revoked even if they continue to be an employee of Amazon or Amazon Web Services All physical access to datacenters by AWS employees is 
logged and audited routinely Major audit focus: The purpose of this audit section is to demonstrate that the customer conducted the appropriate due diligence in selecting service providers ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 17 of 21 Audit approach: Understand how the customer can request and evaluate thirdparty attestations and certifications in order to gain reasonable assurance of the design and operating effectiveness of control objectives and controls Inherited Controls Checklist Checklist Item Physical Security & Environmental Controls Review the AWS provided evidence for details on where information on intrusion detection processes can b e reviewed that are managed by AWS for physical security controls ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 18 of 21 Appendix A: References and Further Reading 1 Amazon Web Services: Introduction to AWS Security https://d0awsstaticcom/whitepapers/Security/Intro_to_AWS_Security pdf 2 Amazon Web Services Risk and Compliance Whitepaper – https://d0awsstaticcom/whitepapers/compliance/AWS_Risk_and_Com pliance_Whitepaperpdf 3 Using Amazon Web Services for Disaster Recovery http://d36cz9buwru1ttcloudfrontnet/AWS_Disaster_Recoverypdf 4 Identity federation sample application for an Active Directory use case http://awsamazoncom/code/1288653099190193 5 Single Signon with Windows ADFS to Amazon EC2 NET Applications http://awsamazoncom/articles/3698?_encoding=UTF8&queryArg=sear chQuery&x=20&y=25&fromSearch=1&searchPath=all&searchQuery=iden tity%20federation 6 Authenticating Users of AWS Mobile Applications with a Token Vending Machine http://awsamazoncom/articles/4611615499399490?_encoding=UTF8& queryArg=searchQuery&fromSearch=1&searchQuery=Token%20Vending %20machine 7 ClientSide Data Encryption with the AWS SDK for Java and Amazon S3 http://awsamazoncom/articles/2850096021478074 8 AWS Command Line Interface – http://docsawsamazoncom/cli/latest/userguide/clichapwelcomehtml 9 Amazon Web Services Acceptable Use Policy http://awsamazoncom/aup/ ArchivedAmazon Web Services – OCIE Cybersecurity Audit Guide October 2015 Page 19 of 21 Appendix B: Glossary of Terms API: Application Programming Interface (API) in the context of AWS These customer access points are called API endpoints and they allow secure HTTP access (HTTPS) which allows you to establish a secure communication session with your storage or compute instances within AWS AWS provides SDKs and CLI reference which allows customers to programmatically manage AWS services via API Authentication: Authentication is the process of determining whether someone or something is in fact who or what it is declared to be Availability Zone: Amazon EC2 locations are composed of regions and Availability Zones Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive low latency network connectivity to other Availability Zones in the same region EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud It is designed to make webscale cloud computing easier for developers Hypervisor: A hypervisor also called Virtual Machine Monitor (VMM) is software/hardware platform virtualization software that allows multiple operating systems to run on a host computer concurrently IAM: AWS Identity and Access Management (IAM) enables a customer to create multiple Users and manage the permissions for each of these Users 
within their AWS Account.

Object: The fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe the object. These include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. The developer can also specify custom metadata at the time the object is stored.

Service: Software or computing ability provided across a network (e.g., EC2, S3, VPC, etc.).

Appendix C: API Calls

The AWS Command Line Interface is a unified tool to manage your AWS services. Read more: http://docs.aws.amazon.com/cli/latest/reference/index.html#cli-aws and http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html

1. List all resources with tags:
aws ec2 describe-tags
http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html

2. List all Customer Gateways on the customer's AWS account:
aws ec2 describe-customer-gateways --output table

3. List all VPN connections on the customer's AWS account:
aws ec2 describe-vpn-connections

4. List all customer Direct Connect connections:
aws directconnect describe-connections
aws directconnect describe-interconnects
aws directconnect describe-connections-on-interconnect
aws directconnect describe-virtual-interfaces

5. List all Customer Gateways on the customer's AWS account:
aws ec2 describe-customer-gateways --output table

6. List all VPN connections on the customer's AWS account:
aws ec2 describe-vpn-connections

7. List all customer Direct Connect connections:
aws directconnect describe-connections
aws directconnect describe-interconnects
aws directconnect describe-connections-on-interconnect
aws directconnect describe-virtual-interfaces

8. Alternatively, use the Security Group-focused CLI:
aws ec2 describe-security-groups

9. List AMIs currently owned/registered by the customer:
aws ec2 describe-images --owners self

10. List all instances launched with a specific AMI:
aws ec2 describe-instances --filters "Name=image-id,Values=XXXXX"
(where XXXXX = image-id value, e.g., ami-12345a12)

11. List IAM Roles/Groups/Users:
aws iam list-roles
aws iam list-groups
aws iam list-users

12. List policies assigned to Groups/Roles/Users:
aws iam list-attached-role-policies --role-name XXXX
aws iam list-attached-group-policies --group-name XXXX
aws iam list-attached-user-policies --user-name XXXX
(where XXXX is a resource name within the customer's AWS account)

13. List KMS keys:
aws kms list-aliases

14. List the key rotation policy:
aws kms get-key-rotation-status --key-id XXX
(where XXX = key ID in the AWS account)

15. List EBS volumes encrypted with KMS keys:
aws ec2 describe-volumes --filters "Name=encrypted,Values=true"
(targeted at a specific Region, e.g., us-east-1)

16. Credential report:
aws iam generate-credential-report
aws iam get-credential-report

17. Create a snapshot/backup of an EBS volume:
aws ec2 create-snapshot --volume-id XXXXXXX
(where XXXXXXX = ID of a volume within the AWS account)

18. Confirm the snapshot/backup completed:
aws ec2 describe-snapshots --filters "Name=volume-id,Values=XXXXXX"
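As a usage illustration, the sketch below gathers a few of the evidence items above into local files so they can be attached to an examination workpaper. It is a minimal example, assuming the AWS CLI is installed and configured with read-only examiner credentials; the output directory and file names are arbitrary.

#!/bin/bash
# Collect selected OCIE evidence items into a local directory.
set -euo pipefail
OUTDIR=ocie-evidence-$(date +%Y%m%d)
mkdir -p "$OUTDIR"

# Item 1: resource inventory by tag
aws ec2 describe-tags --output json > "$OUTDIR/ec2-tags.json"

# Items 3 and 8: network connectivity and security group rules
aws ec2 describe-vpn-connections --output json > "$OUTDIR/vpn-connections.json"
aws ec2 describe-security-groups --output json > "$OUTDIR/security-groups.json"

# Item 16: IAM credential report (generate, then download and decode)
aws iam generate-credential-report
sleep 10   # the report may take a few seconds to become available
aws iam get-credential-report --query Content --output text \
  | base64 --decode > "$OUTDIR/credential-report.csv"

echo "Evidence written to $OUTDIR"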
General
AWS_Cloud_Transformation_Maturity_Model
ArchivedAWS Cloud Transformation Maturity Model September 2017 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or l icensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Project Stage 3 Challenges and Barriers 4 Transformation Activities 5 Outcomes and Maturity 7 Foundation Stage 8 Challenges and Barriers 8 Transformation Activities 9 Outcomes and Maturity 10 Migration Stage 11 Challenges and Barriers 11 Transformation Activities 12 Outcomes and Maturity 14 Optimization Stage 15 Challenges and Barriers 15 Transformation Activities 16 Outcomes and Maturity 17 Conclusion 18 Contributors 18 Document Revisions 19 Archived Abstract The AWS Cloud Transformation Maturity Model (CTMM) maps the maturity of an IT organization’s process people and technology capabilities as they move through the four stages of the journey to the AWS Cloud : project foundation migration and optimization The objective of the CTMM is to help enterprise IT organizations understand the significant challenges they might face as they adopt AWS learn best practice s and activities to handle those challenges and recognize the signs of maturity or expected outcomes to gauge their maturity and readiness at every stage This whitepaper guide s organizations to measur e their readiness for the AWS Cloud build an effective cloud transformation strategy and drive a n effective execution plan ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 1 Introduction The Amazon Web Services ( AWS) Cloud Transformation Maturity Model (CTMM) is a tool enterprise customers can use to assess the maturity of their cloud adoption through four key stages : project foundation migration and optimization Each stage brings an organization ’s people processes and technologies closer to realizing its vision of ITasaService (ITaaS) To fully benefit from the AWS C loud the whole organization has to transform and adopt the cloud —not just the IT division Figure 1 shows the key AWS CTMM activities and when they occur during the four stages of cloud transformation Figure 1 : AWS Cloud T ransform ation Maturi ty Model – stages milestones and timeline The four stages of cloud transformation are described in detail in this paper Table 1 provides a mat urity matrix of the challenges key transformation activities and outcomes at each stage of the AWS CTMM ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 2 Table 1: AWS Cloud T ransformation Maturity Matrix Maturity Stage Customer Challenges Transformation Activities Outcomes/Milestones of Maturity Project Limited knowledge of AWS services Raise level 
of AWS awareness via education and training Organization knowledge and support Limited executive support for new IT investment Seek case studies of proven return on investment ( ROI) and participate in AWS executive briefings Executive support and appropriate funding Unable to purchase required services Use current services or create new contract Educate procurement and legal staff about new purchasing paradigms when procur ing cloud services and tools1 Ability to purchase all required services Limited confidence in cloud service capabilities Execute one or more pilot/POC project s Increased confidence and fewer concerns No clear ownership or direction Conduct a Kickoff and Discovery Workshop IT ownership with clear strategy and direction Foundation Assigning the required resources to effectively drive the transformation Conduct a People Model Workshop and establish a CCoE Dedicated resources to define policies architecture Lack of a detailed organizational transformation plan Conduct a Governance Model Workshop and a Migration Jumpstart Detailed plan for all aspects of the transformation (People Process and Technology ) Limited knowledge of security and compliance paradigms and requirements in the cloud Conduct an AWS Security Risk and Compliance Workshop Best practice security policies architecture and procedures Cost and budget management requirements and concerns Conduct an AWS Cost Model Workshop Detailed TCO for proposed operating environment Migration Developing an effective and efficient migration strategy Conduct an Application Portfolio Assessment Jumpstart A migration strategy with a clear line of sight from current to target state environment ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 3 Maturity Stage Customer Challenges Transformation Activities Outcomes/Milestones of Maturity Implementing an effective and efficient migration process Select and implement best migration environment A cost efficient and effective application migration process Managing environment efficiently and effectively Select and implement best management environment A cost efficient and effective portfolio management with robust governance and security Migrating all targeted applications ( AllIn ) successfully Migrate workloads using AWS/Partner implementation tools and services Allin – organization achieving significant benefits Optimization Optimizing cost management Leverage AWS tools and features to continuous ly improv e operational costs (eg consol idated billing Reserved Instances discounts ) Focused and robust processes in place to continuous ly seek ways to optimize costs Optimizing service management Utilize latest AWS tools to continuously improve service management methods/processes Fully optimized service management and increased customer satisfaction Optimizing application management services Utilize AWS best practices and tools (eg DevOps CI/CD) to continuously improve application management methods/tools Rigorous emphasis on optimized application management services Optimizing enterprise services Continuously seek ways to aggregate and improve shared services Optimized enterprise services and customer satisfaction Project Stage The project stage begins the transformation journey for your organization Organizations in this stage usually have limited knowledge of c loud services and their potential costs and benefits and typically they don’t have a centralized cloud adoption strategy Getting through this initial stage is crucial to the ultimate success for your 
organization’s journey to the cloud T he outcomes realized and lessons learned ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 4 here lay the strong foundation for broader cloud adoption at all organizational levels Challenges and Barriers Your organization needs to overcome t he following key challenges and barriers during this stage of the transformation : • Limited knowledge and training – IT s taff and their internal customers are accustomed to the older model and related process of acquiring and consuming IT Significant investment in training is required for IT staff and other business units to adopt the cloud model • Executive support and fund ing – IT leaders have traditionally framed IT infrastructure investments as a necessary evil to gain funding approval for signi ficant infrastructure upgrades As a result e xecutives are often skeptical and resistant to any new funding In addition executives constantly hear complaints from IT customers ( that is the other business units ) about rising costs poor service delivery and fail ed or failing project implementations • Purchasing public cloud services – IT leaders face the challenge of establishing new contracts or leveraging existing contracts with specific terms and conditions to purchase cloud services A significant obstacle can be the lack of awareness among the procurement and legal staff about purchasing paradigms for cloud services In addition IT leaders have to ensure that new contracts meet the competitive bidding laws of their jurisdiction which can be a long and complex process • Limited confidence in cloud service models – Cloud service infrastructure provisioning and management operation models are significantly different from the traditional on premise s operating model Your IT group might require hands on experience before it is ready to support the transformation effort If your IT group resists change or isn’t enthusiastic about changing to the cloud model your transformation initiative could be significantly undermine d • IT ownership and direction – IT leaders have many leadership challenges including shadow IT where other business units set up their own IT operations IT leaders have to gain control of central IT ownership ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 5 and communicat e a clear transformation roadmap to all organization stakeholders Transformation Activities To overcome the challenges and barriers in the project stage and mature to the foundation stage your organization must complete the following transformation activities : • Contact an AWS account manager – An AWS a ccount manager is a key resource and a single point of contact who can connect you with AWS Partners and professional services to address all of your AWS needs To get in touch with an AWS account manager go to Contact Us 2 • Raise the level of AWS awareness – There are many AWS events3 and education and training resources for your organization’s stakeholders including: o AWS Business Essentials – This training helps your IT business leaders and professionals understand the benefits of cloud computing from the strategic business value perspective For more information see the AWS Business Essentials website4 o Online videos and hands on labs – AWS offers a series of free ondemand instructional videos and labs to help you learn about AWS in minutes5 In addition qwikL ABS provide hands on practice with popular AWS Cloud services and real world scenarios 6 To learn more about AWS services 
and features from AWS engineers and solution s architects and to hear customer perspectives visit the AWS YouTube Channel 7 o AWS Technical Essentials – This training provides an overview of AWS services and solutions to your technical users to give them the information they need to make informed decisions about the IT solutions for your organization For more information see the AWS Technical Essentials website8 o AWS whitepapers – The comprehensive online collection of AWS Whitepapers cover s a broad range of technical topics including best practices for solving business problems architectures security compliance and cloud economics9 ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 6 o AWS trainings – AWS offers an array of instructor led technical trainings to help your teams develop the skills to design deploy and operate infrastructure and applications i n the AWS C loud Please visit AWS Training and Certification for more information10 Table 2 : AWS rec ommended educational resources for roles in your organization Role Resources IT leadership team AWS Business Essentials Online Videos and Labs AWS Whitepapers IT staff AWS Business Essentials Online Videos and Labs AWS T echnical Essentials AWS W hitepapers AWS Training and Certification IT customers AWS Business Essentials Online Videos and Labs AWS Whit epapers • Secure executive support and funding – AWS offers cost and value modeling workshops to provide you with estimated costs and strategic value so you can perform a costbenefit analysis as a basis for securing executive support and funding In addition numerous case studies 11 and whitepapers demonstrate proven cost savings and agility benefits for customers of all sizes in virtually every market segment • Consider purchasing o ptions – You can buy AWS Cloud services12 the following ways: o Direct purchase from AWS – Start using AWS services within minutes by opening an account online in accordance with the AWS Terms and Conditions o Indirect p urchase from an AWS Partner – Acquire AWS via Partner contract vehicles to serve the needs of federal state and l ocal governments as well as the education sector F or more information see the AWS whitepaper Ten Considerations for a Cloud Procurement13 the contracts web page AWS Public Sector Contract Center14 or send an email to aws wwps contract mgmt@amazoncom ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 7 • Execute a pilot or proof ofconcept (POC) project – Most customers leverage one or more pilot or POC projects to test AWS implementation on representative workloads AWS supports such initiatives by providing accelerator service s such as an AWS Migration Jumpstart to provide the end toend knowledge transfer of an actual workload migration In addition for customers working with an AWS Partner the AWS POC Program is another avenue to get funding for POC projects executed via eligible AWS Partners F or more information see the Partner Funding webpage 15 • Conduct an IT Transformation Workshop – This workshop enable s rapid cloud adoption by showing you how to replace uncertainty with a vision and strategy on how to derive value from AWS The workshop is an interactive educational experience where you can clearly identify business drivers objectives and blockers This helps you build a cloud adoption roadmap to guide you through the next steps in your journey to the cloud Outcomes and Maturity Use t he following key outcomes to measure your organization’s maturity and readiness to 
proceed to the foundation stage : • Effective use of AWS resources – The AWS account manager works with your organization to coordinate the appropriate AWS professional services onsite presentations and meetings onsite training web service accounts and support • Knowledgeable and trained o rganization – Your IT leadership team is familiar with AWS its costs and benefits and transformation best practices Key IT staff members have some hands on experience with AWS services and IT customers have basic knowledge of AWS features and capabilities • Executive support and funding – Your IT leadership team has presented a sound business case for funding the cloud transformation initiative to your organization’s executive leadership This business case typically includes a cost benefit analysis customer reference examples and risk management assessments ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 8 • Ability to p urchase AWS and AWS profe ssional services – Your IT team has work ed with the AWS account manager to identify an existing contract vehicle via an AWS Partner or to put a new contract in place 16 • IT staff confidence and true buyin – The POC was executed successfully and addressed the concerns of your key IT staff who se complete support is crucial to effectively transform the organization • Central IT ownership and a clear transformation roadmap – Centralized ownership of the cloud initiative has emerged and all of your stakeholders participated in an IT Transformation Workshop The IT leaders have a clear vision and a transformation roadmap has been communicated to key stakeholders across the organization The roadmap provides direction on establishing preliminary AWS governance policies that mitigate the risks of business units moving ahead Foundation Stage The foundation stage is characterized by the customer’s intent to move forward with migration to AWS with executive spo nsorship some experience with AWS services and partially trained staff During this stage the customer’s environment is assessed all contractual agreements are in place and a plan is created for the migration The migration plan details the business case in scope workloads approach to migration resources required and the timeframe Challenges and Barriers Your organization must overcome t he following key challenges and barriers during this stage: • Assigning transformation support resources – Effective execution in this stage requires a significant amount of time from key IT staff who are knowledgeable and trusted to provide input into decisions concerning architecture security and governance This can be challenging because IT organizations are constantly inundated with competing priori ties related to managing the current environment This situation is further compounded by the limited number of key infrastructure security and service management staff • Providing leadership through a transformation plan – IT leaders are challenged with the daunting task of developing a transformation plan ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 9 that addresses all aspects of organization al change including business governance architecture service delivery operations roles and responsibili ties and training • Integrating s ecurity and compliance p olicies – IT organizations are challenged with integrating AWS into their existing security and control framework that supports their current IT environment They are also challenged with configurin g AWS to be in compliance with 
regulatory requirements • Managing c ost and budget – IT organizations are challenged to develop a budget aligned with the OpEx model of utility computing measurable benefit goals and an effective cost management process Transformation Activities We recommend t he following transformation activities to achieve the necessary outcomes before moving to the migration stage: • Establish a Cloud Center of Excellence (CCo E) – AWS recommends strong governance practices using a CCoE We recommend that you staff the CCoE gradually with a dedicated team that has the following core responsibilities: o Defining central policies and strategy o Providing support and knowledge transfer to business units using hybrid cloud solutions o Creating and provisioning AWS accounts for workload/program owners o Providing a central point of access control and security standards o Creating and managing common use case architectures (blueprints) The use of a CCoE lowers the implementation and migration risk across the organization and serves as a conduit for sharing the best practices for a broader impact of cloud transformation throughout the organization • Develop security and compliance architecture – AWS Prof essional Services helps your organization achieve risk management and compliance goals Prescriptive guidance enables you to adopt rigorous ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 10 methods for implementing security and compliance processes for systems and personnel • Develop a value management plan – Developing a robust value management model is a key activity that includes tactical benefits ( cost management prioritization of IT spending and a system of allocating costs ) and strategic value from the cloud (agility time to market ITaaS innova tion) When you have a plan you can focus on and prioritize initiatives (see Figure 2) For example with AWS you can view specific IT operating costs and system performance data AWS also enables allocati on to specific business groups or specific applicat ions in near real time Figure 2 : Strategic and t actical values of AWS adoption identified Outcomes and Maturity Use t he following key outcomes to measure your organization ’s readiness to move to the migration stage : • CCoE for Cloud Governance – The central CCOE provides the following benefits: o Standardization of s trategy and v ision – Centralization allows a single point of cloud strategy that is aligned with the larger business requirements of the wider organization o Centralized expertise – A central cloud team can be trained quickly in specialized cloud technologies while individual business areas are still getting up to speed ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 11 o Standardization of t echnical processes and procedures – A central team owns the responsibility for standard processe s procedures and blueprints which can include the use of automation and other methods to simplify and standardize deployments by application owners o Bias for a ction – A central cloud team has a vested interest in making sure that the cloud computing model is successful whereas decentralized business units might be less effective if they don’t realize a direct benefit • Clear transformation roadmap – A transformation roadmap establishes a plan identifies resourc es and provides details about migration activities The roadmap is used to define the ordering and dependencies of your initiatives to achieve t he goals set by the CCo E steering c ommittee or 
program management • Best practice security and compliance architecture – A highly scalable best practice architecture design is created that supports all policy and regulatory compliance requirements • Strong value management plan – A value management plan determines and describe s how you quantify value and identifies the areas where the project team s should focus Migration Stage The migration stage is where your organization matures overall with governance technical and operational foundation in place to effectively and efficiently migrate targeted application s Dur ing this stage the building blocks of the migration and operational tools are implemented and the mass migration of inscope workloads is completed Significant risks exist at this stage such as project delays budget overruns and application failures If the appropr iate migration strategies tools and methods are not implemented there is also a risk that customer confidence and support will diminish Challenges and Barriers Your organization must overcome t he following key challenges and barriers during this stage: ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 12 • Developing an e ffective and efficient m igration strategy – Your organization is challenged to implement a strategy that mini mizes the risk of project failures and maximizes ROI Many ambitious IT projects fail because they are based on inappropriate strategies and plans It’s critical to classify sequenc e and have an appropriate migration disposition for your targeted application workloads to ensure the success of the overall implementation pl an • Implementing a robust migration process – Your organization is challenged to implement a migration execution process that minimizes cost and is repeatable and sustainable The selection and implementation of proven migration tools and methods is a ke y factor in your organization ’s ability to minimize the risks associated with migrating targeted application workloads • Setting up a c loud environment – Your organization is challenged to implement a cloud environment that is controlled sustainable reliable and enables improved agility This challenge includes leveraging existing tools and processes as well as developing new tools and processes • Going allin – Your organization is challenged to implement process es that enable the effe ctive and efficient migration of all application workloads onto AWS on time and within budget Like all projects the risk is that technical failures unsustainable processes and performance failures could create significant project delays and unplanned costs Transformation Activities We recommend t he following transformation activities to achieve the outcomes in this stage and mature to the optimization stage : • Conduct a portfolio assessment – Your organization must go through a portfolio rationalization exercise to determine which applications to migrate r eplace or in some cases eliminate Figure 3 illustrates decision points to consider in determining the strategy for moving each application to the AWS Cloud focusing on the 6 Rs : retire retain rehost replatform repurchase and refactor ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 13 Figure 3 : Application migration dispositions and paths identified from migration strategy Table 3 describes the transformation impact of the 6 Rs in the order of their execution complexity Table 3: Cloud m igration strategies and corresponding levels of complexity for execution Migration Pattern 
Transformation Impact Complexity Refactoring Rearchitecting and recoding require investment in new capabilities delivery of complex programs and projects and potentially significant business disruption Optimization for the cloud should be realized High Replatforming Amortization of transformation costs is maximized over larger migrations Opportunities to address significant infrastructure upgrades can be realized This has a positive impact on compliance regulatory and obsolescence drivers Opportunities to optimize in the cloud should be realized High Repurchasing A replacement through either procurement or upgrade Disposal commissioning and decommissioning costs may be significant Medium Rehosting Typically referred to as lift and shift or forklifting Automated and scripted migrations are highly effective Medium Retiring Decommission and archive data as necessary Low Retaining This is the do nothing option Legacy costs remain and obsolescence costs typically increase over time Low ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 14 • Implement a m igration environment – In addition to the migration strategy your organization must develop a migration process for each application workload These processes include application migration tools data migration tools validation methods and roles and responsibilities In addition to other criteria such as business criticality and architecture each application is classified by migration method and process For example Figure 3 shows how you can migrate applications using AWS VM Import /Export or third party migration tools or by manually moving the code and data • Implement a best management environment – Your organization must develop and implement an effective cloud governance and operating model that addresses your organization’s nee d from the standpoint of access security compliance and automation • Migrate targeted workloads – AWS recommends using the principles of agile methodology to effectively execute and manage the migration of workloads from end to end This requires that y our organization plan schedule and execute migrations in repeatable sprints incorporating lessons learned after every sprint Each migration sprint should go through an appropriate acceptance test and change control process Outcomes and Maturity Use t he following key outcomes to measure your organization’s maturity in this stage and assess the organization’s readiness to progress to the optimization stage : • Allin with AWS – This means that the organization has declared that AWS is its primary cloud host for both legacy and new applications T his is a strategic long term direction from executive leadership to stop managing data centers and migrat e all targeted application workloads to AWS • IT as a Service (ITaaS) – Your organization is realizing the core benefits of cloud adoption : measurable cost savings agility and innovation Your organization is now effectively prov iding IaaS based services as a part of an ITaaS delivery organization ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 15 Optimization Stage The optimization stage is the fourth stage in the transformation maturity model To reach this stage your organization has successfully migrated all targeted application workloads ( that is it is allin on AWS) and is efficiently managing the AWS environment and service delivery process Thi s phase is an ongoing loop not a destination The objective of this phase is to optimize existing process es by lowering costs 
improving service and extending AWS value deeper into your organization The focus on continuous service improvement enables you to realize the true value of utility computing where you constantly seek optimiz ation and addition of newer AWS services to drive cost and performance efficiencies Challenges and Barriers Your organization must overcome t he following key challenges and barriers during this phase of the transformation journey: • Optimize costs – Reducing and optimizing costs are not new challenges to the IT world With AWS your organization can finally realize those benefits AWS and third party providers frequently re lease new features and services including various discounting/consumption based models that you can evaluate for efficacy within your organization For example by evaluating application and database licensing fees that are often overlooked your organization can realize significant costreduction opportunities available with a cloud based payasyougo model • Optimize operation services – Your organization will be challenged to continuously improve the service delivery model for provisioning change control and managing the environment AWS and third party providers frequently release new features (eg automation templates) and services that you can investigate to improve automation and repeatability of tasks • Optimize application services – Your organization will be challenged to continuously improve application services that you use to build and enhance applications AWS and third party providers frequently release new features and services that your organization can evaluate to further optimiz e application services ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 16 • Optimize enterprise services – O rganization s are constantly challenged to seek Software asaService ( SaaS )based offerings as opposed to hosted solutions to continuously improve enterprise application services AWS and third party providers innovat e at a rapid pace adding services and features (eg managed databases virtual desktop email and document management) that can simplify your enterprise services Transformation Activities Your organization should complete t he following transformation activities to achieve the outcomes that your organization needs to continuously maximize maturity and value: • Implement a continuous cost optimization process – Either the designated resources on a CCo E or a group of centralized staff from IT Finance must be trained to support an ongoing process using AWS or third party cost management tools to assess costs and optimize savings • Implement a continuous operation management optimization process – Your organization should evaluate ongoing advancements in AWS services as well as thirdparty tools to pursue continuous improvement to operation management and service delivery process es • Implement a continuous applicati on service optimization process – Your organization should evaluate ongoing advancements in AWS services and features including thirdparty offerings to seek continuous improvement to the application service process Your organization might not use the AWS fully managed a pplication service solutions to migrat e existing application s but these services provide significant value in new application development AWS a pplication service offerings include the following : o Amazon API Gateway – A fully managed ser vice that makes it easy for developers to create publish maintain monitor and secure APIs at any scale o Amazon AppStream 20 
– E nables you to stream your existing Windows applications from the cloud reaching more users on more devices without code modifications ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 17 o Amazon Elasticsearch Service (Amazon ES) – This fully managed service makes it easy to deploy operate and scale Amazon ES for log analytics full text search application monitoring and more o Amazon Elastic Transcoder – M edia transcoding in the cloud This service is designed to be a highly scalable easy touse and cost effective way for developers and businesses to convert (that is transcode) media files from their source format into formats required by consumer playback devices such as smartphones tablets and PCs • Implement a continuous enterprise s ervice optimization process – AWS continually innovat es and launch es additional enterprise applications that your organization should consider implementing to achieve ease ofuse and enterprise grade security without the burden of managing maintenance overhead For example AWS enterprise services applications include: o Amazon WorkSpaces – A managed desktop cloud computing service o Amazon WorkDocs – A fully managed secure enterprise s torage and sharing service with strong administrative controls and feedback capabilities that improve user productivity o Amazon WorkMail – A secure managed business email and calendar service with support for existing desktop and mobile email clients Outcomes and Maturity Use t he following transformation outcomes to measure your organization’s maturity as optimized and continuously maximizing maturity and value: • Optimized cost savings – Your organization has an ongoing process and a team focused on continually review ing AWS usage across your organization and identify ing cost reduction opportunities • Optimized operations management process – Your organization has an ongoing process in place to routinely review AWS and third party management tools to identify ways to improve the efficiency and effectiveness of the current operation management process ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 18 • Optimized application development process – Your organization has an ongoing process in place to evaluate AWS and third party management tools to identify ways to improve the efficiency and effectiveness of the application architecture and development process • Optimized enterprise services – Your organization has an ongoing process in place to regularly review AWS and third party management enterprise s ervice offerings to improve the delivery security and management of services offered throughout the organization Conclusion Every customer’s cloud journey is unique However the challenges corresponding actions and outcomes achieved are similar The AWS Cloud Transformation Maturity Model provide s you with a way to identify and anticipate the challenges early become familiar with the mitigation strategies based on AWS best practices and guidance and successfully drive value from cloud transforma tion AWS and its thousands of partners have leveraged this model to accelerate customer adoption of AWS Cloud services by compressing the time through each stage of their cloud transformation Even in situations where customers pursue certain activities in parallel across multiple stages or are at varying levels of maturity in different parts of the organization due to their size and IT organizational structure the guidance provided in th is paper can help you significantly 
reduce the risk and uncertainty in your organization's cloud transformation initiative.

Contributors

The following individuals and organizations contributed to this document:
• Blake Chism, Global Practice Development, AWS Public Sector
• Sanjay Asnani, Partner Strategy Consultant, AWS Public Sector
• Brian Anderson, Practice Manager SLG, AWS Public Sector

Document Revisions
• September 2017 – Updated content
• September 2016 – First publication

Notes
1. https://d0.awsstatic.com/whitepapers/10-considerations-for-a-cloud-procurement.pdf
2. https://aws.amazon.com/contact-us/
3. https://aws.amazon.com/about-aws/events/
4. https://aws.amazon.com/training/course-descriptions/business-essentials/
5. https://aws.amazon.com/training/intro_series/
6. https://qwiklabs.com/
7. https://www.youtube.com/user/AmazonWebServices
8. https://aws.amazon.com/training/course-descriptions/essentials/
9. https://aws.amazon.com/whitepapers/
10. https://aws.amazon.com/training/
11. https://aws.amazon.com/solutions/case-studies/
12. https://aws.amazon.com/how-to-buy/
13. https://d0.awsstatic.com/whitepapers/10-considerations-for-a-cloud-procurement.pdf
14. https://aws.amazon.com/contract-center/
15. https://aws.amazon.com/partners/fundingbenefits/
16. https://aws.amazon.com/contract-center/
General
Automating_Elasticity
ArchivedAutomating Elasticity March 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: awsamazoncom/whitepapersArchived Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2020 Amazon Web Services Inc or its affiliates All right s reserved Archived Contents Introduction 1 Monitoring AWS Service Usage and Costs 1 Tagging Resources 2 Automating Elasticity 2 Automating Time Based Elasticity 3 Automating Volume Based Elasticity 4 Conclusion 6 Archived Abstract This is the sixth in a series of whitepapers designed to support your cloud journey This paper seeks to empower you to maximize value from your investments improve forecasting accuracy and cost predictability create a culture of ownership and cost transparency and continuously me asure your optimization status This paper discusses how you can automate elasticity to get the most value out of your AWS resources and optimize costs ArchivedAmazon Web Services – Automating Elasticity Page 1 Introduction In the traditional data center based model of IT once infrastructure is deployed it typically runs whether it is needed or not and all the capacity is paid for regardless of how much it gets used In the cloud resources are elastic meaning they can instantly grow or shrink to match the requirements of a specific application Elasticity allows you to match the supply of resource s—which cost money —to demand Because cloud resources are paid for based on usage matching needs to utilization is critical for cost optimization Demand includes both external usage such as the number of customers who visit a website over a given period and internal usage such as an application team using dev elopment and test environments There are two basic types of elasticity: time based and volume based Time based elasticity means turning off resources when they are not being used such as a devel opment environment that is needed only during business hours Volume based elasticity means matching scale to the intensity of demand whether that’s compute cores storage sizes or throughput By combining monitoring tagging and automation you can get the most value out of your AWS resources and optimize costs Monitoring AWS Service Usage and Costs There are a couple of tools that you can use to monitor your service usage and costs to identify opportunities to use elasticity The Cost Optimization Monitor can help you generate reports that provide insight into service usage and costs as you deploy and operate cloud architecture They include detailed billing reports which you can access in the AWS Billing and Cost Management console These reports provide estimated costs that you can break down in different ways (by period account resource or custom resource tags) to help monitor and forecast monthly charges You can analyze this information to optimize your infrastructure and maximize your 
return on investment using elasticity.

Cost Explorer is another free tool that you can use to view your costs and find ways to take advantage of elasticity. You can view data up to the last 13 months, forecast how much you are likely to spend for the next 3 months, and get recommendations on what Reserved Instances to purchase. You can also use Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that can help you understand your costs. In addition, you can specify time ranges for the data, as well as view time data by day or by month.

Tagging Resources

Tagging resources gives you visibility and control over cloud IT costs down to seconds and pennies, by team and application. Tagging lets you assign custom metadata to instances, images, and other resources. For example, you can categorize resources by owner, purpose, or environment, which helps you organize them and assign cost accountability. When resources are accurately tagged, automation tools can identify the key characteristics of those resources needed to manage elasticity. For example, many customers run automated start/stop scripts that turn off development environments during non-business hours to reduce costs. In this scenario, Amazon Elastic Compute Cloud (Amazon EC2) instance tags provide a simple way to identify development instances that should keep running.

Automating Elasticity

With AWS, you can automate both volume-based and time-based elasticity, which can provide significant savings. For example, companies that shut down EC2 instances outside of a 10-hour workday can save 70% compared to running those instances 24 hours a day. Automation becomes increasingly important as environments grow larger and more complex, where manually searching for elasticity savings becomes impractical.

Automation is powerful, but you need to use it carefully. It is important to minimize risk by giving people and systems only the minimum level of access required to perform necessary tasks. Additionally, you should anticipate exceptions to automation plans and consider different schedules and usage scenarios. A one-size-fits-all approach is seldom realistic, even within the same department. Choose a flexible and customizable approach to accommodate your needs.

Automating Time-Based Elasticity

Most non-production instances can and should be stopped when they are not being used. Although it is possible to manually shut down unused instances, this is impractical at larger scales. Let's consider a few ways to automate time-based elasticity.

AWS Instance Scheduler

The AWS Instance Scheduler is a simple solution that allows you to create automatic start and stop schedules for your EC2 instances. The solution is deployed using an AWS CloudFormation template, which launches and configures the components necessary to automatically start and stop EC2 instances in all AWS Regions of your account. During initial deployment, you simply define the AWS Instance Scheduler default start and stop parameters and the interval you want it to run. These values are stored in Amazon DynamoDB and can be overridden or modified as necessary. A custom resource tag identifies instances that should receive AWS Instance Scheduler actions. The solution's recurring AWS Lambda function automatically starts and stops appropriately tagged EC2 instances. You can review the solution's custom Amazon CloudWatch metric
to see a history of AWS Instance Scheduler actions.

Amazon EC2 API tools

You can stop or terminate instances programmatically using the Amazon EC2 APIs, specifically the StopInstances and TerminateInstances actions. These APIs let you build your own schedules and automation tools. When you stop an instance, the root device and any other devices attached to the instance persist. When you terminate an instance, the root device and any other devices attached during the instance launch are automatically deleted. For more information about the differences between rebooting, stopping, and terminating instances, see Instance Lifecycle in the Amazon EC2 User Guide.

AWS Lambda

AWS Lambda serverless functions are another tool that you can use to shut down instances when they are not being used. You can configure a Lambda function to start and stop instances when triggered by Amazon CloudWatch Events, such as a specific time or utilization threshold (illustrative sketches of this pattern appear at the end of this paper). For more information, read this Knowledge Center topic.

AWS Data Pipeline

AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. It can be used to stop and start Amazon EC2 instances by running AWS Command Line Interface (CLI) commands on a set schedule. AWS Data Pipeline runs as an AWS Identity and Access Management (IAM) role, which eliminates key management requirements.

Amazon CloudWatch

Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics and log files, set alarms, and automatically react to changes in your AWS resources. You can use Amazon CloudWatch alarms to automatically stop or terminate EC2 instances that have gone unused or underutilized for too long. You can stop your instance if it has an Amazon Elastic Block Store (Amazon EBS) volume as its root device. A stopped instance retains its instance ID and can be restarted; a terminated instance is deleted. For more information on the difference between stopping and terminating instances, see Stop and Start Your Instance in the Amazon EC2 User Guide. For example, you can create a group of alarms that first sends an email notification to developers whose instance has been underutilized for 8 hours, and then terminates that instance if its utilization has not improved after 24 hours. For instructions on using this method, see the Amazon CloudWatch User Guide.

Automating Volume-Based Elasticity

By taking advantage of volume-based elasticity, you can scale resources to match capacity. The best tool for accomplishing this task is Amazon EC2 Auto Scaling, which you can use to optimize performance by automatically increasing the number of EC2 instances during demand spikes and decreasing capacity during lulls to reduce costs. Amazon EC2 Auto Scaling is well suited for applications that have stable demand patterns and for ones that experience hourly, daily, or weekly variability in usage. Beyond Amazon EC2 Auto Scaling, you can use AWS Auto Scaling to automatically scale resources for other AWS services, including:

• Amazon Elastic Container Service (Amazon ECS) – You can configure your Amazon ECS service to use AWS Auto Scaling to adjust its desired count up or down in response to CloudWatch alarms. For more information, read the documentation.

• Amazon EC2 Spot Fleets – A Spot Fleet
can either launch instances (scale out) or terminate instances (scale in) within the range that you choose, in response to one or more scaling policies. For more information, read the documentation.

• Amazon EMR clusters – Auto Scaling in Amazon EMR allows you to programmatically scale out and scale in core and task nodes in a cluster, based on rules that you specify in a scaling policy. For more information, read the documentation.

• Amazon AppStream 2.0 stacks and fleets – You can define scaling policies that adjust the size of your fleet automatically based on a variety of utilization metrics, and optimize the number of running instances to match user demand. You can also choose to turn off automatic scaling and make the fleet run at a fixed size. For more information, read the documentation.

• Amazon DynamoDB – You can dynamically adjust provisioned throughput capacity in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic without throttling. When the workload decreases, AWS Auto Scaling decreases the throughput so that you don't pay for unused provisioned capacity (see the sketch at the end of this paper). For more information, read the documentation. You can also read our blog post Auto Scaling for Amazon DynamoDB.

Conclusion

The elasticity of cloud services is a powerful way to optimize costs. By combining tagging, monitoring, and automation, your organization can match its spending to its needs and put resources where they provide the most value. For more information about elasticity and other cost management topics, see the AWS Billing and Cost Management documentation.

Automation tools can help minimize some of the management and administrative tasks associated with an IT deployment. Similar to the benefits from application services, an automated or DevOps approach to your AWS infrastructure will provide scalability and elasticity with minimal manual intervention. This also provides a level of control over your AWS environment and the associated spending. For example, when engineers or developers are allowed to provision AWS resources only through an established process and use tools that can be managed and audited (for example, a provisioning portal such as AWS Service Catalog), you can avoid the expense and waste that results from simply turning on (and most often leaving on) standalone resources.

Contributors

The following individuals and organizations contributed to this document:
• Amilcar Alfaro, Sr. Product Marketing Manager, AWS
• Erin Carlson, Marketing Manager, AWS
• Keith Jarrett, WW BD Lead – Cost Optimization, AWS Business Development

Document History
• March 2020 – Minor revisions
• March 2018 – First publication
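The tag-driven, time-based pattern described above (stopping non-production instances outside business hours) can be illustrated with a short script. This is a minimal sketch, not the AWS Instance Scheduler solution itself: it assumes boto3 with credentials and a default region configured, and it uses a hypothetical Environment=dev tag and a scheduled CloudWatch Events / EventBridge rule as the trigger.

```python
# Minimal sketch: stop running development instances identified by a tag.
# The tag key/value ("Environment"/"dev") and the schedule that invokes the
# handler are illustrative assumptions, not part of any AWS-provided solution.
import boto3

ec2 = boto3.client("ec2")


def stop_dev_instances():
    """Find running instances tagged Environment=dev and stop them."""
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        inst["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for inst in reservation["Instances"]
    ]
    if instance_ids:
        # StopInstances keeps the EBS root volume, unlike TerminateInstances.
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids


def lambda_handler(event, context):
    # Intended to be triggered by a scheduled CloudWatch Events/EventBridge rule.
    return {"stopped": stop_dev_instances()}
```

A matching start function scheduled for the beginning of the workday would complete the pattern; instances without the tag are left untouched.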
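Similarly, the CloudWatch alarm approach to reclaiming underutilized instances can be expressed as a single put_metric_alarm call. In this hedged sketch, the region, instance ID, SNS topic, threshold, and evaluation window are placeholders chosen for illustration, and the built-in EC2 stop action is used rather than termination.

```python
# Minimal sketch: stop an EC2 instance after sustained low CPU utilization.
# All identifiers below are placeholders; adjust them to your environment.
import boto3

REGION = "us-east-1"                                              # assumed region
INSTANCE_ID = "i-0123456789abcdef0"                               # placeholder
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ops-alerts"   # placeholder

cloudwatch = boto3.client("cloudwatch", region_name=REGION)

cloudwatch.put_metric_alarm(
    AlarmName=f"stop-idle-{INSTANCE_ID}",
    AlarmDescription="Stop the instance after 8 hours of <= 10% average CPU.",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=3600,                 # one-hour periods
    EvaluationPeriods=8,         # eight consecutive periods below threshold
    Threshold=10.0,
    ComparisonOperator="LessThanOrEqualToThreshold",
    # Notify an operations topic and use the built-in EC2 "stop" alarm action.
    AlarmActions=[
        SNS_TOPIC_ARN,
        f"arn:aws:automate:{REGION}:ec2:stop",
    ],
)
```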
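For the volume-based side, the DynamoDB item in the list above maps to the Application Auto Scaling API. The sketch below registers a table's write capacity as a scalable target and attaches a target-tracking policy; the table name, capacity bounds, and 70% target utilization are assumptions made for illustration only.

```python
# Minimal sketch: target-tracking auto scaling for a DynamoDB table's
# write capacity via Application Auto Scaling. "MyTable" is a placeholder.
import boto3

TABLE = "MyTable"  # placeholder table name

autoscaling = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=f"table/{TABLE}",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Track roughly 70% consumed write capacity: scale out on spikes, in during lulls.
autoscaling.put_scaling_policy(
    PolicyName=f"{TABLE}-write-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId=f"table/{TABLE}",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```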
General
An_Introduction_to_High_Performance_Computing_on_AWS
High Performance Computing (HPC) has been key to solving the most complex problems in every industry and changing the way we work and live From weather modeling to genome mapping to the search for extraterrestrial intelligence HPC is helping to push the boundaries of what’s possible with advanced computing technologies Once confined to government labs large enterprises and select academic organizations today it is found across a wide range of industries In this paper we will discuss how cloud services put the world’s most advanced computing capabilities within reach for more organizations helping them to innovate faster and gain a competitive edge We will discuss the advantages of running HPC workloads on Amazon Web Services (AWS) with Intel® Xeon® technology compared to traditional onpremises architectures We will also illustrate these benefits in actual deployments across a variety of industries High Performance Computing on AWS Redefines What is Possible In 2017 the market for cloud HPC solutions grew by 44% compared to 2016i https://awsamazoncom/hpc 2HPC FUNDAMENTALS Although HPC applications share some common building blocks they are not all similar HPC applications are often based on complex algorithms that rely on high performing infrastructure for efficient execution These applications need hardware that includes high performance processors memory and communication subsystems For many applications and workloads the performance of compute elements must be complemented by comparably high performance storage and networking elements Some may demand high levels of parallel processing but not necessarily fast storage or high performance interconnect Other applications are interconnectsensitive requiring low latency and high throughput networking Similarly there are many I/Osensitive applications that without a very fast I/O subsystem will run slowly because of storage bottlenecks And still other applications such as game streaming video encoding and 3D application streaming need performance acceleration using GPUs Today many large enterprises and research institutions procure and maintain their own HPC infrastructure This HPC infrastructure is shared across many applications and groups within the organization to maximize utilization of this significant capital investment Cloudbased services have opened up a new frontier for HPC Moving HPC workloads to the cloud can provide near instant access to virtually unlimited computing resources for a wider community of users and can support completely new types of applications Today organizations of all sizes are looking to the cloud to support their most advanced computing applications For smaller enterprises cloud is a great starting point enabling fast agile deployment without the need for heavy capital expenditure For large enterprises cloud provides an easier way to tailor HPC infrastructure to changing business needs and to gain access to the latest technologies without having to worry about upfront investments in new infrastructure or ongoing operational expenses When compared to traditional onpremises HPC infrastructures cloud offers significant advantages in terms of scalability flexibility and costONPREMISES HPC HAS ITS LIMITS Today onpremises HPC infrastructure handles most of the HPC workloads that enterprises and research institutions employ Most HPC system administrators maintain and operate this infrastructure at varying levels of utilization However business is always competitive so efficiency needs to be coupled with the 
flexibility and opportunity to innovate continuously Some of the challenges with onpremises HPC are well known These include long procurement cycles high initial capital investment and the need for midcycle technology refreshes For most organizations planning for and procuring an HPC system is a long and arduous process that involves detailed capacity forecasting and system evaluation cycles Often the significant upfront capital investment required is a limiting factor for the amount of capacity that can be procured Maintaining the infrastructure over its lifecycle is an expensive proposition as well Previously technology refreshes every three years was enough to stay current with the compute technology and incremental demands from HPC workloads However to take advantage of the faster pace of innovation HPC customers are needing to refresh their infrastructure more often than before And it is worth the effort IDC reports that for every $1 spent on HPC businesses see $463 in incremental revenues and $44 in incremental profit so delaying incremental investments in HPC – and thus delaying the innovations it brings – has large downstream effects on the businesshttps://awsamazoncom/hpc 3Stifled Innovation: Often the constraints of onpremises infrastructure mean that use cases or applications that did not meet the capabilities of the hardware were not considered When engineers and researchers are forced to limit their imagination to what can be tried out with limited access to infrastructure the opportunity to think outside the box and tinker with new ideas gets lost Reduced Productivity: Onpremises systems often have long queues and wait times that decrease productivity They are managed to maximize utilization – often resulting in very intricate scheduling policies for jobs However even if a job requires only a couple of hours to run it may be stuck in a prioritized queue for weeks or months – decreasing overall productivity and limiting innovation In contrast with virtually unlimited capacity the cloud can free users to get the same job done but much faster without having to stand in line behind others who are just as eager to make progressLimited Scalability and Flexibility: HPC workloads and their demands are constantly changing and legacy HPC architectures cannot always keep pace with evolving requirements For example infrastructure elements like GPUs containers and serverless technologies are not readily available in an onpremises environment Integrating new OS or container capabilities – or even upgrading libraries and applications – is a major systemwide undertaking And when an onpremises HPC system is designed for a specific application or workload it’s difficult and expensive to take on new HPC applications as well as forecast and scale for future (frequently unknown) requirements Lost Opportunities: Onpremises HPC can sometimes limit an organization’s opportunities to take full advantage of the latest technologies For example as organizations adopt leadingedge technologies like artificial intelligence/ machine learning technologies (AI/ML) and visualization the complexity and volume of data is pushing on premises infrastructure to its limits Furthermore most AI/ML algorithms are cloudnative These algorithms will deliver superior performance on large data sets when running in the cloud especially with workloads that involve transient data that does not need to be stored long term There are other limitations of onpremises HPC infrastructure that are less visible and so are often 
overlooked leading to misplaced optimization efforts https://awsamazoncom/hpc 4CLOUD IS A BETTER WAY TO HPC To move beyond the limits of onpremises HPC many organizations are leveraging cloud services to support their most advanced computing applications Flexible and agile the cloud offers strong advantages compared to traditional onpremises HPC approaches HPC on AWS with Intel® Xeon® processors deliver significant leaps in compute performance memory capacity and bandwidth and I/O scalability The highly customizable computing platform and robust partner community enable your staff to imagine new approaches so they can fail forward faster delivering more answers to more questions without the need for costly onpremises upgrades In short AWS frees you to rethink your approach to every HPC and big data analysis initiative and invites your team to ask questions and seek answers as often as possible Innovate Faster with a Highly Scalable Infrastructure Moving HPC workloads to the cloud can bring down barriers to innovation by opening up access to virtually unlimited capacity and scale And one of the best features of working in a cloud environment is that when you solve a problem it stays solved You’re not revisiting it every time you do a major systemwide software upgrade or a biannual hardware refresh Limits on scale and capacity with onpremises infrastructure usually led to organizations being reluctant to consider new use cases or applications that exceeded their capabilities Running HPC in the cloud enables asking the business critical questions they couldn’t address before and that means a fresh look at project ideas that were shelved due to infrastructure constraints Migrating HPC applications to AWS eliminates the need for tradeoffs between experimentation and production AWS and Intel bring the most costeffective scalable solutions to run the most computationallyintensive applications ondemand Now research development and analytics teams can test every theory and process every data set without straining onpremises systems or stalling other critical work streams Flexible configuration and virtually unlimited scalability allow engineers to grow and shrink the infrastructure as workloads dictate not the other way around Additionally with easy access to a broad range of cloudbased services and a trusted partner network researchers and engineers can quickly adopt tested and verified HPC applications so that they can innovate faster without having to reinvent what already exists Increase Collaboration with Secure Access to Clusters Worldwide Running HPC workloads on the cloud enables a new way for globally distributed teams to collaborate securely With globallyaccessible shared data engineers and researchers can work together or in parallel to get results faster For example the use of the cloud for collaboration and visualization allows a remote design team to view and interact with a simulation model in near real time without the need to duplicate and proliferate sensitive design data Using the cloud as a collaboration platform also makes it easier to ensure compliance with everchanging industry regulations The AWS cloud is compliant with the latest revisions of GDPR HIPAA FISMA FedRAMP PCI ISO 27001 SOC 1 and other regulations Encryption and granular permission features guard sensitive data without interfering with the ability to share data across approved users and detailed audit trails for virtually every API call or cloud orchestration action means environments can be designed to address 
specific governance needs and submit to continuous monitoring and surveillance With a broad global presence and the wide availability of Intel® Xeon® technologypowered Amazon EC2 instances HPC on AWS enables engineers and researchers to share and collaborate efficiently with team members across the globe without compromising on securityhttps://awsamazoncom/hpc 5 Optimize Cost with Flexible Resource Selection Running HPC in the cloud enables organizations to select and deploy an optimal set of services for their unique applications and to pay only for what they use Individuals and teams can rapidly scale up or scale down resources as needed commissioning or decommissioning HPC clusters in minutes instead of days or weeks With HPC in the cloud scientists researchers and commercial HPC users can gain rapid access to resources they need without a burdensome procurement process Running HPC in the cloud also minimizes the need for job queues Traditional HPC systems require researchers and analysts to submit their projects to open source or commercial cluster and job management tools which can be time consuming and vulnerable to submission errors Moving HPC workloads to the cloud can help increase productivity by matching the infrastructure configuration to the job With onpremises infrastructure engineers were constrained to running their job on the available configuration With HPC in the cloud every job (or set of related jobs) can run on its own ondemand cluster customized for its specific requirements The result is more efficient HPC spending and fewer wasted resources AWS HPC solutions remove the traditional challenges associated with onpremises clusters: fixed infrastructure capacity technology obsolescence and high capital expenditures AWS gives you access to virtually unlimited HPC capacity built from the latest technologies You can quickly migrate to newer more powerful Intel® Xeon® processorbased EC2 instances as soon as they are made available on AWS This removes the risk of onpremises CPU clusters becoming obsolete or poorly utilized as your needs change over time As a result your teams can trust that their workloads are running optimally at every stage Data Management & Data Transfer Running HPC applications in the cloud starts with moving the required data into the cloud AWS Snowball and AWS Snowmobile are data transport solutions that use devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud Using Snowball addresses common challenges with large scale data transfers including high network costs long transfer times and security concerns AWS DataSync is a data transfer service that makes it easy for you to automate moving data between onpremises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS) DataSync automatically handles many of the tasks related to data transfers that can slow down migrations or burden your IT operations including running your own instances handling encryption managing scripts network optimization and data integrity validation AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS Using AWS Direct Connect you can establish private connectivity between AWS and your datacenter office or colocation environment which in many cases can reduce your network costs increase bandwidth throughput and provide a more consistent network experience than Internetbased connectionsAWS AND INTEL® DELIVER A COMPLETE HPC SOLUTION AWS HPC solutions 
with Intel® Xeon® technologypowered compute instances put the full power of HPC in reach for organizations of every size and industry AWS provides a comprehensive set of components required to power today’s most advanced HPC applications giving you the ability to choose the most appropriate mix of resources for your specific workload Key products and services that make up the HPC on AWS solution include: https://awsamazoncom/hpc 5https://awsamazoncom/hpc 6 https://awsamazoncom/hpc 6Compute The AWS HPC solution lets you choose from a variety of compute instance types that can be configured to suit your needs including the latest Intel® Xeon® processor powered CPU instances GPUbased instances and field programmable gate array (FPGA)powered instances The latest Intelpowered Amazon EC2 instances include the C5n C5d and Z1d instances C5n instances feature the Intel Xeon Platinum 8000 series (SkylakeSP) processor with a sustained all core Turbo CPU clock speed of up to 35 GHz C5n instances provide up to 100 Gbps of network bandwidth and up to 14 Gbps of dedicated bandwidth to Amazon EBS C5n instances also feature 33% higher memory footprint compared to C5 instances For workloads that require access to highspeed ultralow latency local storage AWS offers C5d instances equipped with local NVMebased SSDs Amazon EC2 z1d instances offer both high compute capacity and a high memory footprint High frequency z1d instances deliver a sustained all core frequency of up to 40 GHz the fastest of any cloud instance For HPC codes that can benefit from GPU acceleration the Amazon EC2 P3dn instances feature 100 Gbps network bandwidth (up to 4x the bandwidth of previous P3 instances) local NVMe storage the latest NVIDIA V100 Tensor Core GPUs with 32 GB of GPU memory NVIDIA NVLink for faster GPUtoGPU communication AWScustom Intel® Xeon® Scalable (Skylake) processors running at 31 GHz sustained allcore Turbo AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady predictable performance at the lowest possible cost Using AWS Auto Scaling it’s easy to setup application scaling for multiple resources across multiple services in minutes Networking Amazon EC2 instances support enhanced networking that allow EC2 instances to achieve higher bandwidth and lower interinstance latency compared to traditional virtualization methods Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables you to run HPC applications requiring high levels of internode communications at scale on AWS Its custombuilt operating system (OS) bypass hardware interface enhances the performance of interinstance communications which is critical to scaling HPC applications AWS also offers placement groups for tightlycoupled HPC applications that require low latency networking Amazon Virtual Private Cloud (VPC) provides IP connectivity between compute instances and storage components Storage Storage options and storage costs are critical factors when considering an HPC solution AWS offers flexible object block or file storage for your transient and permanent storage requirements Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 Provisioned IOPS allows you to allocate storage volumes of the size you need and to attach these virtual volumes to your EC2 instances Amazon Simple Storage Service (S3) is designed to store and access any type of data over the Internet and can be used to store the HPC input and output data long term and 
without ever having to do a data migration project again Amazon FSx for Lustre is a high performance file storage service designed for demanding HPC workloads and can be used on Amazon EC2 in the AWS cloud Amazon FSx for Lustre works natively with Amazon S3 making it easy for you to process cloud data sets with high performance file systems When linked to an S3 bucket an FSx for Lustre file system transparently presents S3 objects as files and allows you to write results back to S3 You can also use FSx for Lustre as a standalone highperformance file system to burst your workloads from onpremises to the cloud By copying onpremises data to an FSx for Lustre file system you can make that data available for fast processing by compute instances running on AWS Amazon Elastic File System (Amazon EFS) provides simple scalable file storage for use with Amazon EC2 instances in the AWS Cloudhttps://awsamazoncom/hpc 7Automation and Orchestration Automating the job submission process and scheduling submitted jobs according to predetermined policies and priorities are essential for efficient use of the underlying HPC infrastructure AWS Batch lets you run hundreds to thousands of batch computing jobs by dynamically provisioning the right type and quantity of compute resources based on the job requirements AWS ParallelCluster is a fully supported and maintained open source cluster management tool that makes it easy for scientists researchers and IT administrators to deploy and manage High Performance Computing (HPC) clusters in the AWS Cloud NICE EnginFrame is a web portal designed to provide efficient access to HPCenabled infrastructure using a standard browser EnginFrame provides you a userfriendly HPC job submission job control and job monitoring environment Operations & Management Monitoring the infrastructure and avoiding cost overruns are two of the most important capabilities that can help an HPC system administrators efficiently manage your organization’s HPC needs Amazon CloudWatch is a monitoring and management service built for developers system operators site reliability engineers (SRE) and IT managers CloudWatch provides you with data and actionable insights to monitor your applications understand and respond to systemwide performance changes optimize resource utilization and get a unified view of operational health AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amountVisualization Tools The ability to visualize results of engineering simulations without having to move massive amounts of data to/from the cloud is an important aspect of the HPC stack Remote visualization helps accelerate the turnaround times for engineering design significantly NICE Desktop Cloud Visualization enables you to remotely access 2D/3D interactive applications over a standard network In addition Amazon AppStream 20 is another fully managed application streaming service that can securely deliver application sessions to a browser on any computer or workstation Security and Compliance Security management and regulatory compliance are other important aspects of running HPC in the cloud AWS offers multiple security related services and quicklaunch templates to simplify the process of creating a HPC cluster and implementing best practices in data security and regulatory compliance The AWS infrastructure puts strong safeguards in place to help protect customer privacy All data is stored in highly secure AWS data centers AWS 
Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud.

Automation and Orchestration
Automating the job submission process and scheduling submitted jobs according to predetermined policies and priorities are essential for efficient use of the underlying HPC infrastructure. AWS Batch lets you run hundreds to thousands of batch computing jobs by dynamically provisioning the right type and quantity of compute resources based on the job requirements.
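As a sense of the workflow, here is a minimal boto3 sketch that submits an array of independent jobs to AWS Batch. The job queue and job definition names are assumptions for illustration; both must already exist in your account (created through the console, the register_job_definition API, or infrastructure as code) before this call succeeds.

import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Submit 500 independent child jobs as a single array job.
# Queue and job definition names below are placeholders.
response = batch.submit_job(
    jobName="cfd-parameter-sweep",
    jobQueue="hpc-job-queue",
    jobDefinition="cfd-solver:3",
    arrayProperties={"size": 500},
    containerOverrides={
        "environment": [{"name": "CASE_SET", "value": "wing-v2"}],
    },
)
print("Submitted job:", response["jobId"])

Each child job receives its own index through the environment, so the same container image can process a different slice of the parameter sweep.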
AWS ParallelCluster is a fully supported and maintained open-source cluster management tool that makes it easy for scientists, researchers, and IT administrators to deploy and manage High Performance Computing (HPC) clusters in the AWS Cloud. NICE EnginFrame is a web portal designed to provide efficient access to HPC-enabled infrastructure using a standard browser. EnginFrame provides a user-friendly environment for HPC job submission, job control, and job monitoring.

Operations & Management
Monitoring the infrastructure and avoiding cost overruns are two of the most important capabilities that help HPC system administrators efficiently manage their organization's HPC needs. Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.
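A short boto3 sketch of both ideas follows: a CloudWatch alarm on a cluster-level metric and a monthly AWS Budgets cost budget with a forecast-based alert. The account ID, Auto Scaling group name, email address, and thresholds are placeholder assumptions chosen only to show the shape of the calls.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
budgets = boto3.client("budgets", region_name="us-east-1")

# Alarm when average cluster CPU utilization stays low, a hint that
# billable compute nodes are sitting idle.
cloudwatch.put_metric_alarm(
    AlarmName="hpc-cluster-idle",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "hpc-compute-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=6,
    Threshold=5.0,
    ComparisonOperator="LessThanThreshold",
)

# Monthly cost budget that alerts when forecasted spend passes 80% of the limit.
budgets.create_budget(
    AccountId="111122223333",  # placeholder account ID
    Budget={
        "BudgetName": "hpc-monthly-spend",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "FORECASTED",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "hpc-admin@example.com"}
            ],
        }
    ],
)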
Visualization Tools
The ability to visualize the results of engineering simulations without having to move massive amounts of data to and from the cloud is an important aspect of the HPC stack. Remote visualization helps significantly accelerate turnaround times for engineering design. NICE Desktop Cloud Visualization enables you to remotely access 2D/3D interactive applications over a standard network. In addition, Amazon AppStream 2.0 is a fully managed application streaming service that can securely deliver application sessions to a browser on any computer or workstation.

Security and Compliance
Security management and regulatory compliance are other important aspects of running HPC in the cloud. AWS offers multiple security-related services and quick-launch templates to simplify the process of creating an HPC cluster and implementing best practices in data security and regulatory compliance. The AWS infrastructure puts strong safeguards in place to help protect customer privacy, and all data is stored in highly secure AWS data centers. AWS Identity and Access Management (IAM) provides a robust solution for managing users, roles, and groups that have rights to access specific data sources. Organizations can issue users and systems individual identities and credentials, or provision them with temporary access credentials using the AWS Security Token Service (AWS STS).
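For example, a job submission portal or scheduler node can trade its long-term identity for short-lived, role-scoped credentials rather than holding permanent keys. The role ARN, session name, and duration in this boto3 sketch are placeholder assumptions.

import boto3

sts = boto3.client("sts")

# Exchange the caller's identity for temporary, role-scoped credentials.
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/HpcJobSubmitter",  # placeholder role
    RoleSessionName="nightly-simulation-run",
    DurationSeconds=3600,
)
credentials = response["Credentials"]

# Use the temporary credentials for subsequent service calls.
s3 = boto3.client(
    "s3",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)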
AWS manages dozens of compliance programs in its infrastructure, which means that segments of your compliance have already been completed. AWS infrastructure is compliant with many relevant industry regulations and standards, such as HIPAA, FISMA, FedRAMP, PCI, ISO 27001, SOC 1, and others.

Flexible Pricing and Business Models
With AWS, capacity planning worries become a thing of the past. AWS offers On-Demand pricing for short-term projects, contract pricing for long-term predictable needs, and Spot pricing for experimental work or research groups with tight budgets. AWS customers enjoy the flexibility to choose from any combination of pay-as-you-go options, procuring only the capacity they need for the duration that it is needed, and AWS Trusted Advisor will alert you to any cost-saving actions you can take to minimize your bill. This simplified, flexible pricing structure allows research institutions to break free from the time- and budget-constraining, CapEx-intensive data center model. With HPC on AWS, organizations can flexibly tune and scale their infrastructure as workloads dictate, instead of the other way around.

AWS Partners and Marketplace
For organizations looking to build highly specific solutions, AWS Marketplace is an online store for applications and services that build on top of AWS. AWS partner solutions and AWS Marketplace let organizations immediately take advantage of partners' built-in optimizations and best practices, leveraging what the partners have learned from building complex services on AWS. A variety of open-source HPC applications are also available on the AWS Marketplace.

HPC ON AWS DELIVERS ADVANTAGES FOR A RANGE OF HPC WORKLOADS
The AWS Cloud provides a broad range of scalable, flexible infrastructure solutions that organizations can select to match their workloads and tasks. This gives HPC users the ability to choose the most appropriate mix of resources for their specific applications. Let us take a brief look at the advantages that HPC on AWS delivers for these workload types.

Tightly Coupled HPC: A typical tightly coupled HPC application often spans large numbers of CPU cores in order to accomplish demanding computational workloads. To study the aerodynamics of a new commercial jetliner, design engineers often run computational fluid dynamics simulations using thousands of CPU cores. Global climate modeling applications are executed at a similar scale. The AWS Cloud provides scalable computing resources to execute such applications, and they can be deployed on the cloud at any scale. Organizations can set a maximum number of cores per job depending on the application requirements, aligning it to criteria like model size, frequency of jobs, cost per computation, and urgency of job completion. A significant benefit of running such workloads on AWS is the ability to scale out and experiment with more tunable parameters. For example, an engineer performing electromagnetic simulations can run larger numbers of parametric sweeps in a Design of Experiments (DoE) study using very large numbers of Amazon EC2 On-Demand Instances and using AWS Auto Scaling to launch independent, parallel simulation jobs. Such DoE jobs would often not be possible within the hardware limits of on-premises infrastructure. A further benefit for such an engineer is the ability to use Amazon Simple Storage Service (Amazon S3), NICE DCV, and other AWS solutions such as AI/ML services to aggregate, analyze, and visualize the results as part of a workflow pipeline, any element of which can be spun up (or down) independently to meet needs. Amazon EC2 features that help with applications in this category also include EC2 placement groups and enhanced networking for reduced node-to-node latencies and consistent network performance.
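The following boto3 sketch shows that pattern: create a cluster placement group and launch tightly coupled solver nodes into it. The AMI ID, instance type, and node count are placeholder assumptions; network-optimized instance families are typically chosen here because they support enhanced networking.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group packs instances onto closely located hardware
# for low, consistent node-to-node latency.
ec2.create_placement_group(GroupName="cfd-cluster", Strategy="cluster")

# Launch the solver nodes into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.18xlarge",       # placeholder network-optimized type
    MinCount=16,
    MaxCount=16,
    Placement={"GroupName": "cfd-cluster"},
)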
Loosely Coupled Grid Computing: The cloud provides support for a variety of loosely coupled grid computing applications that are designed for fault tolerance, enabling individual nodes to be added or removed during the course of job execution. This category of applications includes Monte Carlo simulations for financial risk analysis, material science studies for proteomics, and more. A typical job distributes independent computational workloads across large numbers of CPU cores or nodes in a grid, without a high demand for high-performance node-to-node interconnect or high-performance storage. The cloud lets organizations deliver the fault tolerance these applications require and choose the instance types needed for the specific compute tasks they plan to execute. Such applications are ideally suited to Amazon EC2 Spot Instances, which are EC2 instances that opportunistically take advantage of Amazon EC2's spare computing capacity. Coupled with Amazon EC2 Auto Scaling, jobs can be scaled up when excess spare capacity makes Spot Instances cheaper than normal. AWS Batch brings all these capabilities together in a single batch-oriented service that is easy to use, container-focused for maximum portability, and integrated with a range of commercial and open-source workflow engines to make job orchestration easy.
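As a minimal illustration of this idea, the boto3 call below launches fault-tolerant grid workers as Spot Instances; because each work item is independent, interrupted nodes can simply be replaced. The AMI ID, instance type, and instance counts are placeholders, not recommendations.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request grid worker nodes from spare EC2 capacity at Spot pricing.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.4xlarge",         # placeholder instance type
    MinCount=1,
    MaxCount=100,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)

In practice a managed layer such as AWS Batch or EC2 Auto Scaling usually issues these requests on your behalf, replacing interrupted capacity automatically.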
High Volume Data Analytics and Interpretation: When grid and cluster HPC workloads handle large amounts of data, their applications require fast, reliable access to many types of data storage. AWS services and features that help HPC users optimize for data-intensive computing include Amazon S3, Amazon Elastic Block Store (Amazon EBS), and Amazon EC2 instance types that are optimized for high I/O performance (including those configured with solid-state drive (SSD) storage). Solutions also exist for creating high-performance virtual network-attached storage (NAS) and network file systems (NFS) in the cloud, allowing applications running in Amazon EC2 to access high-performance, scalable, cloud-based shared storage resources. Example applications in this category include genomics, high-resolution image processing, and seismic data processing.

Visualization: Using the cloud for collaboration and visualization makes it much easier for members of global organizations to share their digital data instantly from any part of the world. For example, it lets subcontractors or remote design teams view and interact with a simulation model in near real time from any location. They can securely collaborate on data from anywhere without the need to duplicate and share it. AWS services that enable these types of workloads include graphics-optimized instances, remote visualization services like NICE DCV, and managed services like Amazon WorkSpaces and Amazon AppStream 2.0.

Accelerated Computing: Many HPC workloads can benefit from offloading computation-intensive tasks to specialized hardware coprocessors such as GPUs or FPGAs. Many tightly coupled and visualization workloads are apt for accelerated computing. AWS HPC solutions offer the flexibility to choose from many available CPU-, GPU-, or FPGA-based instances to deploy optimized infrastructure that meets the needs of specific applications.

Machine Learning and Artificial Intelligence: Machine learning requires a broad set of computing resource options, ranging from GPUs for compute-intensive deep learning and FPGAs for specialized hardware acceleration to high-memory instances for inference. With HPC on AWS, organizations can select instance types and services to fit their machine learning needs. They can choose from a variety of CPU, GPU, FPGA, memory, storage, and networking options and tailor instances to their specific requirements, whether they are training models or running inference on trained models. AWS uses the latest Intel® Xeon® Scalable CPUs, which are optimized for machine learning and AI workloads at scale. The Intel® Xeon® Scalable processors incorporated in Amazon EC2 C5 instances, along with optimized deep learning functions in the Intel MKL-DNN library, provide sufficient compute for deep learning training workloads (in addition to inference, classical machine learning, and other AI algorithms). In addition, CPU- and GPU-optimized frameworks such as TensorFlow, MXNet, and PyTorch are available in Amazon Machine Image (AMI) format for customers to deploy their AI workloads on optimized software and hardware stacks. Recent advances in distributed algorithms have also enabled the use of hundreds of servers to reduce the time to train from weeks to minutes. Data scientists can get excellent deep learning training performance using Amazon EC2 and further reduce the time to train by using multiple CPU nodes, scaling near-linearly to hundreds of nodes.

Life Sciences and Healthcare: Running HPC workloads on AWS lets healthcare and life sciences professionals easily and securely scale genomic analysis and precision medicine applications. For AWS users the scalability is built in, bolstered by an ecosystem of partners offering tools and datasets designed for sensitive data and workloads. They can efficiently and dynamically store and compute their data, collaborate with peers, and integrate findings into clinical practice, while conforming with security and compliance requirements. For example, Bristol-Myers Squibb (BMS), a global biopharmaceutical company, used AWS to build a secure self-provisioning portal for hosting research. The solution lets scientists run clinical trial simulations on demand and enables BMS to set up rules that keep compute costs low. Compute-intensive clinical trial simulations that previously took 60 hours finish in only 1.2 hours on the AWS Cloud. Running simulations 98% faster has led to more efficient, less costly clinical trials, and better conditions for patients.

"The time and money savings are obvious, but probably what is the most important factor is we are using fewer subjects in these trials, we are optimizing dosage levels, we have higher drug tolerance and safety, and at the end of the day, for these kids, it's fewer blood samples."
Sr. Solutions Specialist, Bristol-Myers Squibb

DRIVING INNOVATION ACROSS INDUSTRIES
Every industry tackles a different set of challenges. AWS HPC solutions, available with the power of the latest Intel technologies, help companies of all sizes in nearly every industry achieve their HPC results with flexible configuration options that simplify operations, save money, and get results to market faster. These workloads span traditional HPC applications such as genomics, life sciences research, financial risk analysis, computer-aided design, and seismic imaging, as well as emerging applications such as machine learning, deep learning, and autonomous vehicles.

Financial Services: Insurers and capital markets have long utilized grid computing to power actuarial calculations, determine capital requirements, model risk scenarios, price products, and handle other key tasks. Taking these compute-intensive workloads out of the data center and moving them to AWS helps firms boost speed, scale better, and save money. For example, MAPFRE, the largest insurance company in Spain, needed fast, flexible environments in which to develop sales management and insurance policy applications. The firm was looking for a cost-effective technology platform that could deliver rapid analysis and enable quick deployment of development environments at remote installation sites. Its on-premises infrastructure simply could not support these needs. The company turned to AWS for high performance computing, risk analysis of customer data, and to create test and development environments for its commercial application.

"The on-premises hardware investment for three years cost approximately €1.5 million, whereas the AWS infrastructure cost the company €180,000 for the same period, a savings of 88 percent."
MAPFRE

KEEPING PACE WITH CHANGING FINANCIAL REGULATIONS
AWS customers in financial services are preparing for new Fundamental Review of the Trading Book (FRTB) regulations that will come into effect between 2019 and 2021. As part of the proposed regulations, these financial services institutions will need to perform compute-intensive "value at risk" calculations in the four hours after trading ends in New York and begins in Tokyo. The periodic nature of the calculation, along with the amount of processing power and storage needed to run it within four hours, makes it a great fit for an environment where a vast amount of cost-effective compute power is available on demand. To help its financial services customers meet these new regulations, AWS worked with TIBCO (a market-leading on-premises infrastructure platform for grid and elastic computing) to run a proof-of-concept grid in the AWS Cloud. The grid grew to 61,299 Spot Instances with 1.3 million vCPUs and cost approximately $30,000 an hour to run. This proof of concept is a strong example of the potential for AWS to deliver a vast amount of cost-effective compute power on an on-demand basis.

Design and Engineering: Using simulations on AWS HPC infrastructure lets manufacturers and designers reduce costs by replacing expensive development of physical models with virtual ones during product development. The result? Improved product quality, shorter time to market, and reduced product development costs. TLG Aerospace of Seattle, Washington, put these capabilities to work to perform aerodynamic simulations on aircraft and predict the pressure and temperature surrounding airframes. Its existing cloud provider was expensive and could not scale to handle more performance-intensive applications. TLG turned to Amazon EC2 Spot Instances, which provide a way to use unused EC2 computing capacity at a discounted price. The solution dramatically decreased simulation costs and can scale easily to take on new jobs as needed.

"We saw a 75% reduction in the cost per CFD simulation as soon as we started using Amazon EC2 Spot Instances. We are able to pass those savings along to our customers, and be more competitive."
TLG Aerospace

Energy and Geosciences: Reducing runtimes for compute-intensive applications like seismic analysis and reservoir simulation is just one of the many ways the energy and geosciences industry has been utilizing HPC applications in the cloud. By moving HPC applications to the cloud, organizations reduce job submission time, track runtime, and efficiently manage the large datasets associated with daily workloads. For example, using AWS on-demand computing resources, Zenotech, a simulation service provider, can power simulations that help energy companies support advanced reservoir models. Using the resources available within a typical small company, it would take several years to complete a sophisticated reservoir simulation; Zenotech completed one at a computing cost for AWS resources of only $750 over a 12-day period.

Media and Entertainment: The movie and entertainment industries are shifting content production and post-production to cloud-based HPC to take advantage of highly scalable, elastic, and secure cloud services that accelerate content production and reduce capital infrastructure investment. Content production and post-production companies are leveraging the cloud to accelerate and streamline production, editing, and rendering workloads with highly scalable cloud computing and storage. One design and visual effects (VFX) company, Fin Design + Effects, needed the ability to access vast amounts of compute capacity when big deadlines came around. Its on-premises render servers had a finite capacity and were difficult and expensive to scale. Fin started by using AWS Direct Connect to scale its rendering capabilities, establishing a dedicated gigabit network connection from the Fin data center to AWS. Fin is also taking advantage of Amazon EC2 Spot Instances, and now has the agility to add compute resources on the fly to meet last-minute project demands.

"We are reducing our operational costs by 50 percent by using Amazon EC2 Spot Instances."
Fin Design

AI/ML and Autonomous Vehicles: The AI revolution, which started with the rapid increase in accuracy brought by deep learning methods, has the potential to transform a variety of industries. Autonomous driving is a particularly popular use case for AI/ML. Developing and deploying autonomous vehicles requires the ability to collect, store, and manage massive amounts of data; high performance computing capacity; and advanced deep learning frameworks, along with the capability to do real-time processing of local rules and events in the vehicle. AWS's virtually unlimited storage and compute capacity and support for popular deep learning frameworks help accelerate algorithm training and testing and drive faster time to market.

SUMMARY AND RECOMMENDATION
Technology continues to change rapidly, and it is clear that HPC has a critical role to play in enabling organizations to innovate faster and adopt other leading-edge technologies such as AI/ML and IoT. AWS puts the advanced capabilities of High Performance Computing within reach of more people and organizations, while simplifying processes like management, deployment, and scaling. Accessible, flexible, and cost-effective, it frees the creativity of engineers, analysts, and researchers from the limitations of on-premises infrastructure. Unlike traditional on-premises HPC systems, AWS offers virtually unlimited capacity to scale out HPC infrastructure. It also provides the flexibility for organizations to adapt their HPC infrastructure to changing business priorities. With flexible deployment and pricing models, it lets organizations of all sizes and industries take advantage of the most advanced computing capabilities available. HPC on AWS lets you take a fresh approach to innovation to solve the world's most complex problems. Learn more about running your HPC workloads on AWS at http://aws.amazon.com/hpc

i "HPC Market Update, ISC18," Intersect360 Research, 2018.
General
Using_AWS_in_the_Context_of_New_Zealand_Privacy_Considerations
Using AWS in the Context of New Zealand Privacy Considerations First published Septembe r 2014 Updated August 17 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppl iers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Considerations relevant to privacy and data protection 2 AWS shared responsibility approach to managing cloud security 3 How is customer content secured? 3 What does the shared responsibility model mean for the security of c ustomer content? 4 Understanding security OF the cloud 4 Understanding security IN the cloud 5 AWS Regions: Where will content be stored? 7 How can customers select their Region(s)? 8 Transfer of personal information cross border 9 Who can access customer content? 10 Customer control over content 10 AWS access to customer content 10 Government rights of access 10 Privacy and data protection in New Zealand: The Privacy Act 11 Privacy breaches 19 Considerations 20 Further reading 21 AWS Artifact 22 Document revisions 22 Abstract This document provides information to assist customers who want to use Amazon Web Services (AWS) to store or process content containing personal information in the context of key privacy considerations and the New Zealand Privacy Act 2020 (NZ) I t helps customers understand: • The way AWS services operate including how customers can address security and encrypt their content • The geographic locations where customers can choose to store content and other relevant considerations • The respective roles the customer and AWS each play in managing and securing content stored on AWS servicesAmazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 1 Introduction This whitepaper focuses on typical questions asked by AWS customers when they are considering the implications o f the New Zealand Privacy Act on their use of AWS services to store or process content containing personal information There will also be other relevant considerations for each customer to address For example a customer may need to comply with industry specific requirements and the laws of other jurisdictions where that customer conducts business or contractual commitments a customer makes to a third party This paper is provided solely for informational purposes It is not legal advice and should not be relied on as legal advice As each customer’s requirements will differ AWS strongly encourages its customers to obtain appropriate advice on their implementation of privacy and data protection requirements and on applicable laws and other requirement s relevant to their business When we refer to content in this paper we mean software (including virtual machine images) data text audio video images and other content that a customer or any end user stores or processes using AWS services For exam ple a customer’s content includes objects that the customer stores using 
Amazon Simple Storage Service (Amazon S3) files stored on an Amazon Elastic Block Store (Amazon EBS) volume or the contents of an Amazon DynamoDB database table Such content may but will not necessarily include personal information relating to that customer its end users or third parties The terms of the AWS Customer Agreement or any other relevant agreement with us governing the use of AWS services apply to customer content Customer content does not include information that a customer provides to us in connection with the creation or administration of its AWS accounts such as a customer’s names phone numbers email addre sses and billing information —we refer to this as account information and it is governed by the AWS Privacy Notice Our business changes constantly and our Privacy Notice may also change We recommend checkin g our website frequently to see recent changes Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 2 Considerations relevant to privacy and data protection Storage of content presents all organizations with a number of common practical matters to consider including: • Will the content be secure? • Where will c ontent be stored? • Who will have access to content? • What laws and regulations apply to the content and what is needed to comply with these? These considerations are not new and are not cloud specific They are relevant to internally hosted and operated syst ems as well as traditional third party hosted services Each may involve storage of content on third party equipment or on third party premises with that content managed accessed or used by third party personnel When using AWS services each AWS custome r maintains ownership and control of their content including control over: • What content they choose to store or process using AWS services • Which AWS services they use with their content • The AWS Region or Regions where their content is stored • The format structure and security of their content including whether it is masked anonymized or encrypted • Who has access to their AWS accounts and content and how those access rights are granted managed and revoked Because AWS customers retain ownership and control over their content within the AWS environment they also retain responsibilities relating to the security of that content as part of the AWS Shared Responsibility Model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of privacy and data protection requirements that may apply to content that customers choose to store or process using AWS services Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 3 AWS shared responsibility approach to managing cloud security How is customer content secured ? 
Moving IT infrastructure to AWS creates a shared responsibility model between the customer and AWS as both the customer and AWS have important roles in the operation and management of security AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the AWS services operate The customer is responsible for management of the guest operating system (including updates and security patches to the guest operating system) and associated application software as well as the configuration of the AWS provided security group firewall and other security related features The customer will g enerally connect to the AWS environment through services the customer acquires from third parties (for example internet service providers) AWS does not provide these connections and they are therefore part of the customer’s area of responsibility Custo mers should consider the security of these connections and the security responsibilities of such third parties in relation to their systems The respective roles of the customer and AWS in the shared responsibility model are shown in Figure 1 Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 4 Figure 1 – AWS Shared Responsibility Model What does the shared responsibility model mean for the security of customer content? When evaluating the security of a cloud solution it is important for customers to understand and distinguish between: • Security measures that the cloud service provider (AWS) implements and operates – security of the cloud • Security measures that the customer implements and operates related to the security of customer content and applications that make use of AWS services – securi ty in the cloud While AWS manages security of the cloud security in the cloud is the responsibility of the customer as customers retain control of what security they choose to implement to protect their own content applications systems and networks – no differently than they would for applications in an on site data center Understanding security OF the cloud AWS is responsible for managing the security of the underlying cloud environment The AWS Cloud infrastructure has been architected to be one of the most flexible and Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 5 secure cloud computing environments available designed to provide optimum availability w hile providing complete customer segregation It provides extremely scalable highly reliable services that enable customers to deploy applications and content quickly and securely at massive global scale if necessary AWS services are content agnostic i n that they offer the same high level of security to all customers regardless of the type of content being stored or the geographical Region in which they store their content AWS’ world class highly secure data centers utilize state oftheart electron ic surveillance and multi factor access control systems Data centers are staffed 24x7 by trained security guards and access is authorized strictly on a least privileged basis For a complete list of all the security measures built into the core AWS Cloud infrastructure and services see Best Processes for Security Identity & Compliance We are vigilant about our customers' security and have implemented sophisticated techn ical and physical measures against unauthorized access Customers can validate the security controls in place within the AWS environment through 
AWS certifications and reports including the AWS System & Organization Control (SOC) 1 21 and 32 reports I SO 270013 270174 270185 and 900 16 certifications and PCI DSS7 Attestation of Compliance Our ISO 27018 certification demonstrates that AWS has a system of controls in place that specifically address the privacy protection of customer content Thes e reports and certifications are produced by independent third party auditors and attest to the design and operating effectiveness of AWS security controls AWS compliance certifications and reports can be requested on the AWS Compliance Contact Us page For m ore information on AWS compliance certifications reports and alignment with best practices and standards see AWS Compliance Understanding security IN the cloud Customers retain ownership and control of their content when using AWS services Customers rather than AWS determine what content they store or process u sing AWS services Because it is the customer who decides what content to store or process using AWS services only the customer can determine what level of security is appropriate for the content they store and process using AWS Customers also have compl ete control over which services they use and whom they empower to access their content and services including what credentials will be required Customers control how they configure their environments and secure their content including whether they encry pt their content (at rest and in transit) and what other security features and tools they use and how they use them AWS does not change Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 6 customer configuration settings as these settings are determined and controlled by the customer AWS customers have t he complete freedom to design their security architecture to meet their compliance needs This is a key difference from traditional hosting solutions where the provider decides on the architecture AWS enables and empowers the customer to decide when and h ow security measures will be implemented in the cloud in accordance with each customer's business needs For example if a higher availability architecture is required to protect customer content the customer may add redundant systems backups locations network uplinks etc to create a more resilient high availability architecture If restricted access to customer content is required AWS enables the customer to implement access rights management controls both on a systems level and through e ncryption on a data level To assist customers in designing implementing and operating their own secure AWS environment AWS provides a wide selection of security tools and features customers can use Customers can also use their own security tools and c ontrols including a wide variety of thirdparty security solutions Customers can configure their AWS services to leverage a range of such security features tools and controls to protect their content including sophisticated identity and access management tools security capabilities encryption and network security Examples of steps customers can take to help secure their content include implementing: • Strong password policies assigning appropriate permissions to users and taking robust steps to protect their access keys • Appropriate firewalls and network segmentation encrypting content and properly architecting systems to decrease the risk of data loss and unauthorized access Because customers rather than AWS control these important fact ors customers retain responsibility 
for their choices and for security of the content they store or process using AWS services or that they connect to their AWS infrastructure such as the guest operating system applications on their compute instances and content stored and processed in AWS storage databases or other services AWS provides an advanced set of access encryption and logging features to help customers manage their content effectively including AWS Key Management Service (AWS KMS) and AWS CloudTrail To assist customers in integrating AWS security controls into their existing control frameworks and help customers design and run security assessments of their organization’s use of AWS services AWS publishes a number of whitepapers relating to security governance risk and compliance; and a number of checklists and best practices Customers are also free to design and conduct Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 7 security assessments according to their own pre ferences and can request permission to conduct scans of their cloud infrastructure as long as those scans are limited to the customer’s compute instances and do not violate the AWS Acceptable Use Policy AWS Regi ons: Where will content be stored? AWS data centers are built in clusters in various global Regions We refer to each of our data center clusters in a given country as an AWS Region Customers have access to a number of AWS Regions around the world8 including an Asia Pacific (Sydney) Region Customers can choose to use one Region all Regions or any combination of AWS Regions Figure 2 shows AWS Region locations as of April 20219 Figure 2 – AWS global Regions AWS cu stomers choose the AWS Region or Regions in which their content and servers will be located This allows customers with geographic specific requirements to establish environments in a location or locatio ns of their choice For example AWS customers in New Zealand can choose to deploy their AWS services exclusively in one AWS Region such as the Asia Pacific (Sydney) Region and store their content onshore in Australia if this is their preferred location If the customer makes this choice AWS will not move their content from Australia without the customer’s consent except as legally required Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 8 Customers always retain control of which AWS Regions are used to store and process content AWS only stores and pr ocesses each customer ’s content in the AWS Region(s) and using the services chosen by the customer and otherwise will not move customer content without the customer’s consent except as legally required How can customers select their Region(s)? 
When us ing the AWS Management Console or in placing a request through an AWS Application Programming Interface (API) the customer identifies the particular AWS Region(s) where they want to use AWS services Figure 3 provides an example of the AWS Region select ion menu presented to customers when uploading content to an AWS storage service or provisioning compute resources using the AWS Management Console Figure 3 – Selecting AWS Global Regions in the AWS Management Console Customers can prescribe the AWS Region to be used for their AWS resources Amazon Virtual Private Cloud ( VPC) lets the customer provision a private isolated section of the AWS Cloud where the customer can launch AWS resources in a virtual network that the customer defines With Amazon V PC customers can define a virtual network topology that closely resembles a traditional network that might operate in their own data center Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 9 Any resources launched by the customer into the VPC will be located in the AWS Region designated by the customer For example by creating a VPC in the Asia Pacific (Sydney) Region all resources launched into that VPC would only reside in the Asia Pacific (Sydney) Region This option can also be leveraged for other AWS Regions Transfer of personal information cross border In 2016 the European Commission approved and adopted the new General Data Protection Regulation (GDPR) The GDPR replaced the EU Data Protection Directive as well as all local laws relating to it All AWS services comply with the GDPR AWS provide s customers with services and resources to help them comply with GDPR requirements that may apply to their operations These include adherence to the CISPE code of conduct granular data access controls monitoring and l ogging tools encryption key management audit capability adherence to IT security standards and Cloud Computing Compliance Controls Catalogue ( C5) attestations For additional information visit the AWS General Data Protection Regulation (GDPR) Center and see the Navigating GDPR Compliance on AWS whitepaper When using AW S services customer s may choose to transfer content containing personal information cross border and they will need to consider the legal requirements that apply to such transfers AWS provides a Data Processing Addendum that includes the Standard Contractual Clauses 2010/8 7/EU (often referred to as Model Clauses ) to AWS customers transferring content containing personal data (as defined in the GDPR) from the EU to a country outside of the European Economic Area (EEA) With our EU Data Processing Addendum and Model Clauses AWS customers who want to transfer personal data —whether established in Europe or a global company operating in the European Economic Area —can do so with the knowledge that their personal data on AWS will be given the same high level of protection it receives in the EEA The AWS Data Processing Addendum is incorporated in the AWS Service Terms and applies automatically to the extent the GDPR applies to the customer’s processing of personal data on AWS Amazon Web Services Using AWS in the Context of New Z ealand Privacy Considerations 10 Who can access customer content? 
Customer control over content Customers using AWS maintain and do not release effective control over their content within the AWS environment Customers can perform the following: • Determine where their content will be located for example the type of storage they use on AWS and the geographic location (by AWS Region) of that storage • Control the format structure and security of their content including whether it is masked anonymized or encrypted AWS offers customers options to implement strong encryption for their customer content in transit or at rest; and also provides customers with the option to manage their own encryption keys or use third party encryption mechanisms of their choice • Manage other access controls such as identity access management permissions and security credentials This enables AWS customers to control the entire lifecycle of their content on AWS and manage their content in accordance with their own specific needs including content classification access control retention and disposal AWS access to customer content AWS makes available to each customer the compute storage database networking or other services as described on our website Customers have a number of options to encrypt their content when using the services including using AWS encryption features such as AWS KMS managing their own encryption keys or using a third party encryption mechanism of their own choice AWS does not access or use customer content without the customer’s consent except as legally req uired AWS never uses customer content or derives information from it for other purposes such as marketing or advertising Government rights of access Queries are often raised about the rights of domestic and foreign government agencies to access content h eld in cloud services Customers are often confused about issues of data sovereignty including whether and in what circumstances governments may have access to their content The local laws that apply in the jurisdiction where the content is located are a n important consideration for some customers However customers also Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 11 need to consider whether laws in other jurisdictions may apply to them Customers should seek advice to understand the application of relevant laws to their business and operations AWS policy on granting government access AWS is vigilant about customers' security and does not disclose or move data in response to a request from the US or other government unless legally required to do so in order to comply with a legally valid and bindin g order such as a subpoena or a court order or as is otherwise required by applicable law Nongovernmental or regulatory bodies typically must use recognized international processes such as Mutual Legal Assistance Treaties with the US government to obtain valid and binding orders Additionally our practice is to notify customers where practicable before disclosing their content so they can seek protection from disclosure unless we are legally prohibited from doing so or there is clear indication o f illegal conduct in connection with the use of AWS services For additional information see the Law enforcement Information Requests page Privacy and data protection in New Zealand: The Privacy Act This section discusses aspects of the New Zealand Privacy Act 2020 (NZ) (Privacy Act) effective from December 1 2020 The main requirements in the Privacy Act for handling personal information are set out in the Information Privacy 
Principles (IPPs) The IPPs impose requirements for collecting managing using disclosing and otherwise handling personal information collected from individuals in New Zealand The New Zealand Privacy Commissioner may also issue code s of practice which apply prescribe or modify the application of IPPs in relation to an activity industry or profession (or classes of them) The Privacy Act recognizes a distinction between “principals ” and “agents ” Where an entity (the agent ) holds personal information for the sole purpose of storing or processing personal information on behalf of another entity (the principal ) and does not use or disclose the personal information for its own purposes the information is deemed to be held by the principal In those circumstances primary responsibility for compliance with the IPPs will rest with the principal Amazon Web Services Using AWS in the Context of New Zealand P rivacy Considerations 12 AWS appreciates that its services are used in many different contexts for different business purposes and that there may be multiple parties i nvolved in the data lifecycle of personal information included in customer content stored or processed using AWS services For simplicity the guidance included in the table below assumes that in the context of the customer content stored or processed usi ng the AWS services the customer: • Collects personal information from its end users and determines the purpose for which the customer requires and will use the information • Has the capacity to control who can access update and use the personal information • Manages the relationship with the individual about whom the personal information relates including by communicating with the individual as required to comply with any re levant disclosure and consent requirements • Transfers the content into the AWS Region it selects AWS does not receive customer content in New Zealand Customers may in fact work with or rely on third parties to discharge these responsibilities but the cu stomer rather than AWS would manage its relationships with those third parties We summarize in the following table the IPP requirements that are particularly important for customers to consider if using AWS to store personal information collected from individuals in New Zealand We also discuss aspects of the AWS services relevant to these IPPs Table 1 — IPP requirements and considerations IPP Summary of IPP requirements Considerations IPP 1 – Purpose of collection of personal information Personal information may be collected only for lawful and necessary purposes Customer — The customer determines and controls when how and why it collects personal information from individuals and decides whether it will include that personal informatio n in IPP 2 – Source of personal information Persona l information may only be collected directly from the individual unless an exception applies Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 13 IPP 3 – Collection of Information Reasonable steps must be taken to ensure that when an individual’s personal information is collected they are aware of the purposes for which it is collected and certain other matters customer content it stores or processes using AWS services The customer may also need to ensure it discloses the purposes for which it collects personal information to the relevant individuals ; obtains the personal information from a permitted source ; and that it only uses the personal information for a permitted purpose As between the 
customer and AWS the customer has a relationship with the individuals whose personal information the custom er stores or processes on AWS and therefore the customer is able to communicate directly with them about collection of their personal information The customer rather than AWS will also know the scope of any notifications given to or consents obtained by the customer from such individuals relating to the collection of their personal information AWS — AWS does not know when a customer chooses to upload to AWS content that may contain personal information AWS also does not collect personal informatio n from individuals whose personal information is included in content a customer stores or processes using the AWS services and AWS has no IPP 4 – Manner of collection of personal information Personal information may only be collected fairly and in a lawful and non intrusive manner Amazon Web Services Using AWS in the Context of New Zealand Privacy C onsiderations 14 contact with those individuals Therefore AWS is not required and is unable in the circumstances to communicate with the relevant individuals AWS only accesses or uses customer content as necessary to provide the AWS services and does not access or use customer content for any other purpose without the customer’s consent IPP 5 – Storage and security of personal information Reasonable steps must be taken to protect the security of personal information Customer — Customers are responsible for security in the cloud including security of their content (and personal information included in their content) AWS — AWS is responsible for managing the security of the underlying cloud environment For a complete list of all the security measures built into the core AWS Cloud infrastructure and services see Best Practices for Security Identity & Compliance IPP 6 – Access to personal information Individuals are entitled to access personal information about them unless an exception applies Customer — Customers are responsible for their content in the cloud When a customer chooses to store or process content containing personal information using the AWS services the customer has control over the quality of that content and the customer retains access to and can correct it IPP 7 – Correction of personal information Individuals may request correction of personal information about them Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 15 In addition as between the customer and AWS the customer has a relationship with the individuals whose personal information is included in customer content stored or processed using the AWS services Therefore the customer rather than AWS is able to work with relevant individuals to provide them access to and the ability to correct their personal information AWS — AWS uses customer content to provide the AWS services selected by each customer to that customer and does not us e customer content for other purposes without the customer’s consent AWS has no contact with the individuals whose personal information is included in content a customer stores or processes using the AWS services Given this and the level of control cust omers enjoy over customer content AWS is not required and is unable in the circumstances to provide such individuals with access to or the ability to correct their personal information IPP 8 Accuracy to be checked before use or disclosure Reasonable steps must be taken to check accuracy completeness and relevance of personal information before it is used or 
disclosed Customer — When a customer chooses to store or process content containing personal information using the AWS services the customer has control over the quality of that content and the customer retains access to and can Amazon Web Services Using AWS in the Context of New Zealand Privacy Considera tions 16 correct it This means th at the customer must take all required steps to ensure that personal information included in customer content is accurate complete not misleading and kept up to date AWS — AWS does not collect personal information from individuals whose personal inform ation is included in content a customer stores or processes using the AWS services and AWS has no contact with those individuals Given this and the level of control customers enjoy over customer content AWS is not required and is unable in the circumstances to confirm the accuracy completeness and relevance of personal information before it is used or disclosed IPP 9 Personal information must not be kept longer than necessary Personal information should not be kept for longer than is required for the purposes for which the information may be lawfully used Customer — Because only the customer knows the purposes for collecting the personal information contained in the customer content it stores or processes using AWS services the custo mer is responsible for ensuring that such personal information is not kept for longer than required The customer should delete the personal information when it is no longer needed AWS — AWS services provide the customer with controls to enable the customer to delete content Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 17 stored on AWS as described in AWS documentation IPP 10 Limits on use of personal information Personal information may only be used or disclosed for the purpose for which it was collected for reasonable directly related purposes in a way which does not identify the individual or if another exception applies Customer — Given that the customer determines the purpose for collecting personal information and controls the use and disclosure of content that contains personal information the customer is responsible for ensuring how such personal information is used or disclosed The customer also controls the format structure and security of its content stored or processed using A WS services AWS — AWS uses customer content to provide the AWS services selected by each customer to that customer and does not use customer content for other purposes without the customer’s consent General — AWS services are structured such that custome rs maintain ownership and control of their content when using the AWS services regardless of which AWS Region they use IPP 11 Limits on disclosure of personal information IPP 12 – Disclosure of personal information outside New Zealand Personal information may only be disclosed outside of New Zealand if the recipient is subject to similar safeguards to those under the Privacy Act Customer — The customer can choose the AWS Region or Regions in which their content will be located and can choose to deploy their AWS services exclusively in a single AWS Region if preferred AWS services are structured so that a customer maintains effective control of customer content regardless of what AWS Region they Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 18 use for their content The customer shoul d consider whether it should disclose to individuals the locations in which it stores or 
processes their personal information and obtain any required consents relating to such locations from the relevant individuals if necessary As between the customer an d AWS the customer has a relationship with the individuals whose personal information is included in customer content stored or processed using the AWS services and therefore the customer is able to communicate directly with them about such matters AWS — AWS only stores and processes each customer’s content in the AWS Region(s) and using the services chosen by that customer and otherwise will not move customer content without that customer’s consent except as legally required If a customer chooses to store content in more than one AWS Region or copy or move content between AWS Regions that is solely the customer’s choice and the customer will continue to maintain effective control of its content wherever it is stored and processed General — It is important to highlight that an entity is only required to comply with IPP 12 when that entity discloses personal information to an overseas person or entity The Privacy Act states that where an agency (Entity A) Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 19 Privacy breaches Given that customers maintain control of their content when using AWS customers retain the responsibility to monitor their own environment for privacy breaches and to notify regulators and affected individuals as required under applicable law Only the customer is able to manage this responsibility holds information as an agent for anoth er agency (Entity B) for example for safe custody or processing then (i) the personal information is to be treated as being held by Entity B and not Entity A (ii) the transfer of the information to Entity A by Entity B is not a use or disclosure of th e information by Entity B and (iii) the transfer of the information and any information derived from the processing of that information to Entity B by Entity A is not a use or disclosure of the information by Entity A It also does not matter whether Entity A is outside New Zealand or holds the information outside New Zealand Using the AWS services to store or process personal information outside New Zealand at the choice of the customer may not be a disclosure of customer content Customers should seek legal advice regarding this if they feel it may be relevant to the way they propose to use the AWS services Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 20 A customer’s AWS access keys can be used as an example to help explain why the customer rather than AWS is best placed to manage this responsibility Customers control access keys and determine who is authorized to access their AWS account AWS does not have visibility of access keys or who is and who is not authorized to log into an account Therefore the customer is responsible for monitoring use misuse distribution or loss of access keys The Privacy Act introduced a notifiable privacy breach scheme that is effective from December 1 2020 The scheme aims to give affected individuals the opportunity to take steps to protect their personal information following a privacy breach that has caused or is likely to cause serious harm AWS offers two types of New Zealand Notifiable Data Breaches ( NZNDB ) Addend a to customers who are subject to the Privacy Act and are using AWS to store and process personal information covered by the scheme The NZNDB Addend a address customers’ need for notification if a security event 
affects their data The first ty pe the Account NZNDB Addendum applies only to the specific individual account that accepts the Account NZNDB Addendum The Account NZNDB Addendum must be separately accepted for each AWS account that a customer requires to be covered The second type th e Organizations NZNDB Addendum once accepted by a management account in AWS Organizations applies to the management account and all member accounts in that AWS Organization If a customer does not need or want to take advantage of the Organizations NZNDB Addendum they can still accept the Account NZNDB Addendum for individual accounts AWS has made both types of NZNDB Addendum available online as click through agreements in AWS Artifact (the customer facing audit and compliance portal that can be accessed from the AWS management console) In AWS Artifact customers can review and activate the relevant NZNDB Addendum for those AWS accounts they use to store and process personal information covered by t he scheme NZNDB Addend a frequently asked questions are available online at AWS Artifacts FAQs Considerations This whitepaper does not discuss other New Zealand privacy laws aside from the Privacy Act that may also be relevant to customers including state based laws and industry specific requirements The relevant privacy and data protection laws and regulations applicable to individual customers will depend on several factors including where a customer conducts business the industry in which it operates the type of Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 21 content they want to store where or from whom the content originates and where the content will be stored Customers concerned about their New Zealand privacy regulatory obl igations should first ensure they identify and understand the requirements applying to them and seek appropriate advice At AWS security is always our top priority We deliver services to millions of active customers including enterprises educational i nstitutions and government agencies in over 190 countries Our customers include financial services providers and healthcare providers and we are trusted with some of their most sensitive information AWS services are designed to give customers flexibilit y over how they configure and deploy their solutions as well as control over their content including where it is stored how it is stored and who has access to it AWS customers can build their own secure applications and store content securely on AWS Further reading To help customers further understand how they can address their privacy and data protection requirements customers are encouraged to read the risk compliance and security whitepapers best practices checklists and guidance published on t he AWS website This material can be found at AWS Compliance and AWS Cloud Security As of the date of publication specific whitepapers about privacy and da ta protection considerations are also available for the following countries or regions : • Australia • California • Germany • Hong Kong • Japan • Malaysia • Singapore • Philippines • Using AWS in the Context of Common Privacy & Data Protection Considera tions Amazon Web Services Using AWS in the Context of New Zealand Privacy Considerations 22 AWS Artifact Customers can review and download reports and details about more than 2500 security controls by using AWS Artifact the automated compliance reporting portal available in the AWS Manageme nt Console The AWS Artifact portal provides on demand access to AWS 
AWS also offers training to help customers learn how to design, develop, and operate available, efficient, and secure applications in the AWS Cloud, and to gain proficiency with AWS services and solutions. We offer free instructional videos, self-paced labs, and instructor-led classes. For more information on AWS training, see AWS Training and Certification. AWS certifications certify the technical skills and knowledge associated with best practices for building secure and reliable cloud-based applications using AWS technology. For more information on AWS certifications, see AWS Certification. If you require further information, please contact AWS or your local AWS account representative.

Document revisions

Date — Description
August 17, 2021 — Updated for technical accuracy
November 2020 — Fifth publication
May 2018 — Fourth publication
December 2016 — Third publication
January 2016 — Second publication
September 2014 — First publication

Notes

1. https://aws.amazon.com/compliance/soc-faqs/
2. http://d0.awsstatic.com/whitepapers/compliance/soc3_amazon_web_services.pdf
3. http://aws.amazon.com/compliance/iso-27001-faqs/
4. http://aws.amazon.com/compliance/iso-27017-faqs/
5. http://aws.amazon.com/compliance/iso-27018-faqs/
6. https://aws.amazon.com/compliance/iso-9001-faqs/
7. https://aws.amazon.com/compliance/pci-dss-level-1-faqs/
8. AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. AWS China (Beijing) and AWS China (Ningxia) are also isolated AWS Regions. Customers who want to use the AWS China (Beijing) and AWS China (Ningxia) Regions are required to sign up for a separate set of account credentials unique to those Regions.
9. For a real-time location map, see https://aws.amazon.com/about-aws/global-infrastructure/
General
CSA_Consensus_Assessments_Initiative_Questionnaire
CSA Consensus Assessments Initiative Questionnaire (CAIQ)

May 2022

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2022 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
CSA Consensus Assessments Initiative Questionnaire
Further Reading
Document Revisions

Abstract

The CSA Consensus Assessments Initiative Questionnaire provides a set of questions the CSA anticipates a cloud consumer and/or a cloud auditor would ask of a cloud provider. It provides a series of security control and process questions that can be used for a wide range of purposes, including cloud provider selection and security evaluation. AWS has completed this questionnaire with the answers below. The questionnaire has been completed using the current CSA CAIQ standard v4.0.2 (06.07.2021 update).

Introduction

The Cloud Security Alliance (CSA) is a "not-for-profit organization with a mission to promote the use of best practices for providing security assurance within Cloud Computing, and to provide education on the uses of Cloud Computing to help secure all other forms of computing." For more information, see https://cloudsecurityalliance.org/about/. A wide range of industry security practitioners, corporations, and associations participate in this organization to achieve its mission.

CSA Consensus Assessments Initiative Questionnaire

Each entry below records the CAIQ question ID and question, the CSP CAIQ answer with SSRM control ownership, the CSP implementation description and any CSC responsibilities (both optional/recommended fields), and the mapped CCM control ID, title, domain, and control specification.

A&A-01.1 — Are audit and assurance policies, procedures, and standards established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services, in order to minimize the risk of accidental or unauthorized access or disclosure of customer content.
CCM A&A-01 — Audit and Assurance Policy and Procedures (Audit & Assurance): Establish, document, approve, communicate, apply, evaluate, and maintain audit and assurance policies and procedures and standards. Review and update the policies and procedures at least annually.

A&A-01.2 — Are audit and assurance policies, procedures, and standards reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM A&A-01, as above.

A&A-02.1 — Are independent audit and assurance assessments conducted according to relevant standards at least annually?
Answer: Yes (CSP-owned). AWS has established a formal audit program that includes continual, independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment. Internal and external audits are planned and performed according to a documented audit schedule to review the continued performance of AWS against standards-based criteria, such as ISO/IEC 27001, and to identify improvement opportunities. Compliance reports from these assessments are made available to customers, enabling them to evaluate AWS. You can access assessments in AWS Artifact: https://aws.amazon.com/artifact. The AWS compliance reports identify the scope of AWS services and Regions assessed, as well as the assessor's attestation of compliance. Customers can perform vendor or supplier evaluations by leveraging these reports and certifications.
CCM A&A-02 — Independent Assessments (Audit & Assurance): Conduct independent audit and assurance assessments according to relevant standards at least annually.

A&A-03.1 — Are independent audit and assurance assessments performed according to risk-based plans and policies?
Answer: Yes (CSP-owned). AWS internal and external audit and assurance uses risk-based plans and approaches to conduct assessments at least annually. The AWS compliance program covers sections including, but not limited to, assessment methodology, security assessment and results, and non-conforming controls.
CCM A&A-03 — Risk Based Planning Assessment (Audit & Assurance): Perform independent audit and assurance assessments according to risk-based plans and policies.

A&A-04.1 — Is compliance verified regarding all relevant standards, regulations, legal/contractual, and statutory requirements applicable to the audit?
Answer: Yes (CSP-owned). AWS maintains security, governance, risk, and compliance relationships with internal and external parties to verify and monitor legal, regulatory, and contractual requirements. Should a new security directive be issued, AWS has documented plans in place to implement that directive within designated timeframes.
CCM A&A-04 — Requirements Compliance (Audit & Assurance): Verify compliance with all relevant standards, regulations, legal/contractual, and statutory requirements applicable to the audit.

A&A-05.1 — Is an audit management process defined and implemented to support audit planning, risk analysis, security control assessments, conclusions, remediation schedules, report generation, and reviews of past reports and supporting evidence?
Answer: Yes (CSP-owned). Internal and external audits are planned and performed according to the documented audit schedule to review the continued performance of AWS against standards-based criteria and to identify general improvement opportunities. Standards-based criteria include, but are not limited to, ISO/IEC 27001, the Federal Risk and Authorization Management Program (FedRAMP), the American Institute of Certified Public Accountants (AICPA) AT 801 (formerly Statement on Standards for Attestation Engagements [SSAE] 16), and the International Standards for Assurance Engagements No. 3402 (ISAE 3402) professional standards.
CCM A&A-05 — Audit Management Process (Audit & Assurance): Define and implement an audit management process to support audit planning, risk analysis, security control assessment, conclusion, remediation schedules, report generation, and review of past reports and supporting evidence.

A&A-06.1 — Is a risk-based corrective action plan to remediate audit findings established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). In alignment with ISO 27001, AWS maintains a risk management program to mitigate and manage risk. AWS management has a strategic business plan that includes risk identification and the implementation of controls to mitigate or manage risks. AWS management re-evaluates the strategic business plan at least biannually. This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks.
CCM A&A-06 — Remediation (Audit & Assurance): Establish, document, approve, communicate, apply, evaluate, and maintain a risk-based corrective action plan to remediate audit findings; review and report remediation status to relevant stakeholders.

A&A-06.2 — Is the remediation status of audit findings reviewed and reported to relevant stakeholders?
Answer: Yes (CSP-owned). AWS has established a formal audit program that includes continual, independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment. Internal and external audits are planned and performed according to a documented audit schedule to review the continued performance of AWS against standards-based criteria, such as ISO/IEC 27001, and to identify improvement opportunities. Standards-based criteria include, but are not limited to, the Federal Risk and Authorization Management Program (FedRAMP), the American Institute of Certified Public Accountants (AICPA) AT 801 (formerly Statement on Standards for Attestation Engagements [SSAE] 18), the International Standards for Assurance Engagements No. 3402 (ISAE 3402) professional standards, and the Payment Card Industry Data Security Standard (PCI DSS) 3.2.1. Compliance reports from these assessments are made available to customers, enabling them to evaluate AWS. You can access assessments in AWS Artifact: https://aws.amazon.com/artifact. The AWS compliance reports identify the scope of AWS services and Regions assessed, as well as the assessor's attestation of compliance. Customers can perform vendor or supplier evaluations by leveraging these reports and certifications.
CCM A&A-06, as above.

AIS-01.1 — Are application security policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained to guide appropriate planning, delivery, and support of the organization's application security capabilities?
Answer: Yes (CSP-owned). AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services, in order to minimize the risk of accidental or unauthorized access or disclosure of customer content.
CCM AIS-01 — Application and Interface Security Policy and Procedures (Application & Interface Security): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for application security to provide guidance to the appropriate planning, delivery, and support of the organization's application security capabilities. Review and update the policies and procedures at least annually.

AIS-01.2 — Are application security policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM AIS-01, as above.

AIS-02.1 — Are baseline requirements to secure different applications established, documented, and maintained?
Answer: Yes (CSP-owned). AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure that quality and security requirements are met with each release. The design of new services, or any significant changes to current services, follows secure software development practices and is controlled through a project management system with multidisciplinary participation. Prior to launch, each of the following requirements must be reviewed:
• Security risk assessment
• Threat modeling
• Security design reviews
• Secure code reviews
• Security testing
• Vulnerability/penetration testing
CCM AIS-02 — Application Security Baseline Requirements (Application & Interface Security): Establish, document, and maintain baseline requirements for securing different applications.

AIS-03.1 — Are technical and operational metrics defined and implemented according to business objectives, security requirements, and compliance obligations?
Answer: Yes (CSC-owned). See the response to Question ID AIS-02.1.
CCM AIS-03 — Application Security Metrics (Application & Interface Security): Define and implement technical and operational metrics in alignment with business objectives, security requirements, and compliance obligations.

AIS-04.1 — Is an SDLC process defined and implemented for application design, development, deployment, and operation per organizationally designed security requirements?
Answer: Yes (CSP-owned). See the response to Question ID AIS-02.1.
CCM AIS-04 — Secure Application Design and Development (Application & Interface Security): Define and implement an SDLC process for application design, development, deployment, and operation in accordance with security requirements defined by the organization.

AIS-05.1 — Does the testing strategy outline criteria to accept new information systems, upgrades, and new versions while ensuring application security, compliance adherence, and organizational speed of delivery goals?
Answer: Yes (CSP-owned). See the response to Question ID AIS-02.1.
CCM AIS-05 — Automated Application Security Testing (Application & Interface Security): Implement a testing strategy, including criteria for acceptance of new information systems, upgrades, and new versions, which provides application security assurance and maintains compliance while enabling organizational speed of delivery goals. Automate when applicable and possible.

AIS-05.2 — Is testing automated when applicable and possible?
Answer: Yes (CSP-owned). Where appropriate, a continuous deployment methodology is used to ensure changes are automatically built, tested, and pushed to production, with the goal of eliminating as many manual steps as possible. Continuous deployment seeks to eliminate the manual nature of this process and automate each step, allowing service teams to standardize the process and increase the efficiency with which they deploy code. In continuous deployment, an entire release process is a "pipeline" containing "stages."
CCM AIS-05, as above.

AIS-06.1 — Are strategies and capabilities established and implemented to deploy application code in a secure, standardized, and compliant manner?
Answer: Yes (CSP-owned). See the response to Question ID AIS-05.2.
CCM AIS-06 — Automated Secure Application Deployment (Application & Interface Security): Establish and implement strategies and capabilities for secure, standardized, and compliant application deployment. Automate where possible.

AIS-06.2 — Is the deployment and integration of application code automated where possible?
Answer: Yes (CSP-owned). Automated code analysis tools are run as part of the AWS software development lifecycle, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Our security risk assessment reviews begin during the design phase, and the engagement lasts through launch to ongoing operations. Refer to the AWS Overview of Security Processes whitepaper for further details: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf
CCM AIS-06, as above.

AIS-07.1 — Are application security vulnerabilities remediated following defined processes?
Answer: Yes (CSP-owned). Static code analysis tools are run as part of the standard build process, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Our security risk assessment reviews begin during the design phase, and the engagement lasts through launch to ongoing operations. Refer to the Best Practices for Security, Identity, & Compliance website for further details: https://aws.amazon.com/architecture/security-identity-compliance/
CCM AIS-07 — Application Vulnerability Remediation (Application & Interface Security): Define and implement a process to remediate application security vulnerabilities, automating remediation when possible.

AIS-07.2 — Is the remediation of application security vulnerabilities automated when possible?
Answer: Yes (CSP-owned). Automated code analysis tools are run as part of the AWS software development lifecycle, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Our security risk assessment reviews begin during the design phase, and the engagement lasts through launch to ongoing operations. Refer to the Best Practices for Security, Identity, & Compliance website for further details (URL as above).
CCM AIS-07, as above.

BCR-01.1 — Are business continuity management and operational resilience policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). The AWS business continuity policy is designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts, which include:
• Activation and notification
• Recovery
• Reconstitution
AWS business continuity mechanisms are designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts. AWS resiliency encompasses the processes and procedures to identify, respond to, and recover from a major event or incident within our environment.
CCM BCR-01 — Business Continuity Management Policy and Procedures (Business Continuity Management and Operational Resilience): Establish, document, approve, communicate, apply, evaluate, and maintain business continuity management and operational resilience policies and procedures. Review and update the policies and procedures at least annually.

BCR-01.2 — Are the policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM BCR-01, as above.

BCR-02.1 — Are criteria for developing business continuity and operational resiliency strategies and capabilities established based on business disruption and risk impacts?
Answer: Yes (Shared CSP and CSC). AWS business continuity policies and plans have been developed and tested in alignment with ISO 27001 standards. Refer to the ISO 27001 standard, Annex A, domain 17 for further details on AWS and business continuity.
CCM BCR-02 — Risk Assessment and Impact Analysis (Business Continuity Management and Operational Resilience): Determine the impact of business disruptions and risks to establish criteria for developing business continuity and operational resilience strategies and capabilities.

BCR-03.1 — Are strategies developed to reduce the impact of, withstand, and recover from business disruptions in accordance with risk appetite?
Answer: Yes (Shared CSP and CSC). AWS business continuity policies and plans have been developed and tested in alignment with ISO 27001 standards. Refer to the ISO 27001 standard, Annex A, domain 17 for further details on AWS and business continuity.
CCM BCR-03 — Business Continuity Strategy (Business Continuity Management and Operational Resilience): Establish strategies to reduce the impact of, withstand, and recover from business disruptions within risk appetite.

BCR-04.1 — Are operational resilience strategies and capability results incorporated to establish, document, approve, communicate, apply, evaluate, and maintain a business continuity plan?
Answer: Yes (Shared CSP and CSC). AWS business continuity policies and plans have been developed and tested in alignment with ISO 27001 standards. Refer to the ISO 27001 standard, Annex A, domain 17 for further details on AWS and business continuity.
CCM BCR-04 — Business Continuity Planning (Business Continuity Management and Operational Resilience): Establish, document, approve, communicate, apply, evaluate, and maintain a business continuity plan based on the results of the operational resilience strategies and capabilities.

BCR-05.1 — Is relevant documentation developed, identified, and acquired to support business continuity and operational resilience plans?
Answer: Yes (CSP-owned). The AWS business continuity plan details the three-phased approach that AWS has developed to recover and reconstitute the AWS infrastructure:
• Activation and notification phase
• Recovery phase
• Reconstitution phase
This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence, maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to errors and omissions.
CCM BCR-05 — Documentation (Business Continuity Management and Operational Resilience): Develop, identify, and acquire documentation that is relevant to support the business continuity and operational resilience programs. Make the documentation available to authorized stakeholders and review periodically.

BCR-05.2 — Is business continuity and operational resilience documentation available to authorized stakeholders?
Answer: Yes (CSP-owned). Information system documentation is made available internally to AWS personnel through the use of Amazon's intranet site. Refer to ISO 27001, Annex A, domain 12.
CCM BCR-05, as above.

BCR-05.3 — Is business continuity and operational resilience documentation reviewed periodically?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM BCR-05, as above.

BCR-06.1 — Are the business continuity and operational resilience plans exercised and tested at least annually and when significant changes occur?
Answer: Yes (CSP-owned). AWS business continuity policies and plans have been developed and tested at least annually, in alignment with ISO 27001 standards. Refer to the ISO 27001 standard, Annex A, domain 17 for further details on AWS and business continuity.
CCM BCR-06 — Business Continuity Exercises (Business Continuity Management and Operational Resilience): Exercise and test business continuity and operational resilience plans at least annually or upon significant changes.

BCR-07.1 — Do business continuity and resilience procedures establish communication with stakeholders and participants?
Answer: Yes (CSP-owned). The AWS business continuity policy provides a complete discussion of AWS services, roles and responsibilities, and AWS processes for managing an outage from detection to deactivation. AWS service teams create administrator documentation for their services and store the documents in internal AWS document repositories. Using these documents, teams provide initial training to new team members that covers their job duties, on-call responsibilities, and service-specific monitoring metrics and alarms, along with the intricacies of the service they are supporting. Once trained, service team members can assume on-call duties and be paged into an engagement as a resolver. In addition to the documentation stored in the repository, AWS also uses GameDay exercises to train coordinators and service teams in their roles and responsibilities.
CCM BCR-07 — Communication (Business Continuity Management and Operational Resilience): Establish communication with stakeholders and participants in the course of business continuity and resilience procedures.

BCR-08.1 — Is cloud data periodically backed up?
Answer: Yes (Shared CSP and CSC). AWS maintains a retention policy applicable to AWS internal data and system components in order to continue operations of AWS business and services. Critical AWS system components, including audit evidence and logging records, are replicated across multiple Availability Zones, and backups are maintained and monitored. Customers retain control and ownership of their content; when customers store content in a specific Region, it is not replicated outside that Region, and it is the customer's responsibility to replicate content across Regions if business needs require it. Customer responsibility: backup and retention policies are the responsibility of the customer. AWS offers best-practice resources to customers, including guidance and alignment to the Well-Architected Framework. Snapshots are AWS objects to which IAM users, groups, and roles can be assigned permissions, so that only authorized users can access Amazon backups. AWS Backup allows customers to centrally manage and automate backups across AWS services (see the sketch below). For additional details, refer to https://aws.amazon.com/backup
CCM BCR-08 — Backup (Business Continuity Management and Operational Resilience): Periodically back up data stored in the cloud. Ensure the confidentiality, integrity, and availability of the backup, and verify data restoration from backup for resiliency.

BCR-08.2 — Is the confidentiality, integrity, and availability of backup data ensured?
Answer: Yes (Shared CSP and CSC). See the response to Question ID BCR-08.1.
CCM BCR-08, as above.

BCR-08.3 — Can backups be restored appropriately for resiliency?
Answer: Yes (CSC-owned). AWS Backup allows customers to centrally manage and automate backups across AWS services. For additional details, refer to https://aws.amazon.com/backup
CCM BCR-08, as above.
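As a concrete illustration of the customer-side backup responsibility described in BCR-08.1, the following is a minimal sketch using boto3 and AWS Backup: it defines a daily backup plan and assigns resources to it by tag. The plan name, vault name, IAM role ARN, tag key, and retention period are assumptions chosen for the example, not values prescribed by AWS.

```python
# Illustrative sketch: define a daily AWS Backup plan and select
# resources by tag. Vault name, role ARN, and tag are example values.
import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "daily-35-day-retention",
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "Default",          # assumed existing vault
        "ScheduleExpression": "cron(0 5 ? * * *)",   # 05:00 UTC every day
        "Lifecycle": {"DeleteAfterDays": 35},        # example retention
    }],
})

backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        # Assumed role with AWS Backup permissions:
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        # Back up every resource tagged backup=true:
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "true",
        }],
    },
)
```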
BCR-09.1 — Is a disaster response plan established, documented, approved, applied, evaluated, and maintained to ensure recovery from natural and man-made disasters?
Answer: Yes (Shared CSP and CSC). The AWS business continuity policy is designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts, which include activation and notification, recovery, and reconstitution phases. AWS business continuity mechanisms are designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts. AWS resiliency encompasses the processes and procedures to identify, respond to, and recover from a major event or incident within our environment. AWS maintains a ubiquitous security control environment across its infrastructure. Each data center is built to physical, environmental, and security standards in an active-active configuration, employing an N+1 redundancy model to ensure system availability in the event of component failure. Components (N) have at least one independent backup component (+1), so the backup component is active in the operation even if other components are fully functional. In order to eliminate single points of failure, this model is applied throughout AWS, including network and data center implementation. Data centers are online and serving traffic; no data center is "cold." In case of failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites. AWS provides customers with the capability to implement a robust continuity plan, including the utilization of frequent server instance backups, data redundancy replication, and the flexibility to place instances and store data within multiple geographic Regions, as well as across multiple Availability Zones within each Region. Customer responsibility: customers are responsible for properly implementing contingency planning, training, and testing for their systems hosted on AWS.
CCM BCR-09 — Disaster Response Plan (Business Continuity Management and Operational Resilience): Establish, document, approve, communicate, apply, evaluate, and maintain a disaster response plan to recover from natural and man-made disasters. Update the plan at least annually or upon significant changes.

BCR-09.2 — Is the disaster response plan updated at least annually and when significant changes occur?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM BCR-09, as above.

BCR-10.1 — Is the disaster response plan exercised annually or when significant changes occur?
Answer: Yes (CSP-owned). AWS tests business continuity at least annually to ensure the effectiveness of the associated procedures and the organization's readiness. Testing consists of GameDay exercises that execute the activities that would be performed in an actual outage. AWS documents the results, including lessons learned and any corrective actions that were completed.
CCM BCR-10 — Response Plan Exercise (Business Continuity Management and Operational Resilience): Exercise the disaster response plan annually or upon significant changes, including, if possible, local emergency authorities.

BCR-10.2 — Are local emergency authorities included, if possible, in the exercise?
Answer: No (CSP-owned).
CCM BCR-10, as above.

BCR-11.1 — Is business-critical equipment supplemented with redundant equipment independently located at a reasonable minimum distance in accordance with applicable industry standards?
Answer: Yes (CSP-owned). AWS maintains a ubiquitous security control environment across its infrastructure. Each data center is built to physical, environmental, and security standards in an active-active configuration, employing an N+1 redundancy model to ensure system availability in the event of component failure. Components (N) have at least one independent backup component (+1), so the backup component is active in the operation even if other components are fully functional. In order to eliminate single points of failure, this model is applied throughout AWS, including network and data center implementation. Data centers are online and serving traffic; no data center is "cold." In case of failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites.
CCM BCR-11 — Equipment Redundancy (Business Continuity Management and Operational Resilience): Supplement business-critical equipment with redundant equipment independently located at a reasonable minimum distance in accordance with applicable industry standards.

CCC-01.1 — Are risk management policies and procedures associated with changing organizational assets, including applications, systems, infrastructure, configuration, etc., established, documented, approved, communicated, applied, evaluated, and maintained (regardless of whether asset management is internal or external)?
Answer: Yes (CSP-owned). AWS applies a systematic approach to managing change to ensure that all changes to a production environment are reviewed, tested, and approved. The AWS change management approach requires that the following steps be completed before a change is deployed to the production environment:
1. Document and communicate the change via the appropriate AWS change management tool.
2. Plan implementation of the change, and rollback procedures, to minimize disruption.
3. Test the change in a logically segregated, non-production environment.
4. Complete a peer review of the change with a focus on business impact and technical rigor; the review should include a code review.
5. Attain approval for the change by an authorized individual.
Where appropriate, a continuous deployment methodology is used to ensure changes are automatically built, tested, and pushed to production, with the goal of eliminating as many manual steps as possible. Continuous deployment seeks to eliminate the manual nature of this process and automate each step, allowing service teams to standardize the process and increase the efficiency with which they deploy code. In continuous deployment, an entire release process is a "pipeline" containing "stages."
CCM CCC-01 — Change Management Policy and Procedures (Change Control and Configuration Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for managing the risks associated with applying changes to organization assets, including application, systems, infrastructure, configuration, etc., regardless of whether the assets are managed internally or externally (i.e., outsourced). Review and update the policies and procedures at least annually.

CCC-01.2 — Are the policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM CCC-01, as above.

CCC-02.1 — Is a defined quality change control, approval, and testing process (with established baselines, testing, and release standards) followed?
Answer: Yes (CSP-owned). See the response to Question ID CCC-01.1.
CCM CCC-02 — Quality Testing (Change Control and Configuration Management): Follow a defined quality change control, approval, and testing process with established baselines, testing, and release standards.

CCC-03.1 — Are risks associated with changing organizational assets (including applications, systems, infrastructure, configuration, etc.) managed, regardless of whether asset management occurs internally or externally (i.e., outsourced)?
Answer: Yes (CSP-owned). See the response to Question ID CCC-01.1.
CCM CCC-03 — Change Management Technology (Change Control and Configuration Management): Manage the risks associated with applying changes to organization assets, including application, systems, infrastructure, configuration, etc., regardless of whether the assets are managed internally or externally (i.e., outsourced).

CCC-04.1 — Is the unauthorized addition, removal, update, and management of organization assets restricted?
Answer: Yes (CSP-owned). Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. Physical access points to server locations are recorded by closed-circuit television cameras (CCTV), as defined in the AWS Data Center Physical Security Policy.
CCM CCC-04 — Unauthorized Change Protection (Change Control and Configuration Management): Restrict the unauthorized addition, removal, update, and management of organization assets.

CCC-05.1 — Are provisions to limit changes that directly impact CSC-owned environments and require tenants to explicitly authorize requests included within the service level agreements (SLAs) between CSPs and CSCs?
Answer: No (CSP-owned). AWS notifies customers of changes to the AWS service offering in accordance with the commitment set forth in the AWS Customer Agreement. AWS continuously evolves and improves our existing services and frequently adds new services. Our services are controlled using APIs. If we change or discontinue any API used to make calls to the services, we will continue to offer the existing API for 12 months. Additionally, AWS maintains a public Service Health Dashboard to provide customers with the real-time operational status of our services at http://status.aws.amazon.com/
CCM CCC-05 — Change Agreements (Change Control and Configuration Management): Include provisions limiting changes directly impacting CSC-owned environments/tenants to explicitly authorized requests within service level agreements between CSPs and CSCs.

CCC-06.1 — Are change management baselines established for all relevant authorized changes on organizational assets?
Answer: Yes (CSP-owned). See the response to Question ID CCC-01.1.
CCM CCC-06 — Change Management Baseline (Change Control and Configuration Management): Establish change management baselines for all relevant authorized changes on organization assets.

CCC-07.1 — Are detection measures implemented with proactive notification if changes deviate from established baselines?
Answer: Yes (CSP-owned). See the response to Question ID CCC-08.1. (A customer-side analogue is sketched below.)
CCM CCC-07 — Detection of Baseline Deviation (Change Control and Configuration Management): Implement detection measures with proactive notification in case of changes deviating from the established baseline.
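CCC-07.1 concerns AWS's internal detection measures; on the customer side of the shared responsibility model, a roughly analogous pattern is to query AWS CloudTrail for recent change events and compare them against an expected baseline. The following is an illustrative sketch, not AWS's internal mechanism: it assumes CloudTrail is enabled in the account, and the choice of event name and 24-hour window are example values.

```python
# Illustrative customer-side sketch: query CloudTrail for recent
# security-group ingress changes so deviations from an expected
# baseline can be flagged. Assumes CloudTrail is enabled.
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

start = datetime.now(timezone.utc) - timedelta(hours=24)
events = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "AuthorizeSecurityGroupIngress",  # example event
    }],
    StartTime=start,
)
for event in events["Events"]:
    # In practice these hits would feed a notification channel.
    print(event["EventTime"], event.get("Username", "?"), event["EventName"])
```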
CCC-08.1 — Is a procedure implemented to manage exceptions, including emergencies, in the change and configuration process?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM CCC-08 — Exception Management (Change Control and Configuration Management): Implement a procedure for the management of exceptions, including emergencies, in the change and configuration process. Align the procedure with the requirements of the GRC-04 Policy Exception Process.

CCC-08.2 — Is the procedure aligned with the requirements of the GRC-04 Policy Exception Process?
Answer: Yes (CSP-owned). See the response to Question ID CCC-08.1.
CCM CCC-08, as above.

CCC-09.1 — Is a process to proactively roll back changes to a previously known "good state" defined and implemented in case of errors or security concerns?
Answer: Yes (CSP-owned). See the response to Question ID CCC-01.1.
CCM CCC-09 — Change Restoration (Change Control and Configuration Management): Define and implement a process to proactively roll back changes to a previous known good state in case of errors or security concerns.

CEK-01.1 — Are cryptography, encryption, and key management policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (Shared CSP and CSC). Internally, AWS establishes and manages cryptographic keys for required cryptography employed within the AWS infrastructure. AWS produces, controls, and distributes symmetric cryptographic keys using NIST-approved key management technology and processes in the AWS information system. An AWS-developed secure key and credential manager is used to create, protect, and distribute symmetric keys, AWS credentials needed on hosts, RSA public/private keys, and X.509 certificates. Customer responsibility: AWS customers are responsible for managing encryption keys within their AWS environments. Customers can leverage AWS services such as AWS KMS (https://aws.amazon.com/kms/) and AWS CloudHSM (https://aws.amazon.com/cloudhsm/) to manage the lifecycle of their keys according to internal policy requirements.
CCM CEK-01 — Encryption and Key Management Policy and Procedures (Cryptography, Encryption & Key Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for cryptography, encryption, and key management. Review and update the policies and procedures at least annually.

CEK-01.2 — Are cryptography, encryption, and key management policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM CEK-01, as above.

CEK-02.1 — Are cryptography, encryption, and key management roles and responsibilities defined and implemented?
Answer: Yes (CSC-owned). See the response to CEK-01.1.
CCM CEK-02 — CEK Roles and Responsibilities (Cryptography, Encryption & Key Management): Define and implement cryptographic, encryption, and key management roles and responsibilities.

CEK-03.1 — Are data at-rest and in-transit cryptographically protected using cryptographic libraries certified to approved standards?
Answer: NA (CSC-owned). AWS allows customers to use their own encryption mechanisms (for storage and in-transit) for nearly all services, including S3, EBS, and EC2; an example sketch follows below. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS, and to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/
CCM CEK-03 — Data Encryption (Cryptography, Encryption & Key Management): Provide cryptographic protection to data at-rest and in-transit, using cryptographic libraries certified to approved standards.
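As an illustration of the customer-managed encryption options mentioned in CEK-03.1, the following minimal sketch stores an object in Amazon S3 encrypted at rest with a KMS key, while the SDK's HTTPS endpoints protect it in transit. The bucket name, object key, and KMS key alias are assumptions chosen for the example.

```python
# Illustrative sketch: store an object in Amazon S3 encrypted at rest
# with a customer-managed KMS key; boto3 uses HTTPS (TLS) in transit
# by default. Bucket and key alias are example values.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-bucket",                # assumed existing bucket
    Key="reports/2022/q2.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms",         # SSE with a KMS key
    SSEKMSKeyId="alias/example-data-key",   # assumed existing key alias
)
```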
CEK-04.1 — Are appropriate data protection encryption algorithms used that consider data classification, associated risks, and encryption technology usability?
Answer: NA (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure.
CCM CEK-04 — Encryption Algorithm (Cryptography, Encryption & Key Management): Use encryption algorithms that are appropriate for data protection, considering the classification of data, associated risks, and usability of the encryption technology.

CEK-05.1 — Are standard change management procedures established to review, approve, implement, and communicate cryptography, encryption, and key management technology changes that accommodate internal and external sources?
Answer: Yes (Shared CSP and CSC). See the response to CEK-01.1. Customer responsibility: AWS customers are responsible for managing encryption keys within their AWS environments according to their internal policy requirements.
CCM CEK-05 — Encryption Change Management (Cryptography, Encryption & Key Management): Establish a standard change management procedure to accommodate changes from internal and external sources for review, approval, implementation, and communication of cryptographic, encryption, and key management technology changes.

CEK-06.1 — Are changes to cryptography, encryption, and key management related systems, policies, and procedures managed and adopted in a manner that fully accounts for downstream effects of proposed changes, including residual risk, cost, and benefits analysis?
Answer: Yes (Shared CSP and CSC). See the response to CEK-01.1. Customer responsibility: AWS allows customers to use their own encryption mechanisms for nearly all services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS, and to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/
CCM CEK-06 — Encryption Change Cost Benefit Analysis (Cryptography, Encryption & Key Management): Manage and adopt changes to cryptography-, encryption-, and key management-related systems (including policies and procedures) that fully account for downstream effects of proposed changes, including residual risk, cost, and benefits analysis.

CEK-07.1 — Is a cryptography, encryption, and key management risk program established and maintained that includes risk assessment, risk treatment, risk context, monitoring, and feedback provisions?
Answer: Yes (CSP-owned). AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization. AWS management reviews and evaluates the risks identified in the risk management program at least annually. The risk management program encompasses the following phases:
• Discovery — listing the risks (threats and vulnerabilities) that exist in the environment; this phase provides a basis for all other risk management activities.
• Research — considering the potential impact of identified risks to the business and their likelihood of occurrence, including an evaluation of internal control effectiveness.
• Evaluate — ensuring that controls, processes, and other physical and virtual safeguards are in place to prevent and detect identified and assessed risks.
• Resolve — producing risk reports that provide managers with the data they need to make effective business decisions and to comply with internal policies and applicable regulations.
• Monitor — performing monitoring activities to evaluate whether processes, initiatives, functions, and/or activities are mitigating the risk as designed.
CCM CEK-07 — Encryption Risk Management (Cryptography, Encryption & Key Management): Establish and maintain an encryption and key management risk program that includes provisions for risk assessment, risk treatment, risk context, monitoring, and feedback.

CEK-08.1 — Are CSPs providing CSCs with the capacity to manage their own data encryption keys?
Answer: Yes (CSC-owned). AWS allows customers to use their own encryption mechanisms for nearly all services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/); a minimal sketch follows below. Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security
CCM CEK-08 — CSC Key Management Capability (Cryptography, Encryption & Key Management): CSPs must provide the capability for CSCs to manage their own data encryption keys.
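The capability described in CEK-08.1 can be exercised programmatically. The following is a minimal sketch of a customer creating and using its own KMS key with boto3; the description, alias, and plaintext are example values, and error handling is omitted for brevity.

```python
# Illustrative sketch: a customer creating and using its own KMS key.
# Alias and plaintext are example values only.
import boto3

kms = boto3.client("kms")

key = kms.create_key(Description="customer-managed data key (example)")
key_id = key["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/example-data-key", TargetKeyId=key_id)

# Encrypt and decrypt a small payload under the new key.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"example secret")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"example secret"
```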
CEK-09.1: Are encryption and key management systems, policies, and processes audited with a frequency proportional to the system's risk exposure, and after any security event?
Answer: Yes (CSP-owned). AWS has established a formal, periodic audit program that includes continual, independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM control CEK-09, Encryption and Key Management Audit (Cryptography, Encryption & Key Management): Audit encryption and key management systems, policies, and processes with a frequency that is proportional to the risk exposure of the system, with audits occurring preferably continuously but at least annually and after any security event(s).

CEK-09.2: Are encryption and key management systems, policies, and processes audited (preferably continuously but at least annually)?
Answer: Yes (CSP-owned). See the response to CEK-09.1.
CCM control CEK-09, as quoted above.

CEK-10.1: Are cryptographic keys generated using industry-accepted and approved cryptographic libraries that specify algorithm strength and random number generator specifications?
Answer: Yes (shared, CSP and CSC). AWS allows customers to use their own encryption mechanisms for nearly all services, including Amazon S3, Amazon EBS, and Amazon EC2. In addition, customers can use AWS Key Management Service (AWS KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/); refer to the AWS SOC reports for more details on KMS. AWS establishes and manages cryptographic keys for the cryptography required within the AWS infrastructure. AWS produces, controls, and distributes symmetric cryptographic keys using NIST-approved key management technology and processes in the AWS information system. An AWS-developed secure key and credential manager is used to create, protect, and distribute symmetric keys and is used to secure and distribute AWS credentials needed on hosts, RSA public/private keys, and X.509 certificates. AWS cryptographic processes are reviewed by independent third-party auditors for continued compliance with SOC, PCI DSS, and ISO 27001. CSC responsibilities: AWS customers are responsible for managing encryption keys within their AWS environments according to their internal policy requirements.
CCM control CEK-10, Key Generation (Cryptography, Encryption & Key Management): Generate cryptographic keys using industry-accepted cryptographic libraries, specifying the algorithm strength and the random number generator used.

CEK-11.1: Are private keys provisioned for a unique purpose managed, and is cryptography secret?
Answer: N/A (CSC-owned). Customers determine whether they want to use AWS KMS to store encryption keys in the cloud or to use other mechanisms (on-premises HSMs or other key management technologies) to store keys within their on-premises environments.
CCM control CEK-11, Key Purpose (Cryptography, Encryption & Key Management): Manage cryptographic secret and private keys that are provisioned for a unique purpose.

CEK-12.1: Are cryptographic keys rotated based on a cryptoperiod calculated while considering information disclosure risks and legal and regulatory requirements?
Answer: N/A (CSC-owned). See the response to CEK-08.1.
CCM control CEK-12, Key Rotation (Cryptography, Encryption & Key Management): Rotate cryptographic keys in accordance with the calculated cryptoperiod, which includes provisions for considering the risk of information disclosure and legal and regulatory requirements.

CEK-13.1: Are cryptographic keys revoked and removed before the end of the established cryptoperiod (when a key is compromised or an entity is no longer part of the organization), per defined, implemented, and evaluated processes, procedures, and technical measures that include legal and regulatory requirement provisions?
Answer: N/A (CSC-owned). See the response to CEK-08.1.
CCM control CEK-13, Key Revocation (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures to revoke and remove cryptographic keys prior to the end of the established cryptoperiod, when a key is compromised, or when an entity is no longer part of the organization, which include provisions for legal and regulatory requirements.

CEK-14.1: Are processes, procedures, and technical measures to destroy unneeded keys defined, implemented, and evaluated to address key destruction outside secure environments and revocation of keys stored in hardware security modules (HSMs), and do they include applicable legal and regulatory requirement provisions?
Answer: N/A (CSC-owned). See the response to CEK-08.1.
CCM control CEK-14, Key Destruction (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures to destroy keys stored outside a secure environment and to revoke keys stored in hardware security modules (HSMs) when they are no longer needed, which include provisions for legal and regulatory requirements.
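CEK-12 through CEK-14 are likewise customer-owned when AWS KMS is used. The sketch below (Python with boto3; the key ARN is a placeholder) shows the corresponding customer-side API calls for rotation, revocation, and destruction. These are independent controls invoked at different points in a key's lifecycle, not a single sequence to run as-is.

import boto3

kms = boto3.client("kms")
key_id = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder

# CEK-12: enable automatic rotation of the key material (annual by default).
kms.enable_key_rotation(KeyId=key_id)
print(kms.get_key_rotation_status(KeyId=key_id))

# CEK-13: revoke a compromised or retired key by disabling it immediately.
kms.disable_key(KeyId=key_id)

# CEK-14: destroy the key material after a mandatory 7-30 day waiting period,
# during which the scheduled deletion can still be cancelled.
kms.schedule_key_deletion(KeyId=key_id, PendingWindowInDays=30)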
CEK-15.1: Are processes, procedures, and technical measures to create keys in a pre-activated state (i.e., when they have been generated but not authorized for use) defined, implemented, and evaluated to include legal and regulatory requirement provisions?
Answer: N/A (CSC-owned). See the response to CEK-08.1.
CCM control CEK-15, Key Activation (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures to create keys in a pre-activated state, when they have been generated but not authorized for use, which include provisions for legal and regulatory requirements.

CEK-16.1: Are processes, procedures, and technical measures to monitor, review, and approve key transitions (e.g., from any state to/from suspension) defined, implemented, and evaluated to include legal and regulatory requirement provisions?
Answer: N/A (CSC-owned). See the response to CEK-08.1.
CCM control CEK-16, Key Suspension (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures to monitor, review, and approve key transitions from any state to/from suspension, which include provisions for legal and regulatory requirements.

CEK-17.1: Are processes, procedures, and technical measures to deactivate keys (at the time of their expiration date) defined, implemented, and evaluated to include legal and regulatory requirement provisions?
Answer: N/A (CSC-owned). See the response to CEK-08.1.
CCM control CEK-17, Key Deactivation (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures to deactivate keys at the time of their expiration date, which include provisions for legal and regulatory requirements.

CEK-18.1: Are processes, procedures, and technical measures to manage archived keys in a secure repository (requiring least-privilege access) defined, implemented, and evaluated to include legal and regulatory requirement provisions?
Answer: N/A (CSC-owned). See the response to CEK-08.1.
CCM control CEK-18, Key Archival (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures to manage archived keys in a secure repository requiring least-privilege access, which include provisions for legal and regulatory requirements.

CEK-19.1: Are processes, procedures, and technical measures to encrypt information in specific scenarios (e.g., only in controlled circumstances, and thereafter only for data decryption and never for encryption) defined, implemented, and evaluated to include legal and regulatory requirement provisions?
Answer: N/A (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight into what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, and how it is used and protected from disclosure.
CCM control CEK-19, Key Compromise (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures to use compromised keys to encrypt information only in controlled circumstances, and thereafter exclusively for decrypting data and never for encrypting data, which include provisions for legal and regulatory requirements.
CEK-20.1: Are processes, procedures, and technical measures to assess operational continuity risks (versus the risk of losing control of keying material and exposing protected data) defined, implemented, and evaluated to include legal and regulatory requirement provisions?
Answer: Yes (shared, CSP and CSC). AWS establishes and manages cryptographic keys for the cryptography required within the AWS infrastructure. AWS produces, controls, and distributes symmetric cryptographic keys using NIST-approved key management technology and processes in the AWS information system. An AWS-developed secure key and credential manager is used to create, protect, and distribute symmetric keys and is used to secure and distribute AWS credentials needed on hosts, RSA public/private keys, and X.509 certificates. AWS cryptographic processes are reviewed by independent third-party auditors for continued compliance with SOC, PCI DSS, and ISO 27001. AWS allows customers to use their own encryption mechanisms for nearly all services, including Amazon S3, Amazon EBS, and Amazon EC2. In addition, customers can use AWS Key Management Service (AWS KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/); refer to the AWS SOC reports for more details on KMS.
CCM control CEK-20, Key Recovery (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures to assess the risk to operational continuity versus the risk of the keying material, and the information it protects, being exposed if control of the keying material is lost, which include provisions for legal and regulatory requirements.

CEK-21.1: Are key management system processes, procedures, and technical measures defined, implemented, and evaluated to track and report all cryptographic materials and status changes, including legal and regulatory requirement provisions?
Answer: N/A (CSC-owned). See the response to CEK-08.1.
CCM control CEK-21, Key Inventory Management (Cryptography, Encryption & Key Management): Define, implement, and evaluate processes, procedures, and technical measures in order for the key management system to track and report all cryptographic materials and changes in status, which include provisions for legal and regulatory requirements.
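For CEK-21, a CSC using AWS KMS can build its own inventory of cryptographic material and status changes from the KMS APIs. A minimal sketch of such a report, assuming the caller has kms:ListKeys, kms:DescribeKey, and kms:GetKeyRotationStatus permissions:

import boto3

kms = boto3.client("kms")

# Enumerate all KMS keys visible to the account and record their status.
paginator = kms.get_paginator("list_keys")
for page in paginator.paginate():
    for entry in page["Keys"]:
        meta = kms.describe_key(KeyId=entry["KeyId"])["KeyMetadata"]
        line = f"{meta['KeyId']} state={meta['KeyState']} manager={meta['KeyManager']}"
        if meta["KeyManager"] == "CUSTOMER":
            try:
                status = kms.get_key_rotation_status(KeyId=meta["KeyId"])
                line += f" rotation={status['KeyRotationEnabled']}"
            except kms.exceptions.UnsupportedOperationException:
                line += " rotation=n/a"  # asymmetric or imported key material
        print(line)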
DCS-01.1: Are policies and procedures for the secure disposal of equipment used outside the organization's premises established, documented, approved, communicated, enforced, and maintained?
Answer: Yes (CSP-owned). Environments used for the delivery of AWS services are managed by authorized personnel and are located in AWS-managed data centers. Media handling controls for the data centers are managed by AWS in alignment with the AWS Media Protection Policy. This policy includes procedures for access, marking, storage, transport, and sanitization. Live media transported outside of data center secure zones is escorted by authorized personnel.
CCM control DCS-01, Off-Site Equipment Disposal Policy and Procedures (Datacenter Security): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the secure disposal of equipment used outside the organization's premises. If the equipment is not physically destroyed, a data destruction procedure that renders recovery of information impossible must be applied. Review and update the policies and procedures at least annually.

DCS-01.2: Is a data destruction procedure applied that renders information recovery impossible if equipment is not physically destroyed?
Answer: Yes (CSP-owned). When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in NIST 800-88 ("Guidelines for Media Sanitization") as part of the decommissioning process. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/.
CCM control DCS-01, as quoted above.

DCS-01.3: Are policies and procedures for the secure disposal of equipment used outside the organization's premises reviewed and updated at least annually?
Answer: Yes. Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM control DCS-01, as quoted above.
DCS-02.1: Are policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location established, documented, approved, communicated, implemented, enforced, and maintained?
Answer: Yes. AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services in order to minimize the risk of accidental or unauthorized access to or disclosure of customer content.
CCM control DCS-02, Off-Site Transfer Authorization Policy and Procedures (Datacenter Security): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location. The relocation or transfer request requires written or cryptographically verifiable authorization. Review and update the policies and procedures at least annually.

DCS-02.2: Does a relocation or transfer request require written or cryptographically verifiable authorization?
Answer: Yes. See the response to DCS-01.1.
CCM control DCS-02, as quoted above.

DCS-02.3: Are policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM control DCS-02, as quoted above.
DCS-03.1: Are policies and procedures for maintaining a safe and secure working environment (in offices, rooms, and facilities) established, documented, approved, communicated, enforced, and maintained?
Answer: Yes (CSP-owned). AWS engages with external certifying bodies and independent auditors to review and validate our compliance with compliance frameworks. The AWS SOC reports provide additional details on the specific physical security control activities executed by AWS. Refer to the ISO 27001 standard, Annex A, domain 11, for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM control DCS-03, Secure Area Policy and Procedures (Datacenter Security): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for maintaining a safe and secure working environment in offices, rooms, and facilities. Review and update the policies and procedures at least annually.

DCS-03.2: Are policies and procedures for maintaining safe, secure working environments (e.g., offices, rooms) reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM control DCS-03, as quoted above.

DCS-04.1: Are policies and procedures for the secure transportation of physical media established, documented, approved, communicated, enforced, evaluated, and maintained?
Answer: Yes (CSP-owned). See the response to DCS-01.1.
CCM control DCS-04, Secure Media Transportation Policy and Procedures (Datacenter Security): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the secure transportation of physical media. Review and update the policies and procedures at least annually.

DCS-04.2: Are policies and procedures for the secure transportation of physical media reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM control DCS-04, as quoted above.

DCS-05.1: Is the classification and documentation of physical and logical assets based on the organizational business risk?
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, AWS assets are assigned an owner and are tracked and monitored by AWS personnel with AWS proprietary inventory management tools.
CCM control DCS-05, Assets Classification (Datacenter Security): Classify and document the physical and logical assets (e.g., applications) based on the organizational business risk.
DCS-06.1: Are all relevant physical and logical assets at all CSP sites cataloged and tracked within a secured system?
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, AWS hardware assets are assigned an owner and are tracked and monitored by AWS personnel with AWS proprietary inventory management tools.
CCM control DCS-06, Assets Cataloguing and Tracking (Datacenter Security): Catalogue and track all relevant physical and logical assets located at all of the CSP's sites within a secured system.

DCS-07.1: Are physical security perimeters implemented to safeguard personnel, data, and information systems?
Answer: Yes (CSP-owned). Physical security controls include, but are not limited to, perimeter controls such as fencing, walls, security staff, video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. The AWS SOC reports provide additional details on the specific control activities executed by AWS. Refer to the ISO 27001 standard, Annex A, domain 11, for further information. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. For more information on the design, layout, and operations of our data centers, see the AWS Data Center Overview.
CCM control DCS-07, Controlled Access Points (Datacenter Security): Implement physical security perimeters to safeguard personnel, data, and information systems. Establish physical security perimeters between the administrative and business areas and the data storage and processing facilities areas.

DCS-07.2: Are physical security perimeters established between administrative and business areas and data storage and processing facilities?
Answer: Yes (CSP-owned). See the response to DCS-07.1.
CCM control DCS-07, as quoted above.

DCS-08.1: Is equipment identification used as a method for connection authentication?
Answer: Yes (CSP-owned). AWS manages equipment identification in alignment with the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM control DCS-08, Equipment Identification (Datacenter Security): Use equipment identification as a method for connection authentication.
DCS-09.1: Are solely authorized personnel able to access secure areas, with all ingress and egress areas restricted, documented, and monitored by physical access control mechanisms?
Answer: Yes (CSP-owned). Physical access is strictly controlled both at the perimeter and at building ingress points and includes, but is not limited to, professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. Physical access points to server locations are recorded by closed-circuit television cameras (CCTV), as defined in the AWS Data Center Physical Security Policy.
CCM control DCS-09, Secure Area Authorization (Datacenter Security): Allow only authorized personnel access to secure areas, with all ingress and egress points restricted, documented, and monitored by physical access control mechanisms. Retain access control records on a periodic basis, as deemed appropriate by the organization.

DCS-09.2: Are access control records retained periodically, as deemed appropriate by the organization?
Answer: Yes (CSP-owned). Authentication logging aggregates sensitive logs from EC2 hosts and stores them in Amazon S3. The log integrity checker inspects logs to ensure they were uploaded to S3 unchanged by comparing them with local manifest files. Access and privileged command auditing logs record every automated and interactive login to the systems, as well as every privileged command executed. External access to data stored in Amazon S3 is logged, and the logs are retained for at least 90 days, including relevant access request information such as the data accessor IP address, object, and operation.
CCM control DCS-09, as quoted above.
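DCS-09.2 describes AWS-internal access logging. On the customer side of the shared responsibility model, a comparable access-record trail for data stored in Amazon S3 can be captured with S3 server access logging. A minimal sketch; both bucket names are placeholders, and the target bucket must already grant the Amazon S3 logging service permission to write to it.

import boto3

s3 = boto3.client("s3")

# Deliver access logs for the data bucket into a dedicated logging bucket.
s3.put_bucket_logging(
    Bucket="example-data-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-access-log-bucket",
            "TargetPrefix": "s3-access-logs/example-data-bucket/",
        }
    },
)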
DCS-10.1: Are external perimeter datacenter surveillance systems, and surveillance systems at all ingress and egress points, implemented, maintained, and operated?
Answer: Yes (CSP-owned). See the response to DCS-09.1.
CCM control DCS-10, Surveillance System (Datacenter Security): Implement, maintain, and operate datacenter surveillance systems at the external perimeter and at all ingress and egress points to detect unauthorized ingress and egress attempts.

DCS-11.1: Are datacenter personnel trained to respond to unauthorized access or egress attempts?
Answer: Yes (CSP-owned). See the response to DCS-09.1.
CCM control DCS-11, Unauthorized Access Response Training (Datacenter Security): Train datacenter personnel to respond to unauthorized ingress or egress attempts.

DCS-12.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure risk-based protection of power and telecommunication cables from interception, interference, or damage threats at all facilities, offices, and rooms?
Answer: Yes (CSP-owned). AWS equipment is protected from utility service outages in alignment with the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. The AWS SOC reports provide additional details on the controls in place to minimize the effect of a malfunction or physical disaster on the computer and data center facilities.
CCM control DCS-12, Cabling Security (Datacenter Security): Define, implement, and evaluate processes, procedures, and technical measures that ensure risk-based protection of power and telecommunication cables from a threat of interception, interference, or damage at all facilities, offices, and rooms.

DCS-13.1: Are data center environmental control systems, designed to monitor, maintain, and test that on-site temperature and humidity conditions fall within accepted industry standards, effectively implemented and maintained?
Answer: Yes (CSP-owned). AWS data centers incorporate physical protection against environmental risks. AWS' physical protection against environmental risks has been validated by an independent auditor and certified as being in alignment with ISO 27002 best practices. Refer to the ISO 27001 standard, Annex A, domain 11, and to the data center controls overview at https://aws.amazon.com/compliance/data-center/controls/.
CCM control DCS-13, Environmental Systems (Datacenter Security): Implement and maintain data center environmental control systems that monitor, maintain, and test, for continual effectiveness, the temperature and humidity conditions within accepted industry standards.

DCS-14.1: Are utility services secured, monitored, maintained, and tested at planned intervals for continual effectiveness?
Answer: Yes (CSP-owned). AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. The AWS SOC reports provide additional details on the controls in place to minimize the effect of a malfunction or physical disaster on the computer and data center facilities. Refer to the data center controls overview at https://aws.amazon.com/compliance/data-center/controls/.
CCM control DCS-14, Secure Utilities (Datacenter Security): Secure, monitor, maintain, and test utility services for continual effectiveness at planned intervals.
DCS-15.1: Is business-critical equipment segregated from locations subject to a high probability of environmental risk events?
Answer: Yes (CSP-owned). The AWS Security Operations Center performs quarterly threat and vulnerability reviews of data centers and colocation sites. These reviews are in addition to an initial environmental and geographic assessment of a site performed prior to building or leasing. The quarterly reviews are validated by third parties during our SOC, PCI, and ISO assessments.
CCM control DCS-15, Equipment Location (Datacenter Security): Keep business-critical equipment away from locations subject to a high probability of environmental risk events.

DSP-01.1: Are policies and procedures established, documented, approved, communicated, enforced, evaluated, and maintained for the classification, protection, and handling of data throughout its lifecycle, according to all applicable laws and regulations, standards, and risk level?
Answer: Yes (CSP-owned). AWS has implemented data handling and classification requirements that provide specifications for data encryption; content in transit and during storage; access; retention; physical controls; mobile devices; and handling requirements. AWS services are content agnostic in that they offer the same high level of security to customers regardless of the type of content being stored. We are vigilant about our customers' security and have implemented sophisticated technical and physical measures against unauthorized access. AWS has no insight into what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, and how it is used and protected from disclosure.
CCM control DSP-01, Security and Privacy Policy and Procedures (Data Security and Privacy Lifecycle Management): Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the classification, protection, and handling of data throughout its lifecycle, according to all applicable laws and regulations, standards, and risk level. Review and update the policies and procedures at least annually.

DSP-01.2: Are data security and privacy policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM control DSP-01, as quoted above.
DSP-02.1: Are industry-accepted methods applied for secure data disposal from storage media so that information is not recoverable by any forensic means?
Answer: Yes (CSP-owned). When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in NIST 800-88 ("Guidelines for Media Sanitization") as part of the decommissioning process. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/.
CCM control DSP-02, Secure Disposal (Data Security and Privacy Lifecycle Management): Apply industry-accepted methods for the secure disposal of data from storage media such that data is not recoverable by any forensic means.

DSP-03.1: Is a data inventory created and maintained for sensitive and personal information (at a minimum)?
Answer: N/A (CSC-owned). This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight into what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, and how it is used and protected from disclosure.
CCM control DSP-03, Data Inventory (Data Security and Privacy Lifecycle Management): Create and maintain a data inventory, at least for any sensitive data and personal data.

DSP-04.1: Is data classified according to type and sensitivity level?
Answer: N/A (CSC-owned). See the response to DSP-03.1.
CCM control DSP-04, Data Classification (Data Security and Privacy Lifecycle Management): Classify data according to its type and sensitivity level.
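Because DSP-04 is customer-owned, the classification scheme itself lives with the CSC. One common customer-side approach is to record the classification as resource tags, which can then drive access policies and reporting. A minimal sketch; the bucket name and tag values are placeholders, and note that this call replaces any existing tag set on the bucket.

import boto3

s3 = boto3.client("s3")

# Record the data classification and owning team as bucket tags.
s3.put_bucket_tagging(
    Bucket="example-data-bucket",
    Tagging={
        "TagSet": [
            {"Key": "DataClassification", "Value": "Confidential"},
            {"Key": "DataOwner", "Value": "analytics-team"},
        ]
    },
)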
DSP-05.1: Is data flow documentation created to identify what data is processed and where it is stored and transmitted?
Answer: N/A (CSC-owned). See the response to DSP-03.1.
CCM control DSP-05, Data Flow Documentation (Data Security and Privacy Lifecycle Management): Create data flow documentation to identify what data is processed, stored, or transmitted where. Review data flow documentation at defined intervals, at least annually, and after any change.

DSP-05.2: Is data flow documentation reviewed at defined intervals, at least annually, and after any change?
Answer: N/A (CSC-owned). See the response to DSP-03.1.
CCM control DSP-05, as quoted above.

DSP-06.1: Is the ownership and stewardship of all relevant personal and sensitive data documented?
Answer: N/A (CSC-owned). See the response to DSP-03.1.
CCM control DSP-06, Data Ownership and Stewardship (Data Security and Privacy Lifecycle Management): Document ownership and stewardship of all relevant documented personal and sensitive data. Perform review at least annually.

DSP-06.2: Is data ownership and stewardship documentation reviewed at least annually?
Answer: N/A (CSC-owned). See the response to DSP-03.1.
CCM control DSP-06, as quoted above.

DSP-07.1: Are systems, products, and business practices based on security principles by design and per industry best practices?
Answer: Yes (CSP-owned). AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure that quality and security requirements are met with each release. The design of new services, or any significant changes to current services, follows secure software development practices and is controlled through a project management system with multidisciplinary participation. Prior to launch, each of the following requirements must be reviewed: security risk assessment; threat modeling; security design reviews; secure code reviews; security testing; and vulnerability/penetration testing.
CCM control DSP-07, Data Protection by Design and Default (Data Security and Privacy Lifecycle Management): Develop systems, products, and business practices based upon a principle of security by design and industry best practices.
DSP-08.1: Are systems, products, and business practices based on privacy principles by design and according to industry best practices?
Answer: N/A (CSC-owned). See the response to DSP-03.1.
CCM control DSP-08, Data Privacy by Design and Default (Data Security and Privacy Lifecycle Management): Develop systems, products, and business practices based upon a principle of privacy by design and industry best practices. Ensure that systems' privacy settings are configured by default according to all applicable laws and regulations.

DSP-08.2: Are systems' privacy settings configured by default and according to all applicable laws and regulations?
Answer: N/A (CSC-owned). This is a customer responsibility. AWS customers are responsible for adhering to regulatory requirements in the jurisdictions in which their businesses are active.
CCM control DSP-08, as quoted above.

DSP-09.1: Is a data protection impact assessment (DPIA) conducted when processing personal data, evaluating the origin, nature, particularity, and severity of risks according to any applicable laws, regulations, and industry best practices?
Answer: N/A (CSC-owned). See the response to DSP-03.1.
CCM control DSP-09, Data Protection Impact Assessment (Data Security and Privacy Lifecycle Management): Conduct a Data Protection Impact Assessment (DPIA) to evaluate the origin, nature, particularity, and severity of the risks upon the processing of personal data, according to any applicable laws, regulations, and industry best practices.
DSP-10.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure that any transfer of personal or sensitive data is protected from unauthorized access and only processed within scope (as permitted by respective laws and regulations)?
Answer: N/A (CSC-owned). See the response to DSP-03.1.
CCM control DSP-10, Sensitive Data Transfer (Data Security and Privacy Lifecycle Management): Define, implement, and evaluate processes, procedures, and technical measures that ensure any transfer of personal or sensitive data is protected from unauthorized access and only processed within scope, as permitted by the respective laws and regulations.
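DSP-10 is also customer-owned. One widely used customer-side measure for protecting data transfers is an S3 bucket policy that rejects any request not made over TLS. A minimal sketch; the bucket name is a placeholder, and the call replaces any existing bucket policy.

import json
import boto3

s3 = boto3.client("s3")
bucket = "example-data-bucket"

# Deny every S3 action on the bucket and its objects unless TLS is used.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))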
DSP-11.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to enable data subjects to request access to, modification of, or deletion of personal data (per applicable laws and regulations)?
Answer: N/A (CSC-owned). See the response to DSP-03.1.
CCM control DSP-11, Personal Data Access, Reversal, Rectification and Deletion (Data Security and Privacy Lifecycle Management): Define and implement processes, procedures, and technical measures to enable data subjects to request access to, modification of, or deletion of their personal data, according to any applicable laws and regulations.

DSP-12.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure personal data is processed per applicable laws and regulations and for the purposes declared to the data subject?
Answer: Yes (shared, CSP and CSC). AWS has established a formal Data Subject Access Request (DSAR) process in accordance with the General Data Protection Regulation (GDPR): requesters contact AWS, and a ticket is opened (tracked internally in Harbinger) with a Customer Service team manager, who works with Legal to process the request. AWS also maintains continual, independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment. CSC responsibilities: AWS customers are responsible for the management of the data they place into AWS services, including adherence to applicable laws and regulations. AWS has no insight into what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, and how it is used and protected from disclosure.
CCM control DSP-12, Limitation of Purpose in Personal Data Processing (Data Security and Privacy Lifecycle Management): Define, implement, and evaluate processes, procedures, and technical measures to ensure that personal data is processed according to any applicable laws and regulations and for the purposes declared to the data subject.

DSP-13.1: Are processes, procedures, and technical measures defined, implemented, and evaluated for the transfer and sub-processing of personal data within the service supply chain (according to any applicable laws and regulations)?
Answer: N/A. AWS customers are responsible for the management of the data they place into AWS services; AWS has no insight into what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, and how it is used and protected from disclosure. AWS does not utilize third parties to provide services to customers, and there are no subcontractors authorized by AWS to access any customer-owned content uploaded onto AWS. To monitor subcontractor access year-round, refer to https://aws.amazon.com/compliance/sub-processors/.
CCM control DSP-13, Personal Data Sub-processing (Data Security and Privacy Lifecycle Management): Define, implement, and evaluate processes, procedures, and technical measures for the transfer and sub-processing of personal data within the service supply chain, according to any applicable laws and regulations.

DSP-14.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to disclose details to the data owner of any personal or sensitive data access by sub-processors before processing initiation?
Answer: N/A. AWS does not utilize third parties to provide services to customers, and there are no subcontractors authorized by AWS to access any customer-owned content uploaded onto AWS. To monitor subcontractor access year-round, refer to https://aws.amazon.com/compliance/third-party-access/.
CCM control DSP-14, Disclosure of Data Sub-processors (Data Security and Privacy Lifecycle Management): Define, implement, and evaluate processes, procedures, and technical measures to disclose the details of any personal or sensitive data access by sub-processors to the data owner prior to initiation of that processing.

DSP-15.1: Is authorization from data owners obtained, and the associated risk managed, before replicating or using production data in non-production environments?
Answer: N/A. Customer data is not used for testing.
CCM control DSP-15, Limitation of Production Data Use (Data Security and Privacy Lifecycle Management): Obtain authorization from data owners, and manage associated risk, before replicating or using production data in non-production environments.

DSP-16.1: Do data retention, archiving, and deletion practices follow business requirements, applicable laws, and regulations?
Answer: Yes (shared, CSP and CSC). AWS maintains a retention policy applicable to AWS internal data and system components in order to continue operations of AWS business and services. Critical AWS system components, including audit evidence and logging records, are replicated across multiple Availability Zones, and backups are maintained and monitored. CSC responsibilities: AWS customers are responsible for the management of the data they place into AWS services, including retention, archiving, and deletion policies and practices.
CCM control DSP-16, Data Retention and Deletion (Data Security and Privacy Lifecycle Management): Data retention, archiving, and deletion are managed in accordance with business requirements and applicable laws and regulations.
DSP-17.1: Are processes, procedures, and technical measures defined and implemented to protect sensitive data throughout its lifecycle?
Answer: N/A (CSC-owned). Customers control their customer content. With AWS, customers:
• determine where their customer content will be stored, including the type of storage and the geographic region of that storage;
• can replicate and back up their customer content in more than one region, and AWS will not move or replicate customer content outside of the customer's chosen region(s) except as legally required and as necessary to maintain the AWS services and provide them to customers and their end users;
• choose the secured state of their customer content, with strong encryption offered for content in transit or at rest and the option for customers to manage their own encryption keys; and
• manage access to their customer content and AWS services and resources through users, groups, permissions, and credentials that customers control.
CCM control DSP-17, Sensitive Data Protection (Data Security and Privacy Lifecycle Management): Define and implement processes, procedures, and technical measures to protect sensitive data throughout its lifecycle.
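DSP-17 notes that customers choose the secured state of their content and may manage their own encryption keys. A minimal customer-side sketch combining both choices, setting a bucket's default encryption to a customer-managed KMS key; the bucket name and key ARN are placeholders.

import boto3

s3 = boto3.client("s3")

# Encrypt new objects by default with a customer-managed KMS key; the bucket
# key setting reduces the number of KMS requests the bucket generates.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)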
DSP-18.1: Does the CSP have in place, and describe to CSCs, the procedure to manage and respond to requests for disclosure of personal data by law enforcement authorities, according to applicable laws and regulations?
Answer: Yes (CSP-owned). We are vigilant about our customers' privacy. AWS policy prohibits the disclosure of customer content unless we are required to do so to comply with the law or with a valid and binding order of a governmental or regulatory body. Unless we are prohibited from doing so, or there is a clear indication of illegal conduct in connection with the use of Amazon products or services, Amazon notifies customers before disclosing customer content so they can seek protection from disclosure. It is also important to point out that customers can encrypt their customer content, and we provide customers with the option to manage their own encryption keys. We know transparency matters to our customers, so we regularly publish a report about the types and volume of information requests we receive at https://aws.amazon.com/compliance/amazon-information-requests/.
CCM control DSP-18, Disclosure Notification (Data Security and Privacy Lifecycle Management): The CSP must have in place, and describe to CSCs, the procedure to manage and respond to requests for disclosure of personal data by law enforcement authorities according to applicable laws and regulations. The CSP must give special attention to the notification procedure to interested CSCs, unless otherwise prohibited, such as a prohibition under criminal law to preserve the confidentiality of a law enforcement investigation.

DSP-18.2: Does the CSP give special attention to the notification procedure to interested CSCs, unless otherwise prohibited, such as a prohibition under criminal law to preserve the confidentiality of a law enforcement investigation?
Answer: Yes (shared, CSP and CSC). See the response to DSP-18.1.
CCM control DSP-18, as quoted above.

DSP-19.1: Are processes, procedures, and technical measures defined and implemented to specify and document physical data locations, including locales where data is processed or backed up?
Answer: N/A (CSC-owned). This is a customer responsibility. Customers manage access to their customer content and AWS services and resources; AWS provides an advanced set of access, encryption, and logging features to help customers do this effectively (such as AWS CloudTrail). AWS does not access or use customer content for any purpose other than as legally required and for maintaining the AWS services and providing them to customers and their end users. Customers choose the region(s) in which their customer content will be stored; AWS will not move or replicate customer content outside of the customer's chosen region(s) except as legally required and as necessary to maintain the AWS services and provide them to customers and their end users. Customers choose how their customer content is secured; AWS offers strong encryption for customer content in transit or at rest and provides customers with the option to manage their own encryption keys.
CCM control DSP-19, Data Location (Data Security and Privacy Lifecycle Management): Define and implement processes, procedures, and technical measures to specify and document the physical locations of data, including any locations in which data is processed or backed up.
GRC-01.1: Are information governance program policies and procedures sponsored by organizational leadership established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
Implementation: AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services in order to minimize the risk of accidental or unauthorized access or disclosure of customer content.
CCM Control: GRC-01, Governance Program Policy and Procedures (Governance, Risk and Compliance). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for an information governance program, which is sponsored by the leadership of the organization. Review and update the policies and procedures at least annually.

GRC-01.2: Are the policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
Implementation: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: GRC-01, Governance Program Policy and Procedures (Governance, Risk and Compliance); specification as above.
GRC-02.1: Is there an established, formal, documented, and leadership-sponsored enterprise risk management (ERM) program that includes policies and procedures for identification, evaluation, ownership, treatment, and acceptance of cloud security and privacy risks?
Answer: Yes (CSP-owned)
Implementation: AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization. AWS management reviews and evaluates the risks identified in the risk management program at least annually. The risk management program encompasses the following phases:
Discovery – The discovery phase includes listing out risks (threats and vulnerabilities) that exist in the environment. This phase provides a basis for all other risk management activities.
Research – The research phase considers the potential impact(s) of identified risks to the business and their likelihood of occurrence, and includes an evaluation of internal control effectiveness.
Evaluate – The evaluate phase includes ensuring controls, processes, and other physical and virtual safeguards are in place to prevent and detect identified and assessed risks.
Resolve – The resolve phase results in risk reports that provide managers with the data they need to make effective business decisions and to comply with internal policies and applicable regulations.
Monitor – The monitor phase includes performing monitoring activities to evaluate whether processes, initiatives, functions, and/or activities are mitigating the risk as designed.
CCM Control: GRC-02, Risk Management Program (Governance, Risk and Compliance). Specification: Establish a formal, documented, and leadership-sponsored Enterprise Risk Management (ERM) program that includes policies and procedures for identification, evaluation, ownership, treatment, and acceptance of cloud security and privacy risks.

GRC-03.1: Are all relevant organizational policies and associated procedures reviewed at least annually, or when a substantial organizational change occurs?
Answer: Yes (CSP-owned)
Implementation: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: GRC-03, Organizational Policy Reviews (Governance, Risk and Compliance). Specification: Review all relevant organizational policies and associated procedures at least annually, or when a substantial change occurs within the organization.

GRC-04.1: Is an approved exception process, mandated by the governance program, established and followed whenever a deviation from an established policy occurs?
Answer: Yes (CSP-owned)
Implementation: Management reviews exceptions to security policies to assess and mitigate risks. AWS Security maintains a documented procedure describing the policy exception workflow on an internal AWS website. Policy exceptions are tracked and maintained with the policy tool, and exceptions are approved, rejected, or denied based on the procedures outlined within the procedure document.
CCM Control: GRC-04, Policy Exception Process (Governance, Risk and Compliance). Specification: Establish and follow an approved exception process, as mandated by the governance program, whenever a deviation from an established policy occurs.
GRC-05.1: Has an information security program (including programs of all relevant CCM domains) been developed and implemented?
Answer: Yes (CSP-owned)
Implementation: AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization. AWS management reviews and evaluates the risks identified in the risk management program at least annually; see the response to Question ID GRC-02.1 for the phases of that program.
CCM Control: GRC-05, Information Security Program (Governance, Risk and Compliance). Specification: Develop and implement an Information Security Program, which includes programs for all the relevant domains of the CCM.

GRC-06.1: Are roles and responsibilities for planning, implementing, operating, assessing, and improving governance programs defined and documented?
Answer: Yes (CSP-owned)
Implementation: See response to Question ID GRC-05.1.
CCM Control: GRC-06, Governance Responsibility Model (Governance, Risk and Compliance). Specification: Define and document roles and responsibilities for planning, implementing, operating, assessing, and improving governance programs.
GRC-07.1: Are all relevant standards, regulations, legal/contractual, and statutory requirements applicable to your organization identified and documented?
Answer: Yes (CSP-owned)
Implementation: AWS documents, tracks, and monitors its legal, regulatory, and contractual agreements and obligations. In order to do so, AWS performs and maintains the following activities:
1) Identifies and evaluates applicable laws and regulations for each of the jurisdictions in which AWS operates.
2) Documents and implements controls to help ensure its conformity with statutory, regulatory, and contractual requirements relevant to AWS.
3) Categorizes the sensitivity of information according to the AWS information security policies to help protect it from loss, destruction, falsification, unauthorized access, and unauthorized release.
4) Informs and continually trains personnel who must be made aware of information security policies to help protect sensitive AWS information.
5) Monitors for nonconformities to the information security policies, with a process in place to take corrective actions and enforce appropriate disciplinary action.
AWS maintains relationships with internal and external parties to monitor legal, regulatory, and contractual requirements. Should a new security directive be issued, AWS creates and documents plans to implement the directive within a designated timeframe. AWS provides customers with evidence of its compliance with applicable legal, regulatory, and contractual requirements through audit reports, attestations, certifications, and other compliance enablers. Visit aws.amazon.com/artifact for information on how to review the AWS external attestation and assurance documentation.
CCM Control: GRC-07, Information System Regulatory Mapping (Governance, Risk and Compliance). Specification: Identify and document all relevant standards, regulations, legal/contractual, and statutory requirements which are applicable to your organization.

GRC-08.1: Is contact established and maintained with cloud-related special interest groups and other relevant entities?
Answer: Yes (CSP-owned)
Implementation: AWS personnel are part of special interest groups, including relevant external parties such as security groups. AWS personnel use these groups to improve their knowledge about security best practices and to stay up to date with relevant security information.
CCM Control: GRC-08, Special Interest Groups (Governance, Risk and Compliance). Specification: Establish and maintain contact with cloud-related special interest groups and other relevant entities, in line with business context.
HRS-01.1: Are background verification policies and procedures of all new employees (including but not limited to remote employees, contractors, and third parties) established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
Implementation: Where permitted by law, AWS requires that employees undergo a background screening at hiring, commensurate with their position and level of access (Control AWSCA-9.2). AWS has a process to assess whether AWS employees who have access to resources that store or process customer data via permission groups are subject to a post-hire background check, as applicable with local law. AWS employees who have access to resources that store or process customer data will have a background check no less than once a year (Control AWSCA-9.9).
CCM Control: HRS-01, Background Screening Policy and Procedures (Human Resources). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for background verification of all new employees (including but not limited to remote employees, contractors, and third parties) according to local laws, regulations, ethics, and contractual constraints, and proportional to the data classification to be accessed, the business requirements, and acceptable risk. Review and update the policies and procedures at least annually.

HRS-01.2: Are background verification policies and procedures designed according to local laws, regulations, ethics, and contractual constraints, and proportional to the data classification to be accessed, business requirements, and acceptable risk?
Answer: Yes (CSP-owned)
Implementation: AWS conducts criminal background checks, as permitted by applicable law, as part of pre-employment screening practices for employees, commensurate with the employee's position and level of access to AWS facilities. The AWS SOC reports provide additional details regarding the controls in place for background verification.
CCM Control: HRS-01, Background Screening Policy and Procedures (Human Resources); specification as above.
HRS-01.3: Are background verification policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
Implementation: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: HRS-01, Background Screening Policy and Procedures (Human Resources); specification as above.

HRS-02.1: Are policies and procedures for defining allowances and conditions for the acceptable use of organizationally owned or managed assets established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
Implementation: AWS has implemented data handling and classification requirements that provide specifications around:
• Data encryption
• Content in transit and during storage
• Access
• Retention
• Physical controls
• Mobile devices
• Data handling requirements
Employees are required to review and sign off on an employment contract, which acknowledges their responsibilities to overall Company standards and information security.
CCM Control: HRS-02, Acceptable Use of Technology Policy and Procedures (Human Resources). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for defining allowances and conditions for the acceptable use of organizationally owned or managed assets. Review and update the policies and procedures at least annually.

HRS-02.2: Are the policies and procedures for defining allowances and conditions for the acceptable use of organizationally owned or managed assets reviewed and updated at least annually?
Answer: Yes (CSP-owned)
Implementation: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: HRS-02, Acceptable Use of Technology Policy and Procedures (Human Resources); specification as above.

HRS-03.1: Are policies and procedures requiring unattended workspaces to conceal confidential data established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
Implementation: AWS roles and responsibilities for maintaining a safe and secure working environment are reviewed by independent external auditors during audits for our SOC, PCI DSS, and ISO 27001 compliance.
CCM Control: HRS-03, Clean Desk Policy and Procedures (Human Resources). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures that require unattended workspaces to not have openly visible confidential data. Review and update the policies and procedures at least annually.
HRS-03.2: Are policies and procedures requiring unattended workspaces to conceal confidential data reviewed and updated at least annually?
Answer: Yes (CSP-owned)
Implementation: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: HRS-03, Clean Desk Policy and Procedures (Human Resources); specification as above.

HRS-04.1: Are policies and procedures to protect information accessed, processed, or stored at remote sites and locations established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (Shared CSP and CSC)
Implementation: AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response.
CCM Control: HRS-04, Remote and Home Working Policy and Procedures (Human Resources). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures to protect information accessed, processed, or stored at remote sites and locations. Review and update the policies and procedures at least annually.

HRS-04.2: Are policies and procedures to protect information accessed, processed, or stored at remote sites and locations reviewed and updated at least annually?
Answer: Yes (CSP-owned)
Implementation: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: HRS-04, Remote and Home Working Policy and Procedures (Human Resources); specification as above.

HRS-05.1: Are return procedures of organizationally owned assets by terminated employees established and documented?
Answer: Yes (CSP-owned)
Implementation: Upon termination of employees or contracts, AWS assets in their possession are retrieved on the date of termination. In case of immediate termination, the employee's/contractor's manager retrieves all AWS assets (e.g., authentication tokens, keys, badges) and escorts them out of the AWS facility.
CCM Control: HRS-05, Asset Returns (Human Resources). Specification: Establish and document procedures for the return of organization-owned assets by terminated employees.
HRS-06.1: Are procedures outlining the roles and responsibilities concerning changes in employment established, documented, and communicated to all personnel?
Answer: Yes (CSP-owned)
Implementation: The AWS Human Resources team defines internal management responsibilities to be followed for termination and role change of employees and vendors. AWS SOC reports provide additional details.
CCM Control: HRS-06, Employment Termination (Human Resources). Specification: Establish, document, and communicate to all personnel the procedures outlining the roles and responsibilities concerning changes in employment.

HRS-07.1: Are employees required to sign an employment agreement before gaining access to organizational information systems, resources, and assets?
Answer: Yes (CSP-owned)
Implementation: Personnel supporting AWS systems and devices must sign a non-disclosure agreement prior to being granted access. Additionally, upon hire, personnel are required to read and accept the Acceptable Use Policy and the Amazon Code of Business Conduct and Ethics (Code of Conduct) Policy.
CCM Control: HRS-07, Employment Agreement Process (Human Resources). Specification: Employees sign the employee agreement prior to being granted access to organizational information systems, resources, and assets.

HRS-08.1: Are provisions and/or terms for adherence to established information governance and security policies included within employment agreements?
Answer: Yes (CSP-owned)
Implementation: In alignment with the ISO 27001 standard, AWS employees complete periodic role-based training that includes AWS Security training and requires an acknowledgement to complete. Compliance audits are periodically performed to validate that employees understand and follow the established policies. Refer to SOC reports for additional details.
CCM Control: HRS-08, Employment Agreement Content (Human Resources). Specification: The organization includes, within the employment agreements, provisions and/or terms for adherence to established information governance and security policies.

HRS-09.1: Are employee roles and responsibilities relating to information assets and security documented and communicated?
Answer: Yes (CSP-owned)
Implementation: AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.
CCM Control: HRS-09, Personnel Roles and Responsibilities (Human Resources). Specification: Document and communicate roles and responsibilities of employees, as they relate to information assets and security.

HRS-10.1: Are requirements for non-disclosure/confidentiality agreements reflecting organizational data protection needs and operational details identified, documented, and reviewed at planned intervals?
Answer: Yes (CSP-owned)
Implementation: Amazon Legal Counsel manages and periodically revises the Amazon NDA to reflect AWS business needs.
CCM Control: HRS-10, Non-Disclosure Agreements (Human Resources). Specification: Identify, document, and review, at planned intervals, requirements for non-disclosure/confidentiality agreements reflecting the organization's needs for the protection of data and operational details.
HRS-11.1: Is a security awareness training program for all employees of the organization established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
Implementation: In alignment with the ISO 27001 standard, all AWS employees complete periodic Information Security training, which requires an acknowledgement to complete. Compliance audits are periodically performed to validate that employees understand and follow the established policies. AWS roles and responsibilities are reviewed by independent external auditors during audits for our SOC, PCI DSS, and ISO 27001 compliance.
CCM Control: HRS-11, Security Awareness Training (Human Resources). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain a security awareness training program for all employees of the organization, and provide regular training updates.

HRS-11.2: Are regular security awareness training updates provided?
Answer: Yes (CSP-owned)
Implementation: See response to Question ID HRS-11.1.
CCM Control: HRS-11, Security Awareness Training (Human Resources); specification as above.

HRS-12.1: Are all employees granted access to sensitive organizational and personal data provided with appropriate security awareness training?
Answer: Yes (CSP-owned)
Implementation: See response to Question ID HRS-11.1.
CCM Control: HRS-12, Personal and Sensitive Data Awareness and Training (Human Resources). Specification: Provide all employees with access to sensitive organizational and personal data with appropriate security awareness training and regular updates in organizational procedures, processes, and policies relating to their professional function relative to the organization.
HRS-12.2: Are all employees granted access to sensitive organizational and personal data provided with regular updates in procedures, processes, and policies relating to their professional function?
Answer: Yes (CSP-owned)
Implementation: AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response. Customers retain the control and responsibility of their data and associated media assets. It is the responsibility of the customer to manage mobile security devices and the access to the customer's content.
CCM Control: HRS-12, Personal and Sensitive Data Awareness and Training (Human Resources); specification as above.

HRS-13.1: Are employees notified of their roles and responsibilities to maintain awareness and compliance with established policies, procedures, and applicable legal, statutory, or regulatory compliance obligations?
Answer: Yes (CSP-owned)
Implementation: AWS has implemented various methods of internal communication at a global level to help employees understand their individual roles and responsibilities and to communicate significant events in a timely manner. These methods include orientation and training programs for newly hired employees, as well as electronic mail messages and the posting of information via the Amazon intranet. Refer to the ISO 27001 standard, Annex A, domains 7 and 8. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: HRS-13, Compliance User Responsibility (Human Resources). Specification: Make employees aware of their roles and responsibilities for maintaining awareness and compliance with established policies and procedures and applicable legal, statutory, or regulatory compliance obligations.
IAM-01.1: Are identity and access management policies and procedures established, documented, approved, communicated, implemented, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
Implementation: In alignment with ISO 27001, AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. Access control procedures are systematically enforced through proprietary tools. Refer to ISO 27001, Annex A, domain 9 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: IAM-01, Identity and Access Management Policy and Procedures (Identity & Access Management). Specification: Establish, document, approve, communicate, implement, apply, evaluate, and maintain policies and procedures for identity and access management. Review and update the policies and procedures at least annually.

IAM-01.2: Are identity and access management policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
Implementation: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: IAM-01, Identity and Access Management Policy and Procedures (Identity & Access Management); specification as above.

IAM-02.1: Are strong password policies and procedures established, documented, approved, communicated, implemented, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
Implementation: AWS internal Password Policies and guidelines outline requirements for password strength and handling for passwords used to access internal systems. AWS Identity and Access Management (IAM) enables customers to securely control access to AWS services and resources for their users. Additional information about IAM can be found on the website at https://aws.amazon.com/iam/. AWS SOC reports provide details on the specific control activities executed by AWS.
CCM Control: IAM-02, Strong Password Policy and Procedures (Identity & Access Management). Specification: Establish, document, approve, communicate, implement, apply, evaluate, and maintain strong password policies and procedures. Review and update the policies and procedures at least annually.

IAM-02.2: Are strong password policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
Implementation: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: IAM-02, Strong Password Policy and Procedures (Identity & Access Management); specification as above.
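On the customer side of IAM-02, AWS IAM exposes an account-level password policy. The sketch below (boto3 assumed; the specific thresholds are hypothetical examples, not AWS requirements) applies a strong policy to an account and reads it back.

import boto3

iam = boto3.client("iam")

# Apply an account-wide password policy; the values below are
# illustrative thresholds, not AWS-mandated settings.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    MaxPasswordAge=90,             # days before a password must rotate
    PasswordReusePrevention=24,    # remember the last 24 passwords
    AllowUsersToChangePassword=True,
)

# Read the policy back to confirm what is in effect.
print(iam.get_account_password_policy()["PasswordPolicy"])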
IAM-03.1: Is system identity information and levels of access managed, stored, and reviewed?
Answer: Yes (Shared CSP and CSC)
Implementation: Amazon personnel with a business need to access the management plane are required to first use multi-factor authentication, distinct from their normal corporate Amazon credentials, to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked.
CSC Responsibilities: AWS customers are responsible for access management within their AWS environments.
CCM Control: IAM-03, Identity Inventory (Identity & Access Management). Specification: Manage, store, and review the information of system identities and level of access.

IAM-04.1: Is the separation of duties principle employed when implementing information system access?
Answer: Yes (Shared CSP and CSC)
Implementation: AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response. Customers retain the ability to manage segregation of duties for their AWS resources.
CSC Responsibilities: AWS best practices for Identity & Access Management can be found here: https://docs.aws.amazon.com/IAM/ (search for AWS best practices for Identity & Access Management).
CCM Control: IAM-04, Separation of Duties (Identity & Access Management). Specification: Employ the separation of duties principle when implementing information system access.

IAM-05.1: Is the least privilege principle employed when implementing information system access?
Answer: Yes (CSP-owned)
Implementation: See response to Question ID IAM-04.1.
CCM Control: IAM-05, Least Privilege (Identity & Access Management). Specification: Employ the least privilege principle when implementing information system access.

IAM-06.1: Is a user access provisioning process defined and implemented which authorizes, records, and communicates data and assets access changes?
Answer: Yes (CSP-owned)
Implementation: In alignment with ISO 27001, AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. Access control procedures are systematically enforced through proprietary tools. Refer to ISO 27001, Annex A, domain 9 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: IAM-06, User Access Provisioning (Identity & Access Management). Specification: Define and implement a user access provisioning process which authorizes, records, and communicates access changes to data and assets.
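For the customer-owned half of IAM-04/IAM-05, a least-privilege IAM policy grants only the actions and resources a job function needs. A minimal sketch follows (boto3 assumed; the policy name, bucket, and prefix are hypothetical): it creates a policy allowing read-only access to a single S3 prefix.

import json
import boto3

iam = boto3.client("iam")

# Least-privilege policy: read-only access to one prefix of one bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/reports/*",  # hypothetical
        }
    ],
}

iam.create_policy(
    PolicyName="ReadOnlyReportsAccess",  # hypothetical
    PolicyDocument=json.dumps(policy_document),
)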
IAM-07.1: Is a process in place to de-provision or modify the access, in a timely manner, of movers/leavers or system identity changes, to effectively adopt and communicate identity and access management policies?
Answer: Yes (CSP-owned)
Implementation: Access privilege reviews are triggered upon job and/or role transfers initiated from the HR system. IT access privileges are reviewed on a quarterly basis by appropriate personnel on a regular cadence. IT access from AWS systems is terminated within 24 hours of termination or deactivation. AWS SOC reports provide further details on user access revocation. In addition, the AWS Security Whitepaper section "AWS Access" provides additional information. Refer to ISO 27001, Annex A, domain 9 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: IAM-07, User Access Changes and Revocation (Identity & Access Management). Specification: De-provision or, respectively, modify access of movers/leavers or system identity changes in a timely manner in order to effectively adopt and communicate identity and access management policies.

IAM-08.1: Are reviews and revalidation of user access for least privilege and separation of duties completed with a frequency commensurate with organizational risk tolerance?
Answer: Yes (CSP-owned)
Implementation: See response to Question ID IAM-07.1.
CCM Control: IAM-08, User Access Review (Identity & Access Management). Specification: Review and revalidate user access for least privilege and separation of duties with a frequency that is commensurate with organizational risk tolerance.
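Customers can run IAM-08-style access reviews in their own accounts with the IAM credential report. A minimal sketch (boto3 assumed) generates the report and prints each identity's credential-usage dates so stale access can be flagged for revalidation.

import csv
import io
import time
import boto3

iam = boto3.client("iam")

# Ask IAM to (re)generate the account credential report, then poll
# briefly until it is ready.
iam.generate_credential_report()
for _ in range(10):
    try:
        report = iam.get_credential_report()
        break
    except iam.exceptions.CredentialReportNotReadyException:
        time.sleep(2)
else:
    raise RuntimeError("credential report not ready")

# The report body is CSV; list when each identity last used its credentials.
rows = csv.DictReader(io.StringIO(report["Content"].decode("utf-8")))
for row in rows:
    print(row["user"], row["password_last_used"], row["access_key_1_last_used_date"])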
IAM-09.1: Are processes, procedures, and technical measures for the segregation of privileged access roles defined, implemented, and evaluated, such that administrative data access, encryption, key management capabilities, and logging capabilities are distinct and separate?
Answer: Yes (CSP-owned)
Implementation: AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities, and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response. Customers retain the control and responsibility of their data and associated media assets. It is the responsibility of the customer to manage mobile security devices and the access to the customer's content.
CCM Control: IAM-09, Segregation of Privileged Access Roles (Identity & Access Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures for the segregation of privileged access roles such that administrative access to data, encryption and key management capabilities, and logging capabilities are distinct and separated.

IAM-10.1: Is an access process defined and implemented to ensure privileged access roles and rights are granted for a limited period?
Answer: Yes (CSP-owned)
Implementation: Amazon personnel with a business need to access the management plane are required to first use multi-factor authentication, distinct from their normal corporate Amazon credentials, to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked. Refer to the SOC 2 report for additional details.
CCM Control: IAM-10, Management of Privileged Access Roles (Identity & Access Management). Specification: Define and implement an access process to ensure privileged access roles and rights are granted for a time-limited period, and implement procedures to prevent the culmination of segregated privileged access.
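On the customer side, IAM-10's time-limited privileged access maps naturally onto AWS STS: privileged rights are attached to a role, and sessions on that role expire automatically. A minimal sketch follows (boto3 assumed; the role ARN and session name are hypothetical); it requests a one-hour privileged session.

import boto3

sts = boto3.client("sts")

# Assume a privileged role for a bounded period; the temporary
# credentials expire automatically after DurationSeconds.
session = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/BreakGlassAdmin",  # hypothetical
    RoleSessionName="change-ticket-4711",                      # hypothetical audit tag
    DurationSeconds=3600,                                      # 1 hour, then access lapses
)

creds = session["Credentials"]
print("session expires at:", creds["Expiration"])

# Use the temporary credentials for the privileged work only.
admin_iam = boto3.client(
    "iam",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)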
IAM-10.2: Are procedures implemented to prevent the culmination of segregated privileged access?
Answer: Yes (CSP-owned)
Implementation: Access to AWS systems is allocated based on least privilege and approved by an authorized individual prior to access provisioning. Duties and areas of responsibility (for example, access request and approval, change management request and approval, change development, testing and deployment, etc.) are segregated across different individuals to reduce opportunities for unauthorized or unintentional modification or misuse of AWS systems. Group or shared accounts are not permitted within the system boundary.
CCM Control: IAM-10, Management of Privileged Access Roles (Identity & Access Management); specification as above.

IAM-11.1: Are processes and procedures for customers to participate, where applicable, in granting access for agreed high-risk (as defined by the organizational risk assessment) privileged access roles defined, implemented, and evaluated?
Answer: No
CCM Control: IAM-11, CSCs Approval for Agreed Privileged Access Roles (Identity & Access Management). Specification: Define, implement, and evaluate processes and procedures for customers to participate, where applicable, in the granting of access for agreed high-risk (as defined by the organizational risk assessment) privileged access roles.

IAM-12.1: Are processes, procedures, and technical measures to ensure the logging infrastructure is "read-only" for all with write access (including privileged access roles) defined, implemented, and evaluated?
Answer: Yes (CSP-owned)
Implementation: AWS has identified auditable event categories across systems and devices within the AWS system. Service teams configure the auditing features to record continuously the security-related events in accordance with requirements. The log storage system is designed to provide a highly scalable, highly available service that automatically increases capacity as the ensuing need for log storage grows. Audit records contain a set of data elements in order to support necessary analysis requirements. In addition, audit records are available for the AWS Security team or other appropriate teams to perform inspection or analysis on demand, and in response to security-related or business-impacting events. Designated personnel on AWS teams receive automated alerts in the event of an audit processing failure. Audit processing failures include, for example, software/hardware errors. When alerted, on-call personnel issue a trouble ticket and track the event until it is resolved. AWS logging and monitoring processes are reviewed by independent third-party auditors for our continued compliance with SOC, PCI DSS, and ISO 27001.
CCM Control: IAM-12, Safeguard Logs Integrity (Identity & Access Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to ensure the logging infrastructure is read-only for all with write access, including privileged access roles, and that the ability to disable it is controlled through a procedure that ensures the segregation of duties and break glass procedures.
IAM-12.2: Is the ability to disable the "read-only" configuration of logging infrastructure controlled through a procedure that ensures the segregation of duties and break glass procedures?
Answer: Yes (CSP-owned)
Implementation: See response to Question ID IAM-12.1.
CCM Control: IAM-12, Safeguard Logs Integrity (Identity & Access Management); specification as above.

IAM-13.1: Are processes, procedures, and technical measures that ensure users are identifiable through unique identification (or can associate individuals with user identification usage) defined, implemented, and evaluated?
Answer: Yes (CSP-owned)
Implementation: AWS controls access to systems through authentication that requires a unique user ID and password. AWS systems do not allow actions to be performed on the information system without identification or authentication. User access privileges are restricted based on business need and job responsibilities. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. New user accounts are created to have minimal access. User access to AWS systems (for example, network, applications, tools, etc.) requires documented approval from the authorized personnel (for example, the user's manager and/or system owner) and validation of the active user in the HR system. Refer to the SOC 2 report for additional details.
CCM Control: IAM-13, Uniquely Identifiable Users (Identity & Access Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures that ensure users are identifiable through unique IDs, or which can associate individuals to the usage of user IDs.
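Customers can apply the same log-integrity intent as IAM-12 in their own accounts: AWS CloudTrail can write API logs to S3 with log file validation (signed digest files) enabled, so later tampering is detectable. A minimal sketch follows (boto3 assumed; the trail and bucket names are hypothetical, and the bucket must already carry a CloudTrail-compatible bucket policy).

import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a trail whose log files are cryptographically digested,
# so any later modification of the stored logs can be detected.
cloudtrail.create_trail(
    Name="org-audit-trail",                   # hypothetical
    S3BucketName="example-audit-log-bucket",  # hypothetical, pre-configured policy
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)

# Start recording API activity.
cloudtrail.start_logging(Name="org-audit-trail")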
IAM-14.1: Are processes, procedures, and technical measures for authenticating access to systems, application, and data assets, including multifactor authentication for a least-privileged user and sensitive data access, defined, implemented, and evaluated?
Answer: Yes (Shared CSP and CSC)
Implementation: Amazon personnel with a business need to access the management plane are required to first use multi-factor authentication, distinct from their normal corporate Amazon credentials, to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked. Refer to the SOC 2 report for additional details.
CCM Control: IAM-14, Strong Authentication (Identity & Access Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures for authenticating access to systems, application, and data assets, including multifactor authentication for at least privileged user and sensitive data access. Adopt digital certificates or alternatives which achieve an equivalent level of security for system identities.

IAM-14.2: Are digital certificates or alternatives that achieve an equivalent security level for system identities adopted?
Answer: Yes (CSP-owned)
Implementation: AWS Identity, Directory, and Access Services enable you to add multi-factor authentication (MFA) to your applications.
CCM Control: IAM-14, Strong Authentication (Identity & Access Management); specification as above.

IAM-15.1: Are processes, procedures, and technical measures for the secure management of passwords defined, implemented, and evaluated?
Answer: Yes (CSP-owned)
Implementation: AWS Identity and Access Management (IAM) enables customers to securely control access to AWS services and resources for their users. Additional information about IAM can be found on the website at https://aws.amazon.com/iam/. AWS SOC reports provide details on the specific control activities executed by AWS.
CCM Control: IAM-15, Passwords Management (Identity & Access Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures for the secure management of passwords.

IAM-16.1: Are processes, procedures, and technical measures to verify access to data and system functions authorized defined, implemented, and evaluated?
Answer: Yes (Shared CSP and CSC)
Implementation: Controls in place limit access to systems and data and provide that access to systems or data is restricted and monitored. In addition, customer data and server instances are logically isolated from other customers by default. Privileged user access controls are reviewed by an independent auditor during the AWS SOC, ISO 27001, and PCI audits.
CSC Responsibilities: AWS customers retain control and ownership of their data. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content and where it is stored, used, and protected from disclosure.
CCM Control: IAM-16, Authorization Mechanisms (Identity & Access Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to verify access to data and system functions is authorized.
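Customers can enforce the IAM-14 pattern (MFA before sensitive access) in their own accounts with an IAM condition key. The sketch below (boto3 assumed; the policy and bucket names are hypothetical) denies access to a sensitive bucket unless the caller authenticated with MFA.

import json
import boto3

iam = boto3.client("iam")

# Deny sensitive-data actions for any principal whose session
# was not established with multi-factor authentication.
mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-sensitive-bucket",    # hypothetical
                "arn:aws:s3:::example-sensitive-bucket/*",
            ],
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }
    ],
}

iam.create_policy(
    PolicyName="DenySensitiveDataWithoutMFA",  # hypothetical
    PolicyDocument=json.dumps(mfa_policy),
)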
IPY-01.1: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for communications between application services (e.g., APIs)?
Answer: Yes (CSP-owned)
Implementation: Details regarding AWS APIs can be found on the AWS website at: https://aws.amazon.com/documentation/
CCM Control: IPY-01, Interoperability and Portability Policy and Procedures (Interoperability & Portability). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for interoperability and portability, including requirements for: a. Communications between application interfaces; b. Information processing interoperability; c. Application development portability; d. Information/Data exchange, usage, portability, integrity, and persistence. Review and update the policies and procedures at least annually.

IPY-01.2: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for information processing interoperability?
Answer: Yes (CSP-owned)
Implementation: Details regarding AWS interoperability of each service can be found on the AWS website at: https://aws.amazon.com/documentation/
CCM Control: IPY-01, Interoperability and Portability Policy and Procedures (Interoperability & Portability); specification as above.

IPY-01.3: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for application development portability?
Answer: Yes (CSP-owned)
Implementation: Details regarding AWS interoperability of each service can be found on the AWS website at: https://aws.amazon.com/documentation/
CCM Control: IPY-01, Interoperability and Portability Policy and Procedures (Interoperability & Portability); specification as above.
IPY-01.4: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for information/data exchange, usage, portability, integrity, and persistence?
Answer: Yes (CSP-owned)
Implementation: Details regarding AWS interoperability of each service can be found on the AWS website at: https://aws.amazon.com/documentation/
CCM Control: IPY-01, Interoperability and Portability Policy and Procedures (Interoperability & Portability); specification as above.

IPY-01.5: Are interoperability and portability policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
Implementation: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: IPY-01, Interoperability and Portability Policy and Procedures (Interoperability & Portability); specification as above.

IPY-02.1: Are CSCs able to programmatically retrieve their data via an application interface(s) to enable interoperability and portability?
Answer: Yes (CSC-owned)
Implementation: Details regarding AWS interoperability of each service can be found on the AWS website at: https://aws.amazon.com/documentation/
CCM Control: IPY-02, Application Interface Availability (Interoperability & Portability). Specification: Provide application interface(s) to CSCs so that they programmatically retrieve their data to enable interoperability and portability.

IPY-03.1: Are cryptographically secure and standardized network protocols implemented for the management, import, and export of data?
Answer: Yes (CSP-owned)
Implementation: AWS APIs and the AWS Management Console are available via TLS-protected endpoints, which provide server authentication. Customers can use TLS for all of their interactions with AWS. AWS recommends that customers use secure protocols that offer authentication and confidentiality, such as TLS or IPsec, to reduce the risk of data tampering or loss. AWS enables customers to open a secure, encrypted session to AWS servers using HTTPS (Transport Layer Security [TLS]).
CCM Control: IPY-03, Secure Interoperability and Portability Management (Interoperability & Portability). Specification: Implement cryptographically secure and standardized network protocols for the management, import, and export of data.
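The customer-side complement to IPY-03 is to refuse any non-TLS access path. One common pattern, sketched below (boto3 assumed; the bucket name is hypothetical), is a bucket policy that denies requests arriving without secure transport.

import json
import boto3

s3 = boto3.client("s3")

# Deny any request to the bucket that does not arrive over TLS.
tls_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-export-bucket",    # hypothetical
                "arn:aws:s3:::example-export-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(
    Bucket="example-export-bucket",
    Policy=json.dumps(tls_only_policy),
)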
IPY-04.1: Do agreements include provisions specifying CSC data access upon contract termination, and do they include the following: a. Data format; b. Duration data will be stored; c. Scope of the data retained and made available to the CSCs; d. Data deletion policy?
Answer: Yes (Shared CSP and CSC)
Implementation: AWS customer agreements include data-related provisions upon termination. Details regarding contract termination can be found in the example customer agreement; see Section 7, Term; Termination: https://aws.amazon.com/agreement/
CCM Control: IPY-04, Data Portability Contractual Obligations (Interoperability & Portability). Specification: Agreements must include provisions specifying CSCs' access to data upon contract termination, and will include: a. Data format; b. Length of time the data will be stored; c. Scope of the data retained and made available to the CSCs; d. Data deletion policy.

IVS-01.1: Are infrastructure and virtualization security policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
Implementation: AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.
CCM Control: IVS-01, Infrastructure and Virtualization Security Policy and Procedures (Infrastructure & Virtualization Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for infrastructure and virtualization security. Review and update the policies and procedures at least annually.

IVS-01.2: Are infrastructure and virtualization security policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
Implementation: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: IVS-01, Infrastructure and Virtualization Security Policy and Procedures (Infrastructure & Virtualization Security); specification as above.

IVS-02.1: Is resource availability, quality, and capacity planned and monitored in a way that delivers required system performance, as determined by the business?
Answer: Yes (Shared CSP and CSC)
Implementation: AWS maintains a capacity planning model to assess infrastructure usage and demands at least monthly, and usually more frequently (e.g., weekly). In addition, the AWS capacity planning model supports the planning of future demands to acquire and implement additional resources based upon current resources and forecasted requirements.
CCM Control: IVS-02, Capacity and Resource Planning (Infrastructure & Virtualization Security). Specification: Plan and monitor the availability, quality, and adequate capacity of resources in order to deliver the required system performance as determined by the business.
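IVS-02 is shared: customers monitor capacity for their own workloads, typically with Amazon CloudWatch. The sketch below (boto3 assumed; the instance ID, topic ARN, and threshold are hypothetical) raises an alarm when average CPU utilization signals a capacity problem.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU on one instance stays above 80% for
# three consecutive 5-minute periods; notify an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example-instance",  # hypothetical
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:capacity-alerts"],  # hypothetical
)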
IVS-03.1: Are communications between environments monitored?
Answer: Yes (Shared, CSP and CSC)
CSP Implementation: Monitoring and alarming are configured by service owners to identify and notify operational and management personnel of incidents when early-warning thresholds are crossed on key operational metrics.
CCM Control: IVS-03, Network Security (Infrastructure & Virtualization Security)
Control Specification: "Monitor, encrypt, and restrict communications between environments to only authenticated and authorized connections, as justified by the business. Review these configurations at least annually, and support them by a documented justification of all allowed services, protocols, ports, and compensating controls."

IVS-03.2: Are communications between environments encrypted?
Answer: N/A (CSC-owned)
CSP Implementation: AWS APIs are available via TLS-protected endpoints, which provide server authentication. Customers can use TLS for all of their interactions with AWS and between their own environments. AWS provides open encryption methodologies and enables customers to encrypt and authenticate all traffic and to enforce the latest standards and ciphers.
CCM Control: IVS-03, Network Security (Infrastructure & Virtualization Security); specification as above.

IVS-03.3: Are communications between environments restricted to only authenticated and authorized connections, as justified by the business?
Answer: Yes (Shared, CSP and CSC)
CSP Implementation: AWS implements least privilege throughout its infrastructure components and prohibits all ports and protocols that do not have a specific business purpose. AWS follows a rigorous approach of implementing only those features and functions that are essential to the use of a device. Network scanning is performed, and any unnecessary ports or protocols in use are corrected.
CSC Responsibilities: Customers maintain information related to their data and individual architecture. Customers retain control of, and responsibility for, their data and associated media assets. It is the responsibility of the customer to manage their AWS environments and associated access. (A customer-side configuration-review sketch follows this entry.)
CCM Control: IVS-03, Network Security (Infrastructure & Virtualization Security); specification as above.
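As one hedged illustration of the customer responsibilities under IVS-03 (reviewing and justifying allowed services, protocols, and ports), the sketch below lists security group rules that are open to the Internet so each can be documented or corrected. It assumes boto3 with credentials already configured; it is not an AWS-mandated procedure.

# Illustrative sketch only: flag security group rules open to 0.0.0.0/0
# so each can be given a documented business justification (IVS-03).
import boto3

ec2 = boto3.client("ec2")

for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    print(
                        f"{sg['GroupId']}: {rule.get('IpProtocol')} "
                        f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')} "
                        "open to the Internet; document or restrict"
                    )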
IVS-03.4: Are network configurations reviewed at least annually?
Answer: Yes (Shared, CSP and CSC)
CSP Implementation: Regular internal and external vulnerability scans are performed on the host operating system, web applications, and databases in the AWS environment using a variety of tools. Vulnerability scanning and remediation practices are regularly reviewed as part of AWS's continued compliance with PCI DSS and ISO 27001.
CSC Responsibilities: AWS customers are responsible for configuration management within their AWS environments.
CCM Control: IVS-03, Network Security (Infrastructure & Virtualization Security); specification as above.

IVS-03.5: Are network configurations supported by the documented justification of all allowed services, protocols, ports, and compensating controls?
Answer: Yes (Shared, CSP and CSC)
CSP Implementation: AWS implements least privilege throughout its infrastructure components and prohibits all ports and protocols that do not have a specific business purpose. AWS follows a rigorous approach of implementing only those features and functions that are essential to the use of a device. Network scanning is performed, and any unnecessary ports or protocols in use are corrected. Customers maintain information related to their data and individual architecture.
CSC Responsibilities: AWS customers are responsible for network management within their AWS environments.
CCM Control: IVS-03, Network Security (Infrastructure & Virtualization Security); specification as above.

IVS-04.1: Is every host and guest OS, hypervisor, or infrastructure control plane hardened (according to their respective best practices) and supported by technical controls as part of a security baseline?
Answer: Yes (Shared, CSP and CSC)
CSP Implementation: Regular internal and external vulnerability scans are performed on the host operating system, web applications, and databases in the AWS environment using a variety of tools. Vulnerability scanning and remediation practices are regularly reviewed as part of AWS's continued compliance with PCI DSS and ISO 27001.
CSC Responsibilities: AWS customers are responsible for server and system management within their AWS environments.
CCM Control: IVS-04, OS Hardening and Base Controls (Infrastructure & Virtualization Security)
Control Specification: "Harden host and guest OS, hypervisor, or infrastructure control plane according to their respective best practices, and supported by technical controls, as part of a security baseline."
IVS-05.1: Are production and non-production environments separated?
Answer: Yes (CSP-owned)
CSP Implementation: The development, test, and production environments emulate the production system environment and are used to properly assess and prepare for the impact of a change to the production system environment. In order to reduce the risks of unauthorized access or change to the production environment, the development, test, and production environments are logically separated.
CCM Control: IVS-05, Production and Non-Production Environments (Infrastructure & Virtualization Security)
Control Specification: "Separate production and non-production environments."

IVS-06.1: Are applications and infrastructures designed, developed, deployed, and configured such that CSP and CSC (tenant) user access and intra-tenant access is appropriately segmented, segregated, monitored, and restricted from other tenants?
Answer: Yes (CSP-owned)
CSP Implementation: Customer environments are logically segregated to prevent users and customers from accessing resources not assigned to them. Customers maintain full control over who has access to their data. Services that provide virtualized operational environments to customers (i.e., Amazon EC2) ensure that customers are segregated from one another and prevent cross-tenant privilege escalation and information disclosure via hypervisors and instance isolation. Different instances running on the same physical machine are isolated from each other via the hypervisor. In addition, the Amazon EC2 firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets must pass through this layer; thus, an instance's neighbors have no more access to that instance than any other host on the Internet, and can be treated as if they were on separate physical hosts. The physical random-access memory (RAM) is separated using similar mechanisms.
CCM Control: IVS-06, Segmentation and Segregation (Infrastructure & Virtualization Security)
Control Specification: "Design, develop, deploy, and configure applications and infrastructures such that CSP and CSC (tenant) user access and intra-tenant access is appropriately segmented and segregated, monitored, and restricted from other tenants."

IVS-07.1: Are secure and encrypted communication channels, including only up-to-date and approved protocols, used when migrating servers, services, applications, or data to cloud environments?
Answer: Yes (CSC-owned)
CSP Implementation: AWS offers a wide variety of services and partner tools to help customers migrate data securely. AWS migration services such as AWS Database Migration Service and AWS Snowmobile are integrated with AWS KMS for encryption. Learn more about AWS cloud migration services at: https://aws.amazon.com/cloud-data-migration/
CCM Control: IVS-07, Migration to Cloud Environments (Infrastructure & Virtualization Security)
Control Specification: "Use secure and encrypted communication channels when migrating servers, services, applications, or data to cloud environments. Such channels must include only up-to-date and approved protocols."
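To illustrate what IVS-07 can look like from the customer side, the following is a minimal sketch of staging migration data in Amazon S3 over TLS with server-side encryption under a KMS key. The bucket, local file, and key alias are hypothetical placeholders, and this is only one of several possible secure-migration patterns (AWS DMS being another).

# Illustrative sketch only: stage migration data encrypted in transit
# (TLS, the SDK default) and at rest (SSE-KMS), in the spirit of IVS-07.
import boto3

s3 = boto3.client("s3")

with open("db-export.sql.gz", "rb") as dump:  # hypothetical local export
    s3.put_object(
        Bucket="example-migration-staging",         # hypothetical bucket
        Key="dumps/db-export.sql.gz",
        Body=dump,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/example-migration-key",  # hypothetical key alias
    )
print("Export stored; TLS in transit, SSE-KMS at rest")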
IVS-08.1: Are high-risk environments identified and documented?
Answer: N/A (CSC-owned)
CSP Implementation: AWS customers retain responsibility for managing their own network segmentation in adherence with their defined requirements. Internally, AWS network segmentation is aligned with the ISO 27001 standard; AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: IVS-08, Network Architecture Documentation (Infrastructure & Virtualization Security)
Control Specification: "Identify and document high-risk environments."

IVS-09.1: Are processes, procedures, and defense-in-depth techniques defined, implemented, and evaluated for protection, detection, and timely response to network-based attacks?
Answer: Yes (CSP-owned)
CSP Implementation: AWS Security regularly scans all Internet-facing service endpoint IP addresses for vulnerabilities (these scans do not include customer instances) and notifies the appropriate parties to remediate any identified vulnerabilities. In addition, external vulnerability threat assessments are performed regularly by independent security firms; findings and recommendations resulting from these assessments are categorized and delivered to AWS leadership. The AWS control environment is also subject to regular internal and external risk assessments: AWS engages external certifying bodies and independent auditors to review and test the AWS overall control environment, and AWS security controls are reviewed by independent external auditors during audits for SOC, PCI DSS, and ISO 27001 compliance.
CCM Control: IVS-09, Network Defense (Infrastructure & Virtualization Security)
Control Specification: "Define, implement, and evaluate processes, procedures, and defense-in-depth techniques for protection, detection, and timely response to network-based attacks."

LOG-01.1: Are logging and monitoring policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
CSP Implementation: AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.
CCM Control: LOG-01, Logging and Monitoring Policy and Procedures (Logging and Monitoring)
Control Specification: "Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for logging and monitoring. Review and update the policies and procedures at least annually."

LOG-01.2: Are policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
CSP Implementation: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control: LOG-01 (Logging and Monitoring); specification as above.
LOG-02.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure audit log security and retention?
Answer: Yes (CSP-owned)
CSP Implementation: In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. Refer to "AWS: Overview of Security Processes" for additional details, available at: http://aws.amazon.com/security/security-learning/
CCM Control: LOG-02, Audit Logs Protection (Logging and Monitoring)
Control Specification: "Define, implement, and evaluate processes, procedures, and technical measures to ensure the security and retention of audit logs."

LOG-03.1: Are security-related events identified and monitored within applications and the underlying infrastructure?
Answer: N/A (CSC-owned)
CSP Implementation: This is a customer responsibility. AWS customers are responsible for the applications within their AWS environment.
CCM Control: LOG-03, Security Monitoring and Alerting (Logging and Monitoring)
Control Specification: "Identify and monitor security-related events within applications and the underlying infrastructure. Define and implement a system to generate alerts to responsible stakeholders based on such events and corresponding metrics."

LOG-03.2: Is a system defined and implemented to generate alerts to responsible stakeholders based on security events and their corresponding metrics?
Answer: Yes (Shared, CSP and CSC)
CSP Implementation: AWS security metrics are monitored and analyzed in accordance with the ISO 27001 standard; refer to ISO 27001 Annex A, domain 16, for further details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CSC Responsibilities: AWS customers are responsible for incident management within their AWS environments. (A customer-side retention-and-alerting sketch follows the LOG-04.1 entry below.)
CCM Control: LOG-03, Security Monitoring and Alerting (Logging and Monitoring); specification as above.

LOG-04.1: Is access to audit logs restricted to authorized personnel, and are records maintained to provide unique access accountability?
Answer: Yes (CSP-owned)
CSP Implementation: In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. Refer to "AWS: Overview of Security Processes" for additional details, available at: http://aws.amazon.com/security/security-learning/
CCM Control: LOG-04, Audit Logs Access and Accountability (Logging and Monitoring)
Control Specification: "Restrict audit logs access to authorized personnel, and maintain records that provide unique access accountability."
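Since audit-log retention (LOG-02) and security alerting (LOG-03) are customer responsibilities inside a customer's own AWS environment, the following is a minimal customer-side sketch, not AWS's internal mechanism: it sets a one-year retention policy on a CloudTrail log group, turns unauthorized API calls into a metric, and alarms to an SNS topic. The log group name, metric namespace, and topic ARN are hypothetical placeholders.

# Illustrative sketch only: customer-side retention and alerting for
# audit logs (LOG-02 / LOG-03). All names below are hypothetical.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "example-cloudtrail-log-group"

# LOG-02: retain audit logs for one year.
logs.put_retention_policy(logGroupName=LOG_GROUP, retentionInDays=365)

# LOG-03: surface unauthorized API calls as a CloudWatch metric...
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="UnauthorizedAPICalls",
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[{
        "metricName": "UnauthorizedAPICalls",
        "metricNamespace": "ExampleSecurity",
        "metricValue": "1",
    }],
)

# ...and alarm to an SNS topic so responsible stakeholders are notified.
cloudwatch.put_metric_alarm(
    AlarmName="UnauthorizedAPICalls",
    Namespace="ExampleSecurity",
    MetricName="UnauthorizedAPICalls",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:example-security-topic"],
)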
LOG-05.1: Are security audit logs monitored to detect activity outside of typical or expected patterns?
Answer: Yes (CSP-owned)
CSP Implementation: AWS provides near-real-time alerts when AWS monitoring tools show indications of compromise or potential compromise, based upon threshold alarming mechanisms determined by AWS service and security teams. AWS correlates information gained from logical and physical monitoring systems to enhance security on an as-needed basis. Upon assessment and discovery of risk, Amazon disables accounts that display atypical usage matching the characteristics of bad actors. The AWS Security team extracts all log messages related to system access and provides reports to designated officials. Log analysis is performed to identify events based on defined risk-management parameters.
CCM Control: LOG-05, Audit Logs Monitoring and Response (Logging and Monitoring)
Control Specification: "Monitor security audit logs to detect activity outside of typical or expected patterns. Establish and follow a defined process to review and take appropriate and timely actions on detected anomalies."

LOG-05.2: Is a process established and followed to review and take appropriate and timely actions on detected anomalies?
Answer: Yes (CSP-owned)
CSP Implementation: See the response to Question ID LOG-05.1.
CCM Control: LOG-05, Audit Logs Monitoring and Response (Logging and Monitoring); specification as above.

LOG-06.1: Is a reliable time source being used across all relevant information processing systems?
Answer: Yes (CSP-owned)
CSP Implementation: In alignment with ISO 27001 standards, AWS information systems utilize internal system clocks synchronized via NTP (Network Time Protocol). AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: LOG-06, Clock Synchronization (Logging and Monitoring)
Control Specification: "Use a reliable time source across all relevant information processing systems."
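On the customer side of LOG-06, clock synchronization on an instance can be spot-checked against an NTP reference. The sketch below assumes the third-party ntplib package (pip install ntplib) and, on Amazon EC2, the Amazon Time Sync Service link-local endpoint; elsewhere, substitute a public NTP pool.

# Illustrative sketch only: check local clock offset against an NTP
# reference (customer-side view of LOG-06). Assumes "ntplib" is installed.
import ntplib

client = ntplib.NTPClient()
# 169.254.169.123 is the Amazon Time Sync Service endpoint on EC2;
# use "pool.ntp.org" outside EC2.
response = client.request("169.254.169.123", version=3)

# Offset between the local clock and the NTP reference, in seconds.
print(f"clock offset: {response.offset:+.6f} s")
if abs(response.offset) > 1.0:
    print("WARNING: drift exceeds 1 second; check the NTP daemon")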
LOG-07.1: Are logging requirements for information meta/data system events established, documented, and implemented?
Answer: Yes (CSP-owned)
CSP Implementation: AWS has identified auditable event categories across systems and devices within the AWS system. Service teams configure auditing features to record security-related events continuously, in accordance with requirements. The log storage system is designed to provide a highly scalable, highly available service that automatically increases capacity as the need for log storage grows. Audit records contain a set of data elements to support the necessary analysis requirements, and are available to the AWS Security team or other appropriate teams for inspection or analysis on demand, and in response to security-related or business-impacting events. Designated personnel on AWS teams receive automated alerts in the event of an audit processing failure (for example, software or hardware errors). When alerted, on-call personnel open a trouble ticket and track the event until it is resolved. AWS logging and monitoring processes are reviewed by independent third-party auditors for continued compliance with SOC, PCI DSS, and ISO 27001.
CCM Control: LOG-07, Logging Scope (Logging and Monitoring)
Control Specification: "Establish, document, and implement which information meta/data system events should be logged. Review and update the scope at least annually, or whenever there is a change in the threat environment."

LOG-07.2: Is the scope reviewed and updated at least annually, or whenever there is a change in the threat environment?
Answer: Yes (CSP-owned)
CSP Implementation: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control: LOG-07, Logging Scope (Logging and Monitoring); specification as above.

LOG-08.1: Are audit records generated, and do they contain relevant security information?
Answer: Yes (CSP-owned)
CSP Implementation: AWS has identified auditable event categories across systems and devices within the AWS system. Service teams configure auditing features to record security-related events continuously, in accordance with requirements. The log storage system is designed to provide a highly scalable, highly available service that automatically increases capacity as the need for log storage grows. Audit records contain a set of data elements to support the necessary analysis requirements, and are available to the AWS Security team or other appropriate teams for inspection or analysis on demand, and in response to security-related or business-impacting events.
CCM Control: LOG-08, Log Records (Logging and Monitoring)
Control Specification: "Generate audit records containing relevant security information."
LOG-09.1: Does the information system protect audit records from unauthorized access, modification, and deletion?
Answer: Yes (CSP-owned)
CSP Implementation: In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. Refer to "AWS: Overview of Security Processes" for additional details, available at: http://aws.amazon.com/security/security-learning/
CCM Control: LOG-09, Log Protection (Logging and Monitoring)
Control Specification: "The information system protects audit records from unauthorized access, modification, and deletion."

LOG-10.1: Are monitoring and internal reporting capabilities established to report on cryptographic operations, encryption, and key management policies, processes, procedures, and controls?
Answer: Yes (Shared, CSP and CSC)
CSP Implementation: AWS has identified auditable event categories across systems and devices within the AWS system. Service teams configure auditing features to record security-related events continuously, in accordance with requirements. The log storage system is designed to provide a highly scalable, highly available service that automatically increases capacity as the need for log storage grows. Audit records contain a set of data elements to support the necessary analysis requirements, and are available to the AWS Security team or other appropriate teams for inspection or analysis on demand, and in response to security-related or business-impacting events. Designated personnel on AWS teams receive automated alerts in the event of an audit processing failure (for example, software or hardware errors). When alerted, on-call personnel open a trouble ticket and track the event until it is resolved. AWS logging and monitoring processes are reviewed by independent third-party auditors for continued compliance with SOC, PCI DSS, and ISO 27001.
CSC Responsibilities: AWS customers are responsible for key management within their AWS environments.
CCM Control: LOG-10, Encryption Monitoring and Reporting (Logging and Monitoring)
Control Specification: "Establish and maintain a monitoring and internal reporting capability over the operations of cryptographic, encryption, and key management policies, processes, procedures, and controls."

LOG-11.1: Are key lifecycle management events logged and monitored to enable auditing and reporting on cryptographic keys' usage?
Answer: N/A (CSC-owned)
CSP Implementation: This is a customer responsibility.
CCM Control: LOG-11, Transaction/Activity Logging (Logging and Monitoring)
Control Specification: "Log and monitor key lifecycle management events to enable auditing and reporting on usage of cryptographic keys."
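Because key management and key-lifecycle logging (LOG-10/LOG-11) are customer responsibilities, one hedged illustration is to review recent AWS KMS events recorded in the CloudTrail event history, which covers roughly the last 90 days:

# Illustrative sketch only: review recent KMS key-usage and key-lifecycle
# events from CloudTrail event history (customer side of LOG-10/LOG-11).
import boto3

cloudtrail = boto3.client("cloudtrail")

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "kms.amazonaws.com"}
    ],
    PaginationConfig={"MaxItems": 50},
)

for page in pages:
    for event in page["Events"]:
        # Lifecycle events include CreateKey, DisableKey, and
        # ScheduleKeyDeletion; usage events include Encrypt, Decrypt,
        # and GenerateDataKey.
        print(event["EventTime"], event["EventName"], event.get("Username", "-"))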
LOG-12.1: Is physical access logged and monitored using an auditable access control system?
Answer: Yes (CSP-owned)
CSP Implementation: Access to data centers is logged, and only authorized users are allowed into data centers. Visitors follow the visitor access process, and their relevant details, along with the business purpose, are logged in the data center access log system. The access log is retained for 90 days, unless longer retention is legally required.
CCM Control: LOG-12, Access Control Logs (Logging and Monitoring)
Control Specification: "Monitor and log physical access using an auditable access control system."

LOG-13.1: Are processes and technical measures for reporting monitoring system anomalies and failures defined, implemented, and evaluated?
Answer: Yes (CSP-owned)
CSP Implementation: In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. Refer to "AWS: Overview of Security Processes" for additional details, available at: http://aws.amazon.com/security/security-learning/
CCM Control: LOG-13, Failures and Anomalies Reporting (Logging and Monitoring)
Control Specification: "Define, implement, and evaluate processes, procedures, and technical measures for the reporting of anomalies and failures of the monitoring system, and provide immediate notification to the accountable party."

LOG-13.2: Are accountable parties immediately notified about anomalies and failures?
Answer: Yes (CSP-owned)
CSP Implementation: AWS provides near-real-time alerts when AWS monitoring tools show indications of compromise or potential compromise, based upon threshold alarming mechanisms determined by AWS service and security teams. AWS correlates information gained from logical and physical monitoring systems to enhance security on an as-needed basis. Upon assessment and discovery of risk, Amazon disables accounts that display atypical usage matching the characteristics of bad actors. The AWS Security team extracts all log messages related to system access and provides reports to designated officials. Log analysis is performed to identify events based on defined risk-management parameters.
CCM Control: LOG-13, Failures and Anomalies Reporting (Logging and Monitoring); specification as above.
SEF-01.1: Are policies and procedures for security incident management, e-discovery, and cloud forensics established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
CSP Implementation: AWS incident response program plans and procedures have been developed in alignment with the ISO 27001 standard; AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. In addition, the "AWS: Overview of Security Processes" whitepaper provides further details, available at: http://aws.amazon.com/security/security-learning/
CCM Control: SEF-01, Security Incident Management Policy and Procedures (Security Incident Management, E-Discovery & Cloud Forensics)
Control Specification: "Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for security incident management, e-discovery, and cloud forensics. Review and update the policies and procedures at least annually."

SEF-01.2: Are policies and procedures reviewed and updated annually?
Answer: Yes (CSP-owned)
CSP Implementation: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control: SEF-01 (Security Incident Management, E-Discovery & Cloud Forensics); specification as above.

SEF-02.1: Are policies and procedures for the timely management of security incidents established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
CSP Implementation: See the response to Question ID SEF-01.1.
CCM Control: SEF-02, Service Management Policy and Procedures (Security Incident Management, E-Discovery & Cloud Forensics)
Control Specification: "Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the timely management of security incidents. Review and update the policies and procedures at least annually."

SEF-02.2: Are policies and procedures for the timely management of security incidents reviewed and updated at least annually?
Answer: Yes (CSP-owned)
CSP Implementation: See the response to Question ID SEF-01.2.
CCM Control: SEF-02 (Security Incident Management, E-Discovery & Cloud Forensics); specification as above.

SEF-03.1: Is a security incident response plan that includes relevant internal departments, impacted CSCs, and other business-critical relationships (such as supply chain) established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
CSP Implementation: See the response to Question ID SEF-01.1.
CCM Control: SEF-03, Incident Response Plans (Security Incident Management, E-Discovery & Cloud Forensics)
Control Specification: "Establish, document, approve, communicate, apply, evaluate, and maintain a security incident response plan, which includes but is not limited to: relevant internal departments, impacted CSCs, and other business-critical relationships (such as supply chain) that may be impacted."
SEF-04.1: Is the security incident response plan tested and updated for effectiveness, as necessary, at planned intervals or upon significant organizational or environmental changes?
Answer: Yes (CSP-owned)
CSP Implementation: AWS incident response plans are tested at least annually.
CCM Control: SEF-04, Incident Response Testing (Security Incident Management, E-Discovery & Cloud Forensics)
Control Specification: "Test and update as necessary incident response plans at planned intervals, or upon significant organizational or environmental changes, for effectiveness."

SEF-05.1: Are information security incident metrics established and monitored?
Answer: Yes (CSP-owned)
CSP Implementation: AWS security metrics are monitored and analyzed in accordance with the ISO 27001 standard; refer to ISO 27001 Annex A, domain 16, for further details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: SEF-05, Incident Response Metrics (Security Incident Management, E-Discovery & Cloud Forensics)
Control Specification: "Establish and monitor information security incident metrics."

SEF-06.1: Are processes, procedures, and technical measures supporting business processes to triage security-related events defined, implemented, and evaluated?
Answer: Yes (CSP-owned)
CSP Implementation: AWS incident response program plans and procedures have been developed in alignment with the ISO 27001 standard; AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. In addition, the "AWS: Overview of Security Processes" whitepaper provides further details, available at: http://aws.amazon.com/security/security-learning/
CCM Control: SEF-06, Event Triage Processes (Security Incident Management, E-Discovery & Cloud Forensics)
Control Specification: "Define, implement, and evaluate processes, procedures, and technical measures supporting business processes to triage security-related events."

SEF-07.1: Are processes, procedures, and technical measures for security breach notifications defined and implemented?
Answer: Yes (CSP-owned)
CSP Implementation: AWS employees are trained on how to recognize suspected security incidents and where to report them. When appropriate, incidents are reported to relevant authorities. AWS maintains the AWS Security Bulletins webpage, located at https://aws.amazon.com/security/security-bulletins/, to notify customers of security and privacy events affecting AWS services; customers can subscribe to the Security Bulletin RSS feed to keep abreast of security announcements. The customer support team maintains the Service Health Dashboard, located at http://status.aws.amazon.com/, to alert customers to any broadly impacting availability issues.
CCM Control: SEF-07, Security Breach Notification (Security Incident Management, E-Discovery & Cloud Forensics)
Control Specification: "Define and implement processes, procedures, and technical measures for security breach notifications. Report security breaches and assumed security breaches, including any relevant supply chain breaches, as per applicable SLAs, laws, and regulations."
SEF-07.2: Are security breaches and assumed security breaches reported (including any relevant supply chain breaches) as per applicable SLAs, laws, and regulations?
Answer: Yes (CSP-owned)
CSP Implementation: AWS maintains the AWS Security Bulletins webpage and the Service Health Dashboard to notify customers of security, privacy, and availability events, as described in the response to Question ID SEF-07.1.
CCM Control: SEF-07, Security Breach Notification (Security Incident Management, E-Discovery & Cloud Forensics); specification as above.

SEF-08.1: Are points of contact maintained for applicable regulation authorities, national and local law enforcement, and other legal jurisdictional authorities?
Answer: Yes (CSP-owned)
CSP Implementation: AWS maintains contacts with industry bodies, risk and compliance organizations, local authorities, and regulatory bodies as required by the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: SEF-08, Points of Contact Maintenance (Security Incident Management, E-Discovery & Cloud Forensics)
Control Specification: "Maintain points of contact for applicable regulation authorities, national and local law enforcement, and other legal jurisdictional authorities."

STA-01.1: Are policies and procedures implementing the shared security responsibility model (SSRM) within the organization established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
CSP Implementation: Security and compliance is a shared responsibility between AWS and the customer. The shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Refer to the shared responsibility model: https://aws.amazon.com/compliance/shared-responsibility-model/
CCM Control: STA-01, SSRM Policy and Procedures (Supply Chain Management, Transparency and Accountability)
Control Specification: "Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the application of the Shared Security Responsibility Model (SSRM) within the organization. Review and update the policies and procedures at least annually."
STA-01.2: Are the policies and procedures that apply the SSRM reviewed and updated annually?
Answer: Yes (CSP-owned)
CSP Implementation: AWS Information Security Management System policies that are in scope for the SSRM are reviewed and updated annually, and as necessary. Security and compliance is a shared responsibility between AWS and the customer; refer to the shared responsibility model: https://aws.amazon.com/compliance/shared-responsibility-model/
CCM Control: STA-01 (Supply Chain Management, Transparency and Accountability); specification as above.

STA-02.1: Is the SSRM applied, documented, implemented, and managed throughout the supply chain for the cloud service offering?
Answer: N/A (CSP-owned)
CSP Implementation: AWS proactively informs customers of any subcontractors who have access to customer-owned content uploaded onto AWS, including content that may contain personal data. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, refer to: https://aws.amazon.com/compliance/third-party-access/
CCM Control: STA-02, SSRM Supply Chain (Supply Chain Management, Transparency and Accountability)
Control Specification: "Apply, document, implement, and manage the SSRM throughout the supply chain for the cloud service offering."

STA-03.1: Is the CSC given SSRM guidance detailing information about SSRM applicability throughout the supply chain?
Answer: N/A (CSP-owned)
CSP Implementation: See the response to Question ID STA-02.1.
CCM Control: STA-03, SSRM Guidance (Supply Chain Management, Transparency and Accountability)
Control Specification: "Provide SSRM guidance to the CSC detailing information about the SSRM applicability throughout the supply chain."

STA-04.1: Is the shared ownership and applicability of all CSA CCM controls delineated according to the SSRM for the cloud service offering?
Answer: Yes (CSP-owned)
CSP Implementation: Security and compliance is a shared responsibility between AWS and the customer, and the division varies by the cloud services used. The shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Refer to the shared responsibility model: https://aws.amazon.com/compliance/shared-responsibility-model/
CCM Control: STA-04, SSRM Control Ownership (Supply Chain Management, Transparency and Accountability)
Control Specification: "Delineate the shared ownership and applicability of all CSA CCM controls according to the SSRM for the cloud service offering."
STA-05.1: Is SSRM documentation for all cloud services the organization uses reviewed and validated?
Answer: Yes (CSP-owned)
CSP Implementation: Security and compliance is a shared responsibility between AWS and the customer. The shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Refer to the shared responsibility model: https://aws.amazon.com/compliance/shared-responsibility-model/
CCM Control: STA-05, SSRM Documentation Review (Supply Chain Management, Transparency and Accountability)
Control Specification: "Review and validate SSRM documentation for all cloud services offerings the organization uses."

STA-06.1: Are the portions of the SSRM the organization is responsible for implemented, operated, audited, or assessed?
Answer: Yes (CSP-owned)
CSP Implementation: AWS has established a formal, periodic audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM Control: STA-06, SSRM Control Implementation (Supply Chain Management, Transparency and Accountability)
Control Specification: "Implement, operate, and audit or assess the portions of the SSRM which the organization is responsible for."

STA-07.1: Is an inventory of all supply chain relationships developed and maintained?
Answer: N/A (CSP-owned)
CSP Implementation: AWS performs periodic reviews of SSRM service and co-location providers to validate adherence to AWS security and operational standards, and maintains standard contract review and signature processes that include legal reviews with consideration of protecting AWS resources. AWS proactively informs customers of any subcontractors who have access to customer-owned content uploaded onto AWS, including content that may contain personal data. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS.
CCM Control: STA-07, Supply Chain Inventory (Supply Chain Management, Transparency and Accountability)
Control Specification: "Develop and maintain an inventory of all supply chain relationships."

STA-08.1: Are risk factors associated with all organizations within the supply chain periodically reviewed by CSPs?
Answer: N/A (CSP-owned)
CSP Implementation: See the response to Question ID STA-07.1.
CCM Control: STA-08, Supply Chain Risk Management (Supply Chain Management, Transparency and Accountability)
Control Specification: "CSPs periodically review risk factors associated with all organizations within their supply chain."
STA-09.1: Do service agreements between CSPs and CSCs (tenants) incorporate at least the following mutually agreed-upon provisions and/or terms?
• Scope, characteristics, and location of business relationship and services offered
• Information security requirements (including SSRM)
• Change management process
• Logging and monitoring capability
• Incident management and communication procedures
• Right to audit and third-party assessment
• Service termination
• Interoperability and portability requirements
• Data privacy
Answer: Yes (Shared, CSP and CSC)
CSP Implementation: AWS service agreements include multiple provisions and terms. For additional details, refer to the sample AWS Customer Agreement online: https://aws.amazon.com/agreement/
CCM Control: STA-09, Primary Service and Contractual Agreement (Supply Chain Management, Transparency and Accountability)
Control Specification: "Service agreements between CSPs and CSCs (tenants) must incorporate at least the mutually agreed-upon provisions and/or terms listed in the question above."

STA-10.1: Are supply chain agreements between CSPs and CSCs reviewed at least annually?
Answer: Yes (CSP-owned)
CSP Implementation: AWS third-party agreement processes include periodic review and reporting, and are reviewed by independent auditors.
CCM Control: STA-10, Supply Chain Agreement Review (Supply Chain Management, Transparency and Accountability)
Control Specification: "Review supply chain agreements between CSPs and CSCs at least annually."

STA-11.1: Is there a process for conducting internal assessments at least annually to confirm the conformance and effectiveness of standards, policies, procedures, and SLA activities?
Answer: Yes (CSP-owned)
CSP Implementation: AWS has established a formal, periodic audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM Control: STA-11, Internal Compliance Testing (Supply Chain Management, Transparency and Accountability)
Control Specification: "Define and implement a process for conducting internal assessments to confirm conformance and effectiveness of standards, policies, procedures, and service level agreement activities at least annually."

STA-12.1: Are policies that require all supply chain CSPs to comply with information security, confidentiality, access control, privacy, audit, personnel policy, and service level requirements and standards implemented?
Answer: Yes (CSP-owned)
CSP Implementation: AWS third-party agreement processes include periodic review and reporting, and are reviewed by independent auditors.
CCM Control: STA-12, Supply Chain Service Agreement Compliance (Supply Chain Management, Transparency and Accountability)
Control Specification: "Implement policies requiring all CSPs throughout the supply chain to comply with information security, confidentiality, access control, privacy, audit, personnel policy, and service level requirements and standards."
STA-13.1: Are supply chain partner IT governance policies and procedures reviewed periodically?
Answer: N/A (CSP-owned)
CSP Implementation: AWS does not utilize third parties to provide services to customers, but does utilize co-location providers in a limited capacity to house some AWS data centers. These controls are audited twice annually in our SOC 1/2 audits, and annually in our ISO 27001/17/18 audits. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, refer to: https://aws.amazon.com/compliance/third-party-access/
CCM Control: STA-13, Supply Chain Governance Review (Supply Chain Management, Transparency and Accountability)
Control Specification: "Periodically review the organization's supply chain partners' IT governance policies and procedures."

STA-14.1: Is a process to conduct periodic security assessments for all supply chain organizations defined and implemented?
Answer: N/A (CSP-owned)
CSP Implementation: See the response to Question ID STA-13.1.
CCM Control: STA-14, Supply Chain Data Security Assessment (Supply Chain Management, Transparency and Accountability)
Control Specification: "Define and implement a process for conducting security assessments periodically for all organizations within the supply chain."

TVM-01.1: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained to identify, report, and prioritize the remediation of vulnerabilities to protect systems against vulnerability exploitation?
Answer: Yes (CSP-owned)
CSP Implementation: The AWS Security team notifies and coordinates with the appropriate service teams when conducting security-related activities within the system boundary. Activities include vulnerability scanning, contingency testing, and incident response exercises. AWS performs external vulnerability assessments at least quarterly, and identified issues are investigated and tracked to resolution. Additionally, AWS performs unannounced penetration tests by engaging independent third parties to probe the defenses and device configuration settings within the system.
CCM Control: TVM-01, Threat and Vulnerability Management Policy and Procedures (Threat & Vulnerability Management)
Control Specification: "Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures to identify, report, and prioritize the remediation of vulnerabilities in order to protect systems against vulnerability exploitation. Review and update the policies and procedures at least annually."
TVM-01.2: Are threat and vulnerability management policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
CSP Implementation: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control: TVM-01 (Threat & Vulnerability Management); specification as above.

TVM-02.1: Are policies and procedures to protect against malware on managed assets established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned)
CSP Implementation: AWS programs, processes, and procedures for managing antivirus and malicious software are in alignment with ISO 27001 standards. AWS SOC reports provide further details; in addition, refer to ISO 27001 standard, Annex A, domain 12. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: TVM-02, Malware Protection Policy and Procedures (Threat & Vulnerability Management)
Control Specification: "Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures to protect against malware on managed assets. Review and update the policies and procedures at least annually."

TVM-02.2: Are asset management and malware protection policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned)
CSP Implementation: Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM Control: TVM-02 (Threat & Vulnerability Management); specification as above.

TVM-03.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to enable scheduled and emergency responses to vulnerability identifications (based on the identified risk)?
Answer: Yes (CSP-owned)
CSP Implementation: See the response to Question ID TVM-01.1.
CCM Control: TVM-03, Vulnerability Remediation Schedule (Threat & Vulnerability Management)
Control Specification: "Define, implement, and evaluate processes, procedures, and technical measures to enable both scheduled and emergency responses to vulnerability identifications, based on the identified risk."
TVM-04.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to update detection tools, threat signatures, and compromise indicators on a weekly (or more frequent) basis?
Answer: Yes (CSP-owned)
CSP Implementation: AWS programs, processes, and procedures for managing antivirus and malicious software are in alignment with ISO 27001 standards. AWS SOC reports provide further details; in addition, refer to ISO 27001 standard, Annex A, domain 12. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: TVM-04, Detection Updates (Threat & Vulnerability Management)
Control Specification: "Define, implement, and evaluate processes, procedures, and technical measures to update detection tools, threat signatures, and indicators of compromise on a weekly, or more frequent, basis."

TVM-05.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to identify updates for applications that use third-party or open-source libraries (according to the organization's vulnerability management policy)?
Answer: Yes (CSP-owned)
CSP Implementation: AWS implements open-source software and custom code within its services. All open-source software, including binary or machine-executable code from third parties, is reviewed and approved by the Open Source Group prior to implementation, and has source code that is publicly accessible. AWS service teams are prohibited from implementing code from third parties unless it has been approved through the open-source review. All code developed by AWS is available for review by the applicable service team as well as AWS Security. By its nature, open-source code is available for review by the Open Source Group prior to granting authorization for use within Amazon.
CCM Control: TVM-05, External Library Vulnerabilities (Threat & Vulnerability Management)
Control Specification: "Define, implement, and evaluate processes, procedures, and technical measures to identify updates for applications which use third-party or open-source libraries, according to the organization's vulnerability management policy."

TVM-06.1: Are processes, procedures, and technical measures defined, implemented, and evaluated for periodic independent third-party penetration testing?
Answer: Yes (CSP-owned)
CSP Implementation: AWS Security regularly performs penetration testing; these engagements may include carefully selected industry experts and independent security firms. AWS does not share the results directly with customers. AWS third-party auditors review the results to verify the frequency of penetration testing and the remediation of findings.
CCM Control: TVM-06, Penetration Testing (Threat & Vulnerability Management)
Control Specification: "Define, implement, and evaluate processes, procedures, and technical measures for the periodic performance of penetration testing by independent third parties."

TVM-07.1: Are processes, procedures, and technical measures defined, implemented, and evaluated for vulnerability detection on organizationally managed assets at least monthly?
Answer: No (CSP-owned)
CSP Implementation: AWS Security performs regular vulnerability scans on the host operating system, web applications, and databases in the AWS environment using a variety of tools. External vulnerability assessments are conducted by an AWS-approved third-party vendor at least quarterly.
CCM Control: TVM-07, Vulnerability Identification (Threat & Vulnerability Management)
Control Specification: "Define, implement, and evaluate processes, procedures, and technical measures for the detection of vulnerabilities on organizationally managed assets at least monthly."
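The TVM answers above describe AWS's internal program; within a customer's own environment, vulnerability identification and metrics (the customer side of TVM-07 and TVM-10) can be approached with services such as Amazon Inspector. A minimal sketch, assuming Inspector has already been enabled in the account:

# Illustrative sketch only: pull critical vulnerability findings for your
# own AWS resources from Amazon Inspector (customer side of TVM-07/TVM-10).
import boto3

inspector = boto3.client("inspector2")

response = inspector.list_findings(
    filterCriteria={
        "severity": [{"comparison": "EQUALS", "value": "CRITICAL"}]
    },
    maxResults=25,
)

for finding in response["findings"]:
    print(finding["severity"], finding["title"])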
TVM-08.1: Is vulnerability remediation prioritized using a risk-based model from an industry-recognized framework?
Answer: Yes (CSP-owned)
CSP Implementation: AWS Security performs regular vulnerability scans on the host operating system, web applications, and databases in the AWS environment using a variety of tools.
CCM Control: TVM-08, Vulnerability Prioritization (Threat & Vulnerability Management)
Control Specification: "Use a risk-based model for effective prioritization of vulnerability remediation, using an industry-recognized framework."

TVM-09.1: Is a process defined and implemented to track and report vulnerability identification and remediation activities that include stakeholder notification?
Answer: Yes (CSP-owned)
CSP Implementation: The AWS Security team notifies and coordinates with the appropriate service teams when conducting security-related activities within the system boundary. Activities include vulnerability scanning, contingency testing, and incident response exercises. AWS performs external vulnerability assessments at least quarterly, and identified issues are investigated and tracked to resolution. Additionally, AWS performs unannounced penetration tests by engaging independent third parties to probe the defenses and device configuration settings within the system.
CCM Control: TVM-09, Vulnerability Management Reporting (Threat & Vulnerability Management)
Control Specification: "Define and implement a process for tracking and reporting vulnerability identification and remediation activities that includes stakeholder notification."

TVM-10.1: Are metrics for vulnerability identification and remediation established, monitored, and reported at defined intervals?
Answer: Yes (Shared, CSP and CSC)
CSP Implementation: AWS tracks metrics for internal process measurements and improvements that align with our policies and standards.
CSC Responsibilities: AWS customers are responsible for vulnerability management within their AWS environments.
CCM Control: TVM-10, Vulnerability Management Metrics (Threat & Vulnerability Management)
Control Specification: "Establish, monitor, and report metrics for vulnerability identification and remediation at defined intervals."

UEM-01.1: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for all endpoints?
Answer: Yes (CSP-owned)
CSP Implementation: AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.
CCM Control: UEM-01, Endpoint Devices Policy and Procedures (Universal Endpoint Management)
Control Specification: "Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for all endpoints. Review and update the policies and procedures at least annually."
Yes CSPowned Policies are reviewed approved by AWS leadership at least annually or as needed basis UEM01 Establish document approve communicate apply evaluate and maintain policies and procedures for all endpoints Review and update the policies and procedures at least annually Endpoint Devices Policy and Procedures Universal Endpoint Management Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title UEM 021 Is there a defined documented applicable and evaluated list containing approved services applications and the sources of applications (stores) acceptable for use by endpoints when accessing or storing organization managed data? Yes CSPowned Amazon has established baseline infrastructure standards in alignment with industry best practices All software installations are still monitored by AWS security and mandatory security controls and software is always required Users cannot continue to use their laptop or desktop if required software is not installed Their device will be quarantined from network access until the nonconformance is resolved UEM02 Define document apply and evaluate a list of approved services applications and sources of applications (stores) acceptable for use by endpoints when accessing or storing organization managed data Application and Service Approval Universal Endpoint Management UEM 031 Is a process defined and implemented to validate endpoint device compatibility with operating systems and applications? Yes CSPowned Amazon has established baseline infrastructure standards in alignment with industry best practices This includes endpoint compatibility with operating systems and applications UEM03 Define and implement a process for the validation of the endpoint device's compatibility with operating systems and applications Compatibilit y Universal Endpoint Management UEM 041 Is an inventory of all endpoints used and maintained to store and access company data? Yes CSPowned Amazon has established baseline infrastructure standards in alignment with industry best practices This includes endpoint inventory management UEM04 Maintain an inventory of all endpoints used to store and access company data Endpoint Inventory Universal Endpoint Management UEM 051 Are processes procedures and technical measures defined implemented and evaluated to enforce policies and controls for all endpoints permitted to access systems and/or store transmit or process organizational data? 
NA. AWS employees do not access, process or change customer data in the course of providing our services. AWS has separate CORP and PROD environments, which are separated from each other via physical and logical controls. Only approved users would have the ability to be granted access from CORP to PROD. That access is managed by a separate permission system, requires an approved ticket and MFA, is time limited, and all activities are tracked. UEM-05: Define, implement and evaluate processes, procedures and technical measures to enforce policies and controls for all endpoints permitted to access systems and/or store, transmit or process organizational data. Endpoint Management. Universal Endpoint Management.
UEM-06.1: Are all relevant interactive-use endpoints configured to require an automatic lock screen? Yes. CSP-owned. Amazon has established baseline infrastructure standards in alignment with industry best practices. These include automatic lockout after a defined period of inactivity. UEM-06: Configure all relevant interactive-use endpoints to require an automatic lock screen. Automatic Lock Screen. Universal Endpoint Management.
UEM-07.1: Are changes to endpoint operating systems, patch levels and/or applications managed through the organizational change management process? Yes. CSP-owned. Amazon has established baseline infrastructure standards in alignment with industry best practices. All software installations are monitored by AWS Security, and mandatory security controls and software are always required. Users cannot continue to use their laptop or desktop if required software is not installed; their device will be quarantined from network access until the nonconformance is resolved. UEM-07: Manage changes to endpoint operating systems, patch levels and/or applications through the company's change management processes. Operating Systems. Universal Endpoint Management.
UEM-08.1: Is information protected from unauthorized disclosure on managed endpoints with storage encryption? NA. CSP-owned. AWS employees do not access, process or change customer data in the course of providing our services. AWS has separate CORP and PROD environments, which are separated from each other via physical and logical controls. Only approved users would have the ability to be granted access from CORP to PROD. That access is managed by a separate permission system, requires an approved ticket and MFA, is time limited, and all activities are tracked. Additionally, customers are provided tools to encrypt data within the AWS environment to add additional layers of security; the encrypted data can only be accessed by authorized customer personnel with access to the encryption keys. UEM-08: Protect information from unauthorized disclosure on managed endpoint devices with storage encryption. Storage Encryption. Universal Endpoint Management.
UEM-09.1: Are anti-malware detection and prevention technology services configured on managed endpoints?
Yes CSPowned AWS' program processes and procedures to managing antivirus / malicious software is in alignment with ISO 27001 standards Refer to AWS SOC reports provides further details In addition refer to ISO 27001 standard Annex A domain 12 for additional details AWS has been validated and certified by an independent auditor to confirm alignment with ISO 27001 certification standard UEM09 Configure managed endpoints with anti malware detection and prevention technology and services Anti Malware Detection and Prevention Universal Endpoint Management UEM 101 Are software firewalls configured on managed endpoints? Yes CSPowned Amazon assets (eg laptops) are configured with antivirus software that includes email filtering software firewalls and malware detection UEM10 Configure managed endpoints with properly configured software firewalls Software Firewall Universal Endpoint Management Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title UEM 111 Are managed endpoints configured with data loss prevention (DLP) technologies and rules per a risk assessment? NA AWS employees do not access process or change customer data in the course of providing our services AWS has separate CORP and PROD environments which are separated from each other via physical and logical controls AWS customers are responsible for the management of the data they place into AWS services AWS has no insight as to what type of content the customer chooses to store in AWS and the customer retains complete control of how they choose to classify their content where it is stored used and protected from disclosure UEM11 Configure managed endpoints with Data Loss Prevention (DLP) technologies and rules in accordance with a risk assessment Data Loss Prevention Universal Endpoint Management UEM 121 Are remote geolocation capabilities enabled for all managed mobile endpoints? No CSPowned No response is required as we have indicated no UEM12 Enable remote geo location capabilities for all managed mobile endpoints Remote Locate Universal Endpoint Management UEM 131 Are processes procedures and technical measures defined implemented and evaluated to enable remote company data deletion on managed endpoint devices? 
Yes. CSP-owned. The AWS scope for mobile devices is iOS- and Android-based mobile phones and tablets. AWS maintains a formal mobile device policy and associated procedures. Specifically, AWS mobile devices are only allowed access to AWS corporate fabric resources and cannot access the AWS production fabric, where customer content is stored. The AWS production fabric is separated from the corporate fabric by boundary protection devices that control the flow of information between fabrics. Approved firewall rule sets and access control lists between network fabrics restrict the flow of information to specific information system services. Access control lists and rule sets are reviewed and approved, and are automatically pushed to boundary protection devices on a periodic basis (at least every 24 hours) to ensure rule sets and access control lists are up to date. Consequently, mobile devices are not relevant to AWS customer content access. UEM-13: Define, implement and evaluate processes, procedures and technical measures to enable the deletion of company data remotely on managed endpoint devices. Remote Wipe. Universal Endpoint Management.
UEM-14.1: Are processes, procedures and technical and/or contractual measures defined, implemented and evaluated to maintain proper security of third-party endpoints with access to organizational assets? NA. AWS does not utilize third parties to provide services to customers, but does utilize co-location providers in a limited capacity to house some AWS data centers. These controls are audited twice annually in our SOC 1/2 audits and annually in our ISO 27001/17/18 audits. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, please refer to: https://aws.amazon.com/compliance/third-party-access/ UEM-14: Define, implement and evaluate processes, procedures and technical and/or contractual measures to maintain proper security of third-party endpoints with access to organizational assets. Third-Party Endpoint Security Posture. Universal Endpoint Management.
End of Standard
Further Reading
For additional information, see the following sources:
• AWS Compliance Quick Reference Guide
• AWS Answers to Key Compliance Questions
• AWS Cloud Security Alliance (CSA) Overview
Document Revisions
April 2022: Updated CAIQ template and updated responses to individual questions based on CAIQ v4.0.2
July 2018: 2018 validation and update
January 2018: Migrated to new template
January 2016: First publication
General
How_Cities_Can_Stop_Wasting_Money_Move_Faster_and_Innovate
ArchivedHow Cities Can Stop Wasting Money Move Faster and Innovate Simplify and Streamline IT with AWS Cloud Computing January 2016 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: h ttps://awsamazoncom/whitepapersArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 3 of 16 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 4 of 16 Contents Abstract 4 Stop Investing in Technology Infrastructure 5 Trend Toward the Cloud 6 Move Faster 7 Pick Your Project Pick One Thing 8 Manage the Scope 10 Take Advantage of New Innovations 12 Engage Your Citizens in Crowdsourcing 12 Automate Critical Functions for Citizens 14 Start Your Journey 15 Contributors 16 Abstract Local and r egional governments around the world are using the cloud to transform services improve their operations and reach new horizons for citizen services The Amazon Web Services (AWS) cloud enables data col lection analysis and decision making for smarter cities This whitepaper provides strategic considerations for local and regional governments to consider as they identify which IT systems and applications to move to the cloud Real examples that show how cities can stop wasting money move faster and innovate ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 5 of 16 Stop Investing in Technology Infrastructure Faced with pressure to innovate within fixed or shrinking budgets while meeting aggressive timelines governments are turning to Amazon Web S ervices (AWS) to provide costeffective scalable secure and flexible infrastructure necessary to make a difference The cloud provides rapid access to flexible and low cost IT resources With cloud computing local and regional governments no longer need to make large upfront investments in hardware or spend a lot of time and money on the heavy lifting of managing hardware “I wanted to move to a model where we can deliver more to our citizens and reduce the cost of delivering those services to them I wanted a product line that has the ability to scale and grow with my department AWS was an easy fit for us and the way we do business By shifting from capex to opex we can free up money and return those funds to areas that need it more—fire trucks a bridge or a sidewalk” Chris Chiancone CIO City of McKinney Instead government agencies can provision exactly the right type and size of computing resources needed to power your newest bright idea and drive operational efficiencies with your IT budget You can access as many resources as you need almost instantly and only pay for what 
you use AWS helps agencies reduce overall IT costs in multiple ways With cloud computing you do not have to invest in infrastructure before you know what AWS Cloud Computing AWS offers a broad set of global compute storage database analytics application and deployment services that help local and regional governments move faster lower IT costs and scale applications ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 6 of 16 demand will be You convert your capital expense into variable expense that fluctuates with demand and you pay only for the resources used Trend Toward the Cloud Local and regional governments are adopting cloud computing however identifying the correct projects to migrate can be overwhelming Applications that deliver increased return on investment (ROI) through reduced operational costs or deliver increased business results should be at the top of the priority list Applications are either critical or strategic —if they do not fit into either category they should be removed from the priority list Instead categorize applications that aren’t strategic or critical as legacy applications and determine if they need to be replaced or in some cases eliminated Figure 1: Focus Areas for Successful Cloud Projects When considering the AWS cloud for citizen services local and regional governments must first make sure that their IT plans align with their organizations’ business model Having a solid understanding of the core competencies of your organization will help you identify the areas that are best served through an external infrastructure such as the AWS cloud The following example shows how a city is using the AWS cloud to deliver more with less and reduc e costs City of McKinney City of McKinney Texas Turns to AWS to Deliver More Advanced Services for Less Money The City of McKinney Texas about 15 miles north of Dallas and home to 155000 people was ranked the No 1 Best Place to live in 2014 by Money Magazine The city’s IT department is going allin on AWS and uses the platform to run a wide range of services and applications such as its land management and records management systems By using AWS the city’s IT department can focus on Save on costs and provide efficiencies over current solutions Improve outcomes of existing services Capitalize on the advantages of moving to the cloud ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 7 of 16 delivering new and better services for its fastgrowing population and city employees instead of spending resources buying and maintaining IT infrastructure City of McKinney chose AWS for our ability to scale and grow with the needs of their department AWS provides an easy fit for the way they do business Without having to own the infrastructure the City of McKinney has the ability to use cloud resources to address business needs By moving from a capex to an opex model they can now return funds to critical city projects Move Faster AWS has helped over 2 000 government agencies around the world successfully identify and migrate applications to the AWS platform resulting in significant business benefits The following steps help governments identify plan and implement new citizen services that take advantage of current technology to boost efficiencies save tax dollars and deliver an excellent use r experience Business Benefits of Agile Development on AWS • Trade capital expense for variable expense ⎯ Instead of having to invest heavily in data centers and servers 
before you know how you’re going to use them you can pay only when you consume computing resources and pay only for how much you consume • Benefit from massive economies of scale ⎯ By using cloud computing you can achieve a lower variable cost than you can get on your own Because usage from hundreds of thousands of customers is aggregated in the cloud providers such as AWS can achieve higher economies of scale that translate into lower payasyougo prices • Stop guessing capacity ⎯ Eliminate guessing on your infrastructure capacity needs When you make a capacity decision prior to deploying an application you might end up either sitting on expensive idle resources or dealing with limited capacity With cloud computing these problems go away You can access as much or as little as you need and scale up and down as required with only a few minutes’ notice ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 8 of 16 • Increase speed and agility ⎯ In a cloud computing environment new IT resources are only a click away which means you reduce the time it takes to make those resources available to your developers from weeks to just minutes This results in a dramatic increase in agility for the organization since the cost and time it takes to experiment and develop is significantly lower • Stop spending money on running and maintaining data centers ⎯ Focus on projects that differentiate your business not the infrastructure Cloud computing lets you focus on your own customers rather than on the heavy lifting of racking stacking and powering your data center Pick Your Project Pick One Thing A common mistake is starting too many projects at once A good first step is to identify a critical need and focus your development efforts on that service Completing the following actions will help drive success of the new service throughout the development cycle: • Find the right resources • Get all team members on board during initial planning phases • Secure executive buyin • Clearly communicate status through regularly scheduled meetings with all stakeholders Be flexible throughout the project Periodically take a fresh look to review the progress and be open to changes that may need to be incorporated into the project plan Many organizations choose to begin their cloud experiments with either creating a test environment for a new project (since it allows rapid prototyping of multiple options) or solv ing a disaster recovery n eed given that it is not physically based in their location Below is an example of an ideal first workload to start with The City of Asheville started with a disaster recovery (DR) solution as their first workload in the cloud ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 9 of 16 City of Asheville The City of Asheville NC Uses AWS for Disaster Recovery Located in the Blue Ridge and Great Smoky mountains in North Carolina the City of Asheville attracts both tourists and businesses Recent disasters like Hurricane Sandy led the city’s IT department to search for an offsite DR solution Working with AWS partner CloudVelox the city used AWS to build an agile disaster recovery solution without the time and cost of investing in an onpremises data center The City of Asheville views the geographic diversity of AWS as the key component for a successful DR solution Now the City of Asheville is using AWS for economic development using tools to develop great sites that attract large businesses and job development 
Validate with a Proof of Concept A proof of concept (POC) demonstrates that the service under consideration is financially viable The overall objective of a POC is to find solutions to technical problems such as how systems can be integrated or throughput can be achieved with a given configuration A POC should accomplish the following: • Validate the scope of the project The project team can validate or invalidate assumptions made during the design phase to make sure that the service will meet critical requirements • Highlight areas of concern Technical teams have a clear view of potential problems during the development and test phase with the opportunity to make functional changes before the service goes live • Demonstrate a sense of momentum Projects can sometimes be slow to start By testing a small number of users acting in a “citizen role ” the POC shows both development progress and helps to establish whether the service satisfies critical requirements and delivers a good user experience King County used a POC to realize cost savings in the use case below validating the project’ s viability ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 10 of 16 King County King County Saves $1 Million in First Year by Archiving Data in AWS Cloud King County is the most populous county in Washington State with about 19 million residents The county needed a more efficient and costeffective solution to replace a tapebased backup system used to store information generated by 17 different county agencies It turned to AWS for longterm archiving and storage using Amazon Glacier and NetApp’s AltaVault solution which helps the county meet federal security standards including HIPAA and the Criminal Justice Information Services (CJIS) regulations The county is saving about $1 million in the first year by not having to replace outdated servers and projects; an annual savings of about $200000 by reducing operational costs related to data storage King County selected AWS due to the mature services and rich feature set that is highly available secure cost competitive and easy to use King County has a longterm vision to shift to a virtual data center based on cloud computing Manage the Scope Defining the scope of your cloud migration or cloud application development project is key to success Often when developing new citizen services there is a desire to address all citizen needs with a single project while insufficient resources and changing definitions (requirements scope timeframes purpose deliverables and lack of appropriate management support) add to the challenge With a flexible cloud computing environment it is possible to tightly focus on a single issue develop an application that addresses that need and then iterate upon it with updates while the application is in flight This can minimize the impact of these issues allowing realworld piloting and improvements Since processes are always linked to other processes any unplanned changes affect these other interfacing processes With just a little structure and some checkpoints most of the major changes in scope can be avoided Start with a project that will involve a limited number of users This will allow you to control and manage the service development and production process more efficiently and effectively To get started select a service and define scope using the following actions: ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 11 of 16 • Define terms related to the 
project • Involve the right people in defining the scope • Accurately define processes • Define process boundaries explicitly • Outline high level interfaces between processes • Conduct a health check on the process interfaces • Realize that certain aspects of the project still make it too large to manage By minimizing the project scope local and regional governments can reduce development and administrative costs as well as achieve time savings Release Minimally V iable P roduct and Iterate When is the right time to release a citizen service? If released too soon it may lack necessary functionality and deliver a poor user experience If it is too elegant developers may spend too much time on functionality Releasing a minimally viable service and then iterating based on feedback can be an effective design process when designing citizen services With this approach you still guide the development but an iterative process allows citizens to provide feedback to help shape the functionality before it is locked down Only the local or regional government knows the “minimum” With no upfront costs and the ability to scale the cloud allows for this to happen quickly and easily from anywhere with device independence By the time the citizens access the site IT has already made several iterations so the public sees a more mature site It’s more productive to release early This minimizes development work on functionality that citizens do not want Most people are happy to help test the service to make sure that it meets their needs Additionally this stress testing will help uncover bugs that need to be fixed before the site goes into production This will help meet the ultimate goal: an excellent user experience The City of Boston is an example of how a city released a minimally viable product and continued to iterate on the product to get the best version for the needs of their citizens ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 12 of 16 City of Boston Quickly Identifies Road Conditions that Need Immediate Attention and Repair The City of Boston with technology partner Connected Bits has created the Street Bump program to drive innovative scalable technology to tackle tough local government challenges They are using AWS to propel machine learning with an app that uses a smartphone’s sensors – including the GPS and accelerometers to capture enough (big) data to identify bumps and disturbances that motorists experience while they drive throughout the city The big data collected helps the Boston’s Public Works Department to better understand roads streets and areas that require immediate attention and long term repair They have chosen AWS to create a scalable open and robust infrastructure that allows for this information to flow to and from city staff via the Open311 API This solution was created as a large multitenant softwa reasaservice platform so other cities can also leverage the same repository creating one data store for all cities Several other cities are interested in testing the next version Take Advantage of New Innovations Engage Your Citizens in Crowdsourcing The idea of soliciting customer input is not new Crowdsourcing has become an important business approach to define solutions to problems By tapping into the collective intelligence of the public local and regional government can validate service requirements prior to a lengthy design phase Crowdsourcing can improve both the productivity and creativity of your IT staff while minimizing design 
development and testing expenses. Let the citizens do the work—after all, they are the ones who will be using the service. Make sure it is designed to meet their requirements. Two examples of using crowdsourcing to provide real-time updates to the citizens are Moovit and Transport for London.
Moovit
With AWS, Moovit Now Processes 85 Million Requests Each Day
Moovit, headquartered in Israel, is redefining the transit experience by giving people the real-time information they need to get to places on time. With schedules, trip planning, navigation and crowdsourced reports, Moovit guides transit riders to the best, most efficient routes and makes it easy for locals and visitors to navigate the world's cities. Since launching in 2012, Moovit's free award-winning app for iPhone, Android and Windows Phone serves nearly 10 million users and is adding more than a million new users every month. The app is available across 400 cities in 35 countries, including the US, Canada, France, Spain, Italy, Brazil and the UK. Moovit's goal was to continue to add metros quickly, and it needed a solution that would scale just as fast. Moovit now uses AWS to host and deliver services for its public transportation trip-planning app — using Amazon CloudFront to rapidly deliver information to its users. The company made the decision to use AWS because it has servers that can handle the app's heavy request volume and different types of information, and because it supports multiple databases, including SQL and NoSQL, and includes storage options.
Transport for London
Transport for London Creates an Open Data Ecosystem with Amazon Web Services
Transport for London (TfL) has been running its flagship tfl.gov.uk website on AWS for over a year and serves over 3 million page views to between 600,000 and 700,000 visitors a day, with 54% of visits coming from mobile devices. TfL has been able to scale interactive services to this level (its previous site was static) by leveraging AWS services as an elastic buffer between its back-office services and the 76% of London's 8.4 million population that uses the site regularly to plan their journeys. Enhanced personalization for customers is now available on this site; in parallel, the department is fostering closer relationships with the third-party app and portal providers that contribute digital solutions of their own for London's travelers based on TfL's (openly licensed) transport data. TfL has chosen to release this data under an open data license, which has helped to establish an ecosystem of third-party developers also working on digital travel-related projects. Some 6,000 developers are now engaged in digital projects using TfL's anonymized open data, spawning 360 mobile apps to date.
Automate Critical Functions for Citizens
People are more connected to each other than ever before, and the increased connectivity of devices creates new opportunities for the public sector to truly become hubs of innovation, driving technology solutions to help improve citizens' lives. The Internet of Things (IoT) is the ever-expanding network of physical "things" that can connect to the Internet and the information that they transfer without requiring human interaction. "Things" in the IoT sense refer to a wide variety of devices embedded with electronics, software, sensors and network connectivity, which enable them to collect and
exchange data over the Internet. AWS is working with local and regional governments to apply IoT capabilities and solutions to the opportunities and challenges that face our customers. While the possibilities for IoT are virtually endless, the following diagram highlights use cases we are discussing with customers today.
Figure 2: Internet of Things Use Cases for Local and Regional Governments (use cases grouped under Transportation, Public Safety, Health & Well-Being, and City Services, covering parking solutions, connected smart intersections, smart routing/navigation, fleet tracking/monitoring, crowd control/management, officer safety, emergency notification, security solutions, air/particle quality, water control management, trash/garbage collection, lighting control, water metering, infrastructure monitoring, and building automation systems)
London City Airport
IoT Technologies Enhance Customer Experience at London City Airport
The 'Smart Airport Experience' project was funded by the government-run Technology Strategy Board in the UK and implemented at London City Airport, working with a technology team led by Living PlanIT SA. The goal of the project was to demonstrate how Internet of Things technologies could be used to both enhance customer experiences and improve operational efficiency at a popular business airport that already offers fast check-in to boarding times. The project used the Living PlanIT Urban Operating System (UOS™), hosted in an AWS environment, as the backbone for real-time data collection, processing, analytics, marshaling and event management.
Start Your Journey
AWS provides a number of important benefits to local and regional governments as the platform for running citizen services and infrastructure programs. It provides a range of flexible, cost-effective, scalable, elastic and secure capabilities that you can use to manage citizen data in the AWS cloud.
Work with AWS Government & Education Experts: Your dedicated Government and Education team includes solutions architects, business developers and partner managers ready to help you get started solving business problems with AWS. Get in touch with us to start building solutions »
Support: AWS customers can choose from a range of support options, including our hands-on support for enterprise IT environments. Learn more about AWS support options »
Professional Services: AWS has a world-class professional services team that can help you get more from your cloud deployment. It's easy to build solutions using our toolsets, but when you need help building complex solutions or migrating from an on-premises environment, we're there. Talk to your Government & Education Experts to learn more about professional services from AWS »
Contributors
The following individuals and organizations contributed to this document:
• Frank DiGiammarino, General Manager, AWS State and Local Government
• Carina Veksler, Public Sector Solutions, AWS Public Sector Sales
General
AWS_Certifications_Programs_Reports_and_ThirdParty_Attestations
ArchivedAWS C ertifications Programs R eports and ThirdParty Attestations March 2017 This paper has been archived For the latest information see A WS Services in Scope by Compliance ProgramArchived © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contract ual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents CJIS 1 CSA 1 Cyber Essentials Plus 2 DoD SRG Levels 2 and 4 2 FedRAMP SM 3 FERPA 3 FIPS 140 2 4 FISMA and DIACAP 4 GxP 4 HIPAA 5 IRAP 6 ISO 9001 6 ISO 27001 7 ISO 27017 8 ISO 27018 8 ITAR 9 MPAA 9 MTCS Tier 3 Certification 10 NIST 10 PCI DSS Level 1 11 SOC 1/ISAE 3 402 11 SOC 2 13 SOC 3 14 Further Reading 15 Document Revisions 15 Archived Abstract AWS engages with external certifying bodies and independent auditors to provide customers with considerable information regarding the policies processes and controls established and operated by AWS ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 1 CJIS AWS complies with the FBI's Criminal Justice Inf ormation Services (CJIS) standard We sign CJIS security agreements with our customers including allowing or performing any required employee background checks according to the CJIS Security Policy Law enforcement customers (and partners who manage CJI) are taking advantage of AWS services to improve the security and protection of CJI data using the advanced security services and features of AWS such as activity logging ( AWS CloudTrail ) encryption of data in motion and at rest (S3’s Server Side Encryption with the option to bring your own key) comprehensive key management and protection ( AWS Key Management Service and CloudHSM ) and integrated permission management (IAM federated identity management multi factor authentication) AWS has created a Criminal Justice Information Services (CJIS) Workbook in a security plan template format aligned to the CJIS Policy Areas Additionally a CJIS Whitepaper has been developed to help guide customers in their journey to cloud adoption Visit the CJIS Hub Page at https://awsamazoncom/compliance/cjis/ CSA In 2011 the Cloud Security Alliance (CSA) launched STAR an initiative to encourage transparency of security practices within cloud providers The CSA Security Trust & Assurance Registry (STAR) is a free pub licly accessible registry that documents the security controls provided by various cloud computing offerings thereby helping users assess the security of cloud providers they currently use or are considering contracting with AWS is a CSA STAR registrant and has completed the Cloud Security Alliance (CSA) Consensus Assessments Initiative Questionnaire (CAIQ) This CAIQ published by the CSA provides a way to reference and document what security controls exist in AWS’ Infrastructure as a Service offerings 
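The CJIS section above names the building blocks customers typically combine when protecting CJI-class data: activity logging with AWS CloudTrail, encryption at rest with S3 server-side encryption backed by customer-managed keys in AWS KMS, and permission management with IAM. As a minimal, illustrative sketch only (not an official CJIS configuration; the bucket, trail, and key names are hypothetical), a customer might wire the encryption and logging pieces together with boto3 roughly like this:

import boto3

# Hypothetical resource names -- substitute your own.
BUCKET = "example-cji-data-bucket"
TRAIL = "example-cji-audit-trail"

kms = boto3.client("kms")
s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

# Create a customer-managed KMS key so the key policy and lifecycle stay under the customer's control.
key_arn = kms.create_key(Description="Key for encrypting sensitive data at rest")["KeyMetadata"]["Arn"]

# Require SSE-KMS with that key by default for every object written to the bucket.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_arn,
            }
        }]
    },
)

# Record account API activity for audit purposes.
# (The bucket must already have a policy that allows CloudTrail to write log files -- not shown.)
cloudtrail.create_trail(Name=TRAIL, S3BucketName=BUCKET, IsMultiRegionTrail=True)
cloudtrail.start_logging(Name=TRAIL)

IAM policies, federated identity, and MFA enforcement would be layered on top in the same account, but they are account-specific and omitted from this sketch.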
The CAIQ provides 298 questions a cloud consumer and cloud auditor may wish to ask of a cloud provider See CSA Consensus Assessments Initiative Questionnaire ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 2 Cyber Essentials P lus Cyber Essentials Plus is a UK Government backed industry supported certification scheme introduced in the UK to help organizations demonstrate operational security against common cyber attacks It demonstrates the baseline controls AWS implements to mitigate the risk from common Internet based threats within the context of the UK Government's " 10 Steps to Cyber Security " It is backed by industry including the Federation of Small Businesses the Confederation of British Industry and a number of insurance organizations that offer incentives for businesses holding this certificatio n Cyber Essentials sets out the necessary technical controls; the related assurance framework shows how the independent assurance process works for Cyber Essentials Plus certification through an annual external assessment conducted by an accredited assess or Due to the regional nature of the certification the certification scope is limited to EU (Ireland) region DoD SRG Levels 2 and 4 The Department of Defense (DoD) Cloud Security Model (SRG) provides a formalized assessment and authorization process for cloud service providers (CSPs) to gain a DoD Provisional Authorization which can subsequently be leveraged by DoD customers A Provisional Authorization under the SRG provides a reusabl e certification that attests to our compliance with DoD standards reducing the time necessary for a DoD mission owner to assess and authorize one of their systems for operation on AWS AWS currently holds provisional authorizations at Levels 2 and 4 of th e SRG Additional information of the security control baselines defined for Levels 2 4 5 and 6 can be found at http://iasedisamil/cloud_security/Pages/indexaspx Visit the DoD Hub Page at https://awsamazoncom/compliance/dod/ ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 3 FedRAMPsm AWS is a Federal Risk and Authorization Management Program (FedRAMPsm) Compliant Cloud Service Provider AWS has completed th e testing performed by a FedRAMPsm accredited Third Party Assessment Organization (3PAO) and has been granted two Agency Authority to Operate (ATOs) by the US Department of Health and Human Services (HHS) after demonstrating compliance with FedRAMPsm requi rements at the Moderate impact level All US government agencies can leverage the AWS Agency ATO packages stored in the FedRAMPsm repository to evaluate AWS for their applications and workloads provide authorizations to use AWS and transition workload s into the AWS environment The two FedRAMPsm Agency ATOs encompass all US regions (the AWS GovCloud (US) region and the AWS US East/West regions) For a complete list of the services that are in the accreditation boundary for the regions stated above see the AWS Services in Scope by Compliance Program page ( https://awsamazoncom/compliance/services inscope/ ) For more information on AWS FedRAMPsm compliance please see the AWS FedRA MPsm FAQs at https://awsamazoncom/compliance/fedramp/ FERPA The Family Educational Rights and Privacy Act (FERPA) (20 USC § 1232g; 34 CFR Part 99) is a Federal law that protects the privacy of student education records The law applies to all schools that receive funds under an applicable program of the US Department of Education FERPA gives 
parents certain rights with respect to their children's education records These rights transfer to the student when he or she reaches the age of 18 or attends a school beyond the high school level Students to whom the rights have transferred are "eligible students" AWS enables c overed entities and their business associates subject to FERPA to leverage the secure AWS environment to process maintain and store protected education information AWS also offers a FERPA focused whitepaper for customers interested in learning more about how they can leverage AWS for the processing and storage of educational data ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 4 The FERPA Compliance on AWS whitepaper outlines how companies can use AWS to process systems that facilitate FERPA compliance: FIPS 1402 The Federal Information Processing Standard (FIPS) Publication 1402 is a US government security standard that specifies the security requirements for cryptographic modules protecting sensitive information To support customers with FIPS 140 2 requirements SSL terminations in AWS GovCloud (US) operate using FIPS 140 2 validated hardware AWS works with AWS GovCloud (US) customers to provide the information they need to help manage compliance when using the AWS GovCloud (US) environment FISMA and DIACAP AWS enables US government agencies to achieve and sustain compliance with the Federal Information Security Management Act ( FISMA ) The AWS infrastructure has been evaluated by independent assessors for a variety of government systems as part of their system owners' approval process Numerous Federal Civilian and Department of Defense (DoD) organizations have s uccessfully achieved security authorizations for systems hosted on AWS in accordance with the Risk Management Framework (RMF) process defined in NIST 800 37 and DoD Information Assurance Certification and Accreditation Process ( DIACAP ) GxP GxP is an acronym that refers to the regulations and guidelines applicable to life sciences organizations that make food and medical products such as drugs medical devices and medical software applications The overall intent of GxP requirements is to ensure that food and medical products are safe for consumers and to ensure the integrity of data used to make product related safety decisions AWS offers a GxP whitepaper which details a comprehensive approach for using AWS for GxP systems This whitepaper provides guidance for using AWS Products in the context of GxP and the content has been developed in conjunction with AWS pharmaceutical and medical device customers as well as ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 5 software partners who are curre ntly using AWS Products in their validated GxP systems For more information on the GxP on AWS please contact AWS Sales and Business Development For additional information ple ase see our GxP Comp liance FAQs at https://awsamazoncom/compliance/gxp part 11annex 11/ HIPAA AWS enables covered entities and their business associates subject to the US Health Insurance Portabilit y and Accountability Act (HIPAA) to leverage the secure AWS environment to process maintain and store protected health information and AWS will be signing business associate agreements with such customers AWS also offers a HIPAA focused whitepaper for customers interested in learning more about how they can leverage AWS for the processing and storage of health information The Architecting for HIPAA Secur 
ity and Compliance on Amazon Web Services whitepaper outlines how companies can use AWS to process systems that facilitate HIPAA and Health Information Technology for Economic and Clinical Health (HITECH) compliance Customers who execute an AWS BAA may use any AWS service in an account designated as a HIPAA Account but they may only process store and transmit PHI using the HIPAA eligible services defined in the AWS BAA For a complete list of these services see the HIPAA Eligible Services Reference page (https://awsamazoncom/compliance/hipaa eligible services reference/) AWS maintains a standards based risk management program to ensure that the HIPAA eligible servic es specifically support the administrative technical and physical safeguards required under HIPAA Using these services to store process and transmit PHI allows our customers and AWS to address the HIPAA requirements applicable to the AWS utility based operating model For additional information please see our HIPAA Compliance FAQs and Architecting for HIPAA Security and Compliance on Amazon Web Services ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 6 IRAP The Information Security Registered Assessors Program (IRAP) enables Australian government customers to validate that appropriate controls are in place and determine the appropria te responsibility model for addressing the needs of the Australian Signals Directorate (ASD) Information Security Manual (ISM) Amazon Web Services has completed an independent assessment that has determined all applicable ISM controls are in place relating to the processing storage and transmission of Unclassified (DLM) for the AWS Sydney Region For more information see the IRAP Compli ance FAQs at https://awsamazoncom/compliance/irap/ and AWS alignment with the Australian Signals Directorate (ASD) Cloud Computing Security Considerations ISO 9001 AWS has achieved ISO 9001 certification AWS’ ISO 9001 certification directly supports customers who develop migrate and operate their quality controlled IT systems in the AWS cloud Customers can leverage AWS’ compliance reports as evidence for their own ISO 9001 programs and industry specific quality programs such as GxP in life sciences ISO 13485 in medical devices AS9100 in aerospace and ISO/TS 16949 in automotive AWS customers who don't have quality system requirements will still benefit from the additional assurance and transparency that an ISO 9001 certification provides The ISO 9001 certification covers the quality management system over a specified scope of AWS services and Regions of operations For a complete list of services see the AWS Services in Scope by Compliance Program page (https://awsamazoncom/compliance/services inscope/ ) ISO 9001:2008 is a global standard for managing the quality of products and services The 9001 standard outlines a quality management system based on eight principles defined by the International Organization for Standardization (ISO) Technical Committee for Quality Management and Quality Assurance They include: • Customer focus ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 7 • Leadership • Involvement of people • Process approach • System approach to management • Continual Improvement • Factual approach to decision making • Mutually beneficial supplier relationships The AWS ISO 9001 certification can be downloaded at https://d0awsstaticcom/certifications/iso_9001_certificationpdf AWS provides additional information and 
frequently asked questions abou t its ISO 9001 certification at: https://awsamazoncom/compliance/iso 9001 faqs/ ISO 27001 AWS has achieved ISO 27001 certification of our Information Security Management System (ISMS) cove ring AWS infrastructure data centers and services For a complete list of services see the AWS Services in Scope by Compliance Program page ( https://awsamazoncom/compliance/services in scope/ ) ISO 27001/27002 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer informati on that’s based on periodic risk assessments appropriate to ever changing threat scenarios In order to achieve the certification a company must show it has a systematic and ongoing approach to managing information security risks that affect the confidentiality integrity and availability of company and customer information This certification reinforces Amazon’s commitment to providing significant information regarding our security controls and practices The AWS ISO 27001 certification can be downloaded at https://d0awsstaticcom/certifications/iso_27001_global_certificationpdf ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 8 AWS provides additional information and frequently asked questions about its ISO 27001 certification at: https://awsamazoncom/compliance/iso 27001 faqs/ ISO 27017 ISO 27017 is the newest code of practice released by the International Organization for Standardization (ISO) It provides implementation guidance on information security controls that specifically relate to cloud services AWS has achieved ISO 27017 certification of our Information Security Management System (ISMS) covering AWS infrastructure data centers and services For a complete list of services see the AWS Services in Scope by Compliance Program page ( https://aws amazoncom/compliance/services in scope/ ) The AWS ISO 27017 certification can be downloaded at https://d0awsstaticcom/certifications/iso_27017_certificationpdf AWS pr ovides additional information and frequently asked questions about its ISO 27017 certification at https://awsamazoncom/compliance/iso 27017 faqs/ ISO 27018 ISO 27018 is the first Internat ional code of practice that focuses on protection of personal data in the cloud It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Informatio n (PII) It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set AWS has achieved ISO 27018 certification of our Information Sec urity Management System (ISMS) covering AWS infrastructure data centers and services For a complete list of services see the AWS Services in Scope by Compliance Program page ( https://awsamazoncom/compliance/services in scope/ ) ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 9 The AWS ISO 27018 certification can be downloaded at https://d0awsstaticcom/certifications/iso_27018_certificationpdf AWS provides additional information and frequently asked questions about its ISO 27018 certification at https://awsamazo ncom/compliance/iso 27018 faqs/ ITAR The AWS GovCloud (US) region supports US International Traffic in Arms Regulations ( ITAR ) compliance As a part of managing a comprehensive ITAR compliance program companies 
subject to ITAR export regulations must control unintended exports by restricting access to protected data to US Persons and restricting physical location of th at data to the US AWS GovCloud (US) provides an environment physically located in the US and where access by AWS Personnel is limited to US Persons thereby allowing qualified companies to transmit process and store protected articles and data subject t o ITAR restrictions The AWS GovCloud (US) environment has been audited by an independent third party to validate the proper controls are in place to support customer export compliance programs for this requirement MPAA The Motion Picture Association of America (MPAA) has established a set of best practices for securely storing processing and delivering protected media and content ( http://wwwfightfilmtheftorg/facility security programhtml ) Media companies use these best practices as a way to assess risk and security of their content and infrastructure AWS has demonstrated alignment with the MPAA best practices and the AWS infrastructure is compliant with all applicable MPAA i nfrastructure controls While the MPAA does not offer a “certification” media industry customers can use the AWS MPAA documentation to augment their risk assessment and evaluation of MPAA type content on AWS See the AWS Compliance MPAA hub page for additional details at https://awsamazoncom/compliance/mpaa/ ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 10 MTCS Tier 3 Certification The Multi Tier Cloud Security (MTCS) is an operational Singapore security management Standard (SPRING SS 584:2013) based on ISO 27001/02 Information Security Management System (ISMS) standards The certification assessment requires us to: • Systematically evaluate our information security risks taking into account the impact of company threats and vulnerabili ties • Design and implement a comprehensive suite of information security controls and other forms of risk management to address company and architecture security risks • Adopt an overarching management process to ensure that the information security controls meet the our information security needs on an ongoing basis View the MTCS Hub Page at https://awsamazoncom/compliance/aws multitiered cloud security standard certification/ NIST In June 2015 The National Institute of Standards and Technology (NIST) released guidelines 800 171 "Final Guidelines for Protecting Sensitive Government Information Held by Contractors" This guidance is applicable to the pro tection of Controlled Unclassified Information (CUI) on nonfederal systems AWS is already compliant with these guidelines and customers can effectively comply with NIST 800 171 immediately NIST 800 171 outlines a subset of the NIST 800 53 requirements a guideline under which AWS has already been audited under the FedRAMP program The FedRAMP Moderate security control baseline is more rigorous than the recommended requirements established in Chapter 3 of 800 171 and includes a significant number of security controls above and beyond those required of FISMA Moderate systems that protect CUI data A detailed mapping is available in the NIST Special Publication 800 171 starting on page D2 (which is page 37 in the PDF) ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 11 PCI DSS Level 1 AWS is Level 1 compliant under the Payment Card Industry (PCI) Data Security Standard (DSS) Customers can run applicati ons on our PCI compliant technology 
infrastructure for storing processing and transmitting credit card information in the cloud In February 2013 the PCI Security Standards Council released PCI DSS Cloud Computing Guidelines These guidelines provide customers who are managing a cardholder data environment with considerations for maintaining PCI DSS controls in the cloud AWS has incorporated the PCI DSS Cloud Computing Guidelines into the AWS PCI Compliance Package for customers The AWS PCI Compliance Package includes the AWS PCI Attestation of Compliance (AoC) which shows that AWS has been successfully validated against standards applicable to a Level 1 service provider under PCI DSS Version 31 and the AWS PCI Responsibility Summary which explains how compliance responsibilities are shared between AWS and our customers in the cloud For a complete list of services in scope for PCI DSS Level 1 see the AWS Services in Scope by Comp liance Program page (https://awsamazoncom/compliance/services inscope/ ) For more information see https://awsa mazoncom/compliance/pci dsslevel 1faqs/ SOC 1/ISAE 3402 Amazon Web Services publishes a Service Organization Controls 1 (SOC 1) Type II report The audit for this report is conducted in accordance with American Institute of Certified Public Accountants (AICPA): AT 801 (formerly SSAE 16) and the International Standards for Assurance Engagements No 3402 (ISAE 3402) This dual standard report is intended to meet a broad range of financial auditing requirements for US and international auditing bodies The SOC 1 report audit attests that AWS’ control objectives are appropriately designed and that the individual controls defined to safeguard customer data are operating effectively This report is the replacement of the Statement on Auditing Standards No 70 (SAS 70) Type II Audit report ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 12 The AWS SOC 1 control objectives are provided here The report itself identifies the control activities that support each of these objectives and the independent auditor’s results of their testing procedures of each control Objective Area Objective Description Security Organization Controls provide reasonable assurance that information security policies have been implemented and communicated throughout the organization Employee User Access Controls provide reasonable assurance that procedures have been established so that Amazon employee user accounts are added modified and deleted in a timely manner and reviewed on a periodic basis Logical Security Controls provide reasonable assurance that policies and mechanisms are in place to appropriately restrict unauthorized internal and external access to data and customer data is appropriately segregated from other customers Secure Data Handling Controls provide reasonable assurance that data handling between the customer’s point of initiation to an AWS storage location is secured and mapped accurately Physical Security and Environmental Protection Controls provide reasonable assurance that physical access to data centers is restricted to authorized personnel and that mechanisms are in place to minimize the effect of a malfunction or physical disaster to data center facilities Change Management Controls provide reasonable assurance that changes (including emergency / non routine and configuration) to existing IT resources are logged authorized tested approved and documented Data Integrity Availability and Redundancy Controls provide reasonable assurance that data integrity is 
maintained through all phases including transmission storage and processing ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 13 Objective Area Objective Description Incident Handling Controls provide reasonable assurance that system incidents are recorded analyzed and resolved The SOC 1 reports are designed to focus on controls at a service organization that are likely to be relevant to an audit of a user entity’s financi al statements As AWS’ customer base is broad and the use of AWS services is equally as broad the applicability of controls to customer financial statements varies by customer Therefore the AWS SOC 1 report is designed to cover specific key controls li kely to be required during a financial audit as well as covering a broad range of IT general controls to accommodate a wide range of usage and audit scenarios This allows customers to leverage the AWS infrastructure to store and process critical data including that which is integral to the financial reporting process AWS periodically reassesses the selection of these controls to consider customer feedback and usage of this important audit report AWS’ commitment to the SOC 1 report is ongoing and AWS w ill continue the process of periodic audits For the current scope of the SOC 1 report see the AWS Services in Scope by Compliance Program page (https://awsamazoncom/compliance/services inscope/ ) SOC 2 In addition to the SOC 1 report AWS publishes a Service Organization Controls 2 (SOC 2) Type II report Similar to the SOC 1 i n the evaluation of controls the SOC 2 report is an attestation report that expands the evaluation of controls to the criteria set forth by the American Institute of Certified Public Accountants (AICPA) Trust Services Principles These principles define l eading practice controls relevant to security availability processing integrity confidentiality and privacy applicable to service organizations such as AWS The AWS SOC 2 is an evaluation of the design and operating effectiveness of controls that meet the criteria for the security and availability principles set forth in the AICPA’s Trust Services Principles criteria This report provides additional transparency into AWS security and availability based on a pre defined industry standard of leading pract ices and further demonstrates AWS’ commitment to protecting ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 14 customer data The SOC 2 report scope covers the same services covered in the SOC 1 report See the SOC 1 description above for the in scope services SOC 3 AWS publishes a Service Organization Co ntrols 3 (SOC 3) report The SOC 3 report is a publically available summary of the AWS SOC 2 report The report includes the external auditor’s opinion of the operation of controls (based on the AICPA’s Security Trust Principles included in the SOC 2 report) the assertion from AWS management regarding the effectiveness of controls and an overview of AWS Infrastructure and Services The AWS SOC 3 report includes all AWS data center s worldwide that support in scope services This is a great resource for customers to validate that AWS has obtained external auditor assurance without going through the process to request a SOC 2 report The SOC 3 report scope covers the same services covered in the SOC 1 report See the SOC 1 description above for the in scope services View the AWS SOC 3 report here ArchivedAmazon Web Services –Certifications Programs Reports and Third Party 
Further Reading: For additional information, see the following sources: • AWS Risk and Compliance Overview • AWS Answers to Key Compliance Questions • CSA Consensus Assessments Initiative Questionnaire. Document Revisions: March 2017 – Updated in-scope services; January 2017 – Migrated to new template; January 2016 – First publication.
General
Web_Application_Hosting_in_the_AWS_Cloud_Best_Practices
Web Application Hosting in the AWS Cloud First Published May 2010 Updated August 20 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents An overview of traditional web hosting 1 Web application hosting in the cloud using AWS 2 How AWS can solve common web application hosting issues 2 An AWS Cloud architecture for web hosting 4 Key components of an AWS web hosting architecture 6 Key considerations when using AWS for web hosting 16 Conclusion 18 Contributors 19 Further reading 19 Document versions 19 Abstract Traditional on premises web architectures require complex solutions and accurate reserved capacity forecast in order to ensure reliability Dense peak traffic periods and wild swings in traffic patterns result in low utilization rates of expensive hardware This yields high operating costs to maintain idle hardware and an inefficient use of capital for underused hardware Amazon Web Services (AWS) provides a reliable scalable secure and highly performing infrastructure for the most demanding web applic ations This infrastructure matches IT costs with customer traffic patterns in near real time This whitepaper is meant for IT Managers and System Architects who want to understand how to run traditional web architectures in the clou d to achieve elasticity scalability and reliabilityAmazon Web Services Web Appli cation Hosting in the AWS Cloud Page 1 An overview of traditional web hosting Scalable web hosting is a well known problem space The following image depicts a traditional web hosting architecture that implements a common three tier web application model In this model the architecture is separated into presentation application and persistence layers Scalability is provided by adding hosts at these layers The architecture also has built in performance failover and availability feature s The traditional web hosting architecture is easily ported to the AWS Cloud with only a few modifications A traditional web hosting architecture Amazon Web Services Web Application Hosting in the AWS Cloud Page 2 The following sections look at why and how such an architecture should be and could be deployed in the AW S Cloud Web application hosting in the cloud using AWS The first question you should ask concerns the value of moving a classic web application hosting solution into the AWS Cloud If you decide that the cloud is right for you you’ll need a suitable architecture This section helps you evaluate an AWS Cloud solution It compares deploying your web application in the cloud to an on premises deployment presents an AWS Cloud architecture for hosting your application and discusses the key components of the AWS Cloud Architecture solution How AWS can solve commo n web application hosting issues If you’re responsible for running a web application you could face a variety of 
infrastructure and architectural issues for which AWS can provide seamless and cost effective solutions The f ollowing are some of the benefit s of using AWS over a traditional hosting model A costeffective alternative to oversized fleets needed to handle peaks In the traditional hosting model you have to provision servers to handle peak capacity Unused cycles are wasted outside of peak periods Web applications hosted by AWS can leverage on demand provisioning of additional servers so you can constantly adjust capacity and costs to actual traffic patterns For example the following graph shows a web application with a usage peak from 9AM to 3PM and less usage for the remainder of the day An automatic scaling approach based on actual traffic trends which pr ovisions resources only when needed would result in less wasted capacity and a greater than 50 percent reduction in cost Amazon Web Services Web Application Hosting in the AWS Cloud Page 3 An example of wasted capacity in a classic hosting model A scalable solution to handling unexpected traffic peaks A more dire consequence of the slow provisioning associated with a traditional hosting model is the inability to respond in time to unexpected traffic spikes There are a number of stories about web applications becoming unavailable because of an unexpecte d spike in traffic after the site is mentioned in popular media In the AWS Cloud the same on demand capability that helps web applications scale to match regular traffic spikes can also handle an unexpected load New hosts can be launched and are readily available in a matter of minutes and they can be taken offline just as quickly when traffic returns to normal An ondemand solution for test load beta and preproduction environments The hardware costs of building and maintaining a traditional hosting environment for a production web application don’t stop with the production fleet Often you need to create preproduction beta and testing fleets to ensure the quality of the web application at each stage of the development lifecycle While you can mak e various optimizations to ensure the highest possible use of this testing hardware these parallel fleets are not always used optimally and a lot of expensive hardware sits unused for long periods of time Amazon Web Services Web Application Hosting i n the AWS Cloud Page 4 In the AWS Cloud you can provision testing fle ets as and when you need them This not only eliminates the need for pre provisioning resources days or months prior to the actual usage but gives you the flexibility to tear down the infrastructure components when you do not need them Additionally you can simulate user traffic on the AWS Cloud during load testing You can also use these parallel fleets as a staging environment for a new production release This enables quick switchover from current production to a new application version with little or no service outages An AWS Cloud architecture for web hosting The following figure provides another look at that classic web application architecture and how it can leverage the AWS Cloud computing infrastructure Amazon Web Services Web Application Hosting in the AWS Cloud Page 5 An example of a web hosting architecture on AWS Amazon Web Services Web Application Hosting in the AWS Cloud Page 6 1 DNS services with Amazon Route 53 – Provides DNS services to simplify domain management 2 Edge caching with Amazon CloudFront – Edge caches high volume content to decrease the latency to customers 3 Edge security for Amazon CloudFront with AWS WAF – 
Filters malicious traffic including cross site scripting ( XSS) and SQL injection via customer defined rules 4 Load balancing with Elastic Load Balancing (ELB) – Enables you to spread load across multiple Availabilit y Zones and AWS Auto Scaling groups for redundancy and decoupling of services 5 DDoS protection with AWS Shield – Safeguards your infrastructure against the most common network and transport layer DDoS attacks automatically 6 Firewalls with security groups – Moves security to the instance to provide a stateful host level firewall for both web and application servers 7 Caching with Amazon ElastiCache – Provides caching services with Redis or Memcached to remove load from the app and database and lower latency for frequent requests 8 Managed database with Amazon Relational Database Service (Amazon RDS) – Crea tes a highly available multiAZ database architecture with six possible DB engines 9 Static storage and backups with Amazon Simple Storage Service (Amazon S3) – Enables simple HTTP based object storage for backup s and static assets like images and video Key components of an AWS web hosting architecture The following sections outline some of the key components of a web hosting architecture deployed in the AWS Cloud and explain how they differ from a traditional web hosting architecture Amazon Web Services Web Application Hosting in the AWS Cloud Page 7 Network management In the AWS Cloud the ability to segment your network from that of other customers enables a more secure and scalable architecture While security groups provide host level security (see the Host security section) Amazon Virtual Private Cloud (Amazon VPC) enables you to launch resources in a logically isolated and virtual network that you define Amazon VPC is a service tha t gives you full control over the details of your networking setup in AWS Examples of this control include creating internet subnets for web servers and private subnets with no internet access for your databases Amazon VPC enables you to create hybrid a rchitectures by using hardware virtual private networks (VPNs) and use the AWS Cloud as an extension of your own data center Amazon VPC also includes IPv6 support in addition to traditional IPv4 support for your network Content delivery When your web traffic is geo dispersed it’s not always feasible and certainly not cost effective to replicate your entire infrastructure across the globe A Content Delivery Network (CDN) provides you the ability to utilize its global network of edge locations to deliver a cached copy of web content such as videos webpages images and so on to your customers To reduce response time the CDN utilizes the nearest edge location to the customer or originating request location to reduce the response time Throughput is dramatically increased given that the web assets are delivered from cache For dynamic data many CDNs can be configured to retrieve data from the origin servers You can use CloudFront to deliver your website including dynamic static and streaming content using a global network of edge locations CloudFront automatically routes requests for your conte nt to the nearest edge location so content is delivered with the best possible performance CloudFront is optimized to work with other AWS services like Amazon S3 and Amazon Elastic Compute Cloud (Amazon EC2) CloudFront also works seamlessly with any origin server that is not an AWS origin server which stores the original definitive versions of your files Like other AWS services there are no contracts or 
monthly co mmitments for using CloudFront – you pay only for as much or as little content as you actually deliver through the service Amazon Web Services Web Application Hosting in the AWS Cloud Page 8 Additionally any existing solutions for edge caching in your web application infrastructure should work well in the AWS Cloud Mana ging public DNS Moving a web application to the AWS Cloud requires some Domain Name System (DNS) changes To help you manage DNS routing AWS provides Amazon Route 53 a highly available and scalable cloud DNS web service Route 53 is designed to give developers and businesses an extremely reliable and cost effective way to route end users to internet applications by translating names such as “wwwexamplecom ” into numeric IP addresses such as 192021 that computers use to connect to each other Route 53 is fully compliant with IPv6 as well Host security In addition to inbound network traffi c filtering at the edge AWS also recommends web applications apply network traffic filtering at the host level Amazon EC2 provides a feature named security groups A security group is analogous to an inbound ne twork firewall for which you can specify the protocols ports and source IP ranges that are allowed to reach your EC2 instances You can assign one or more security groups to each EC2 instance Each security group allows appropriate traffic in to each i nstance Security groups can be configured so that only specific subnets IP addresses and resources have access to an EC2 instance Alternatively they can reference other security groups to limit access to EC2 instances that are in specific groups In the AWS web hosting architecture in Figure 3 the security group for the web server cluster might allow access only from the web layer Load Balancer and only over TCP on ports 80 and 443 (HTTP and HTTPS) The application server security group on the other hand might allow access only from the application layer Load Balancer In this model your support engineers would also need to access the EC2 instances what can be achieved with AWS Systems Manager Session Manager For a deeper discussion on security the AWS Cloud Security which contains security bulletins certification information and security whitepapers that explain the security capabilities of AWS Amazon Web Services Web Application Hosting in the AWS Cloud Page 9 Load balancing across clusters Hardware load balancers are a common network appliance used in traditional web application architectures AWS provides this capability through the Elastic Load Balancing (ELB) service ELB automa tically distributes incoming application traffic across multiple targets such as Amazon EC2 instances containers IP addresses AWS Lambda functions and virtual appliances It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones Elastic Load Balancing offers four types of load balancers that all feature the high availability automatic scaling and robust security necessary to make your applications fault tolerant Finding other hosts and services In the traditional web hosting architecture most of your hosts have static IP addresses In the AWS Cloud most of your hosts have dynamic IP addresses Although every EC2 instance can have bot h public and private DNS entries and will be addressable over the internet the DNS entries and the IP addresses are assigned dynamically when you launch the instance They cannot be manually assigned Static IP addresses (Elastic IP addresses in AWS termi nology) 
can be assigned to running instances after they are launched You should use Elastic IP addresses for instances and services that require consistent endpoints such as primary databases central file servers and EC2 hosted load balancers Caching within the web application Inmemory application caches can reduce load on services and improve performance and scalability on the database tier by caching frequently used information Amazon ElastiCache is a web service that makes it easy to deploy operate and scale an in memory cache in the cloud You can configure the in memory cache you create to automatically scale with load and to automatically replace failed nodes ElastiCache is protocol complian t with Memcached and Redis which simplifies cloud migrations for customers running these services on premises Database configuration backup and failover Many web applications contain some form of persistence usually in the form of a relational or non relational database AWS offers both relational and non relational Amazon Web Services Web Application Hosting in the AWS Cloud Page 10 database services Alternatively you can deploy your own database software on an EC2 instance The following table summarizes these options which are discuss ed in greater detail in this section Table 1 — Relational and non relational database solutions Relational database solutions Nonrelational database solutions Managed database service Amazon RDS for MySQL Oracle SQL Server MariaDB PostgreSQL Amazon Aurora Amazon DynamoDB Amazon Keyspaces Amazon Neptune Amazon QLDB Amazon Timestream Selfmanaged Hosting a relational database management system ( DBMS ) on an Amazon EC2 instance Hosting a non relational database solution on an EC2 instance Amazon RDS Amazon RDS gives you access to the capabilities of a familiar MySQL PostgreSQL Oracle and Microsoft SQL Server database engine The code applications and tools that yo u already use can be used with Amazon RDS Amazon RDS automatically patches the database software and backs up your database and it stores backups for a userdefined retention period It also supports point intime recovery You can benefit from the flexi bility of being able to scale the compute resources or storage capacity associated with your relational database instance by making a single API call Amazon RDS Multi AZ deployments increase your database availability and protect your database against unp lanned outages Amazon RDS Read Replicas provide read only replicas of your database so you can scale out beyond the capacity of a single database deployment for read heavy database workloads As with all AWS services no upfront investments are required and you pay only for the resources you use Amazon Web Services Web Application Hosting in the AWS Cloud Page 11 Hosting a relational database management system (RDBMS) on an Amazon EC2 instance In addition to the managed Amazon RDS offering you can install your choice of RDBMS (such as MySQL Oracle SQL Server or DB2) on an EC2 instance and manage it yourself AWS customers hosting a database on Amazon EC2 successfully use a variety of primary/standby and replication models including mirroring for read only copies and log shipping for always ready passive standbys When managing your own database software directly on Amazon EC2 you should also consider the availability of fault tolerant and persistent storage For this purpose we recommend that databases running on Amazon EC2 use Amazon Elastic Block Store (Amazon EBS) volumes which are similar to network attached storage For 
EC2 instances running a database you should place all database data and logs on EBS volumes These will remain available even if the database h ost fails This configuration allows for a simple failover scenario in which a new EC2 instance can be launched if a host fails and the existing EBS volumes can be attached to the new instance The database can then pick up where it left off EBS volumes automatically provide redundancy within the Availability Zone If the performance of a single EBS volume is not sufficient for your databases needs volumes can be striped to increase input/output operations per second ( IOPS ) performance for your database For demanding workloads you can also use EBS Provisioned IOPS where you specify the IOPS required If you use Amazon RDS the service manages its own storage so you can focus on managing your data Nonrelational databases In addition to support for r elational databases AWS also offers a number of managed nonrelational databases : • Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scala bility Using the AWS Management Console or the DynamoDB API you can scale capacity up or down without dow ntime or performance degradation Because DynamoDB handles the administrative burdens of operating and scaling distributed databases to AWS Amazon Web Services Web Application Hosting in the AWS Cloud Page 12 you don’t have to worry about hardware provisioning setup and configuration replication software patching or cl uster scaling • Amazon DocumentDB (with MongoDB compatibility) is a database service that is purpose built for JSON data management at scale fully managed and runs on AWS and enterprise ready with high durability • Amazon Keyspaces (for Apache Cassandra ) is a scalable highly available and managed Apache Cassandra compatible database service With Amazon Keyspaces you can run your Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today • Amazon Neptune is a fast reliable fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets The core of Amazon Neptune is a purpose built high performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency • Amazon Quantum Le dger Database (QLDB) is a fully managed ledger database that provides a transparent immutable and cryptographically verifiable transaction log owned by a central trusted authority Amazon QLDB can be used to track each and every application data change and maintains a complete and verifiable history of changes over time • Amazon Timestream is a fast scalable and serverless time series database service for IoT and operational applications that makes it ea sy to store and analyze trillions of events per day up to 1000 times faster and at as little as 1/10th the cost of relational databases Additionally you can use Amazon EC2 to host other non relational database technologies you may be working with Storage and backup of data and assets There are numerous options within the AWS Cloud for storing accessing and backing up your web application data and assets Amazon S3 provides a highly available and redundant object store S3 is a great storage solution for static objects such as images videos and other static media S3 also supports edge caching and streaming of these assets by interacting with CloudFront Amazon Web Services Web Application Hosting in 
the AWS Cloud Page 13 For atta ched file system like storage EC2 instances can have EBS volumes attached These act like mountable disks for running EC2 instances Amazon EBS is great for data that needs to be accessed as block storage and that requires persistence beyond the life of t he running instance such as database partitions and application logs In addition to having a lifetime that is independent of the EC2 instance you can take snapshots of EBS volumes and store them in S3 Because EBS snapshots only back up changes since th e previous snapshot more frequent snapshots can reduce snapshot times You can also use an EBS snapshot as a baseline for replicating data across multiple EBS volumes and attaching those volumes to other running instances EBS volumes can be as large as 1 6TB and multiple EBS volumes can be striped for even larger volumes or for increased input/output ( I/O) performance To maximize the performance of your I/O intensive applications you can use Provisioned IOPS volumes Provisioned IOPS volumes are designe d to meet the needs of I/O intensive workloads particularly database workloads that are sensitive to storage performance and consistency in random access I/O throughput You specify an IOPS rate when you create the volume and Amazon EBS provisions that rate for the lifetime of the volume Amazon EBS currently supports IOPS per volume ranging from maximum of 16000 (for all instance types) up to 64000 ( for instances built on Nitro System ) You can stripe multiple volumes together to deliver thousands of IOPS per instance to your application Apart from this for higher throughput and mission critical workloads requiring sub millisecond latency y ou can use io2 block express volume type which can support up to 256000 IOPS with a maximum storage capacity of 64TB Automatically scaling the fleet One of the key differences between the AWS Cloud architecture and the traditional hosting model is that A WS can automatically scale the web application fleet on demand to handle changes in traffic In the traditional hosting model traffic forecasting models are generally used to provision hosts ahead of projected traffic In AWS instances can be provisioned on the fly according to a set of triggers for scaling the fleet out and back in The Auto Scaling service can create capacity groups of servers that can grow or shrink on demand Auto Scaling also works directly with Amazon CloudWatch for metrics data Amazon Web Services Web Application Hosting in the AWS Cloud Page 14 and with Elastic Load Balancing to add and remove hosts for load distribution For example if the web servers are reporting greater than 80 percent CPU utilization over a period of time an additional web server could be quickly deployed and then automatically added to the load balancer for immediate inclusion in the load balancing rotation As shown in the AWS web hosting architecture model you can create multiple Auto Scaling groups for different layers of the architecture so that each layer can scale independently For example the web server Auto Scaling group might trigger scaling in and out in response to changes in network I/O whereas the application server Auto Scaling group might scale out and in according to CPU utilization You can set minimums and maximums to help ensure 24/7 availability and to cap the usage within a group Auto Scaling triggers can be set both to grow and to shrink the total fleet at a given layer to match resource utilizatio n to actual demand In addition to the Auto Scaling service you can 
scale Amazon EC2 fleets directly through the Amazon EC2 API which allows for launching terminating and inspecting instances Additional security features The number and sophistication of Distributed Denial of Service (DDoS) attacks are rising Traditionally these attacks are difficult to fend off They often end up being costly in both mitigation time and power spent as well as the opportunity cost from lost visits to your website dur ing the attack There are a number of AWS factors and services that can help you defend against such attacks One of them is the scale of the AWS network The AWS infrastructure is quite large and enables you to leverage our scale to optimize your defense Several services including Elastic Load Balancing Amazon CloudFront and Amazon Route 53 are effective at scaling your web application in response to a large increase in traffic The infrastructure protection services in particular help with your defense strategy : • AWS Shield is a managed DDoS protection service that helps safeguard against various forms of DDoS attack vectors The standard offering of AWS Shield is free and automatically active throughout your account Th is standard offering helps to defend against the most common network and transportation layer attacks In addition to this level the advanced offering grants higher levels of Amazon Web Services Web Application Hosting in the AWS Cloud Page 15 protection against your web application by providing you with near real time visibility into an ongoing attack as well as integrating at higher levels with the services mentioned earlier Additionally you get access to the AWS DDoS Response Team (DRT) to help mitigate large scale and sophisticated attacks against your resources • AWS WAF (Web Application Firewall) is designed to protect your web applications from attacks that can compromise availability or security or otherwise consume excessive resources AWS WAF works in line with CloudFr ont or Application Load Balancer along with your custom rules to defend against attacks such as cross site scripting SQL injection and DDoS As with most AWS services AWS WAF comes with a fully featured API that can help automate the creation and edit ing of rules for your AWS WAF instance as your security needs change • AWS Firewall Manager is a security management service which allows you to centrally configure and manage firewall rules across y our accounts and applications in AWS Organizations As new applications are created Firewall Manager makes it easy to bring new applications and resources into compliance by enforcing a common set of s ecurity rules Failover with AWS Another key advantage of AWS over traditional web hosting is the Availability Zones that give you easy access to redundant deployment locations Availability Zones are physically distinct locations that are engineered to be insulated from failures in other Availability Zones They provide inexpensive low latency network connectivity to other Availability Zones in the same AWS Region As the AWS web hosting architecture diagram shows AWS recommend s that you depl oy EC2 hosts across multiple Availability Zones to make your web application more fault tolerant It’s important to ensure that there are provisions for migrating single points of access across Availability Zones in the case of failure For example you s hould set up a database standby in a second Availability Zone so that the persistence of data remains consistent and highly available even during an unlikely failure scenario You can do this on Amazon 
EC2 or Amazon RDS with the click of a button Amazon Web Services Web Application Hosting in the AWS Cloud Page 16 While s ome architectural changes are often required when moving an existing web application to the AWS Cloud there are significant improvements to scalability reliability and cost effectiveness that make using the AWS Cloud well worth the effort The next sect ion discuss es those improvements Key considerations when using AWS for web hosting There are some key differences between the AWS Cloud and a traditional web application hosting model The previous section highlighted many of the key areas that you should consider when deploying a web application to the cloud This section points out some of the key architectural shifts that you need to consider when you bring any application into the cloud No more physical network appliances You cannot deploy physi cal network appliances in AWS For example firewalls routers and load balancers for your AWS applications can no longer reside on physical devices but must be replaced with software solutions There is a wide variety of enterprise quality software solu tions whether for load balancing or establishing a VPN connection This is not a limitation of what can be run on the AWS Cloud but it is an architectural change to your application if you use these devices today Firewalls everywhere Where you once had a simple demilitarized zone (DMZ ) and then open communications among your hosts in a traditional hosting model AWS enforces a more secure model in which every host is locked down One of the s teps in planning an AWS deployment is the analysis of traffic between hosts This analysis will guide decisions on exactly what ports need to be opened You can create security groups for each type of host in your architecture You can also create a large variety of simple and tiered security models to enable the minimum access among hosts within your architecture The use of network access control lists within Amazon VPC can help lock down your network at the subnet level Amazon Web Services Web Appli cation Hosting in the AWS Cloud Page 17 Consider the availability of multiple data centers Think of Availability Zones within an AWS Region as multiple data centers EC2 instances in different Availability Zones are both logically and physically separated and they provide an easy touse model for deploying your application across data centers for both high availability and reliability Amazon VPC as a Regional service enables you to leverage Availability Zones while keepi ng all of your resources in the same logical network Treat hosts as ephemeral and dynamic Probably the most important shift in how you might architect your AWS application is that Amazon EC2 hosts should be considered ephemeral and dynamic Any applicatio n built for the AWS Cloud should not assume that a host will always be available and should be designed with the knowledge that any data in the EC2 instant stores will be lost if an EC2 instance fails When a new host is brought up you shouldn’t make ass umptions about the IP address or location within an Availability Zone of the host Your configuration model must be flexible and your approach to bootstrapping a host must take the dynamic nature of the cloud into account These techniques are critical fo r building and running a highly scalable and fault tolerant application Consider containers and serverless This whitepaper primarily focuses on a more traditional web architecture However consider modernizing your web applications by 
moving to Containers and Serverless technologies leveraging services like AWS Fargate and AWS Lambda to enable you to abstracts away the use of virtual machines to perform compute tasks With serverless computing infrastructure management tasks like capacity provisioning and patching are handled by AWS so you can build mor e agile applications that allow you to innovate and respond to change faster Amazon Web Services Web Application Hosting in the AWS Cloud Page 18 Consider automated deployment • Amazon Lightsail is an easy touse virtual private server (VPS) that offers you everything needed to build an application or website plus a cost effective monthly plan Light sail is ideal for simpler workloads quick deployments and getting started on AWS It’s designed to help you start small and then scale as you grow • AWS Elastic Beanstalk is an easy touse service for deploying and scaling web applications and services developed with Java NET PHP Nodejs Python Ruby Go and Docker on familiar servers such as Apache NGINX Passenge r and IIS You can simply upload your code and Elastic Beanstalk automatically handles the deployment capacity provisioning load balancing auto matic scaling and application health monitoring At the same time you retain full control over the AWS res ources powering your application and can access the underlying resources at any time • AWS App Runner is a fully managed service that makes it easy for developers to quickly deploy containerized web applicat ions and APIs at scale and with no prior infrastructure experience required Start with your source code or a container image App Runner automatically builds and deploys the web application and load balances traffic with encryption App Runner also scale s up or down automatically to meet your traffic needs • AWS Amplify is a set of tools and services that can be used together or on their own to help front end web and mobile developers build scalable full sta ck applications powered by AWS With Amplify you can configure app backends and connect your app in minutes deploy static web apps in a few clicks and easily manage app content outside the AWS Management C onsole Conclusion There are numerous architectural and conceptual considerations when you are contemplating migrating your web application to the AWS Cloud The benefits of having a cost effective highly scalable and fault tolerant infrastructure that grows with your business far outstrips the efforts of migrating to the AWS Cloud Amazon Web Services Web Application Hosting in the AWS Cloud Page 19 Contributors The following individuals and organizations contributed to this document: • Amir Khairalomoum Senior Solutions Architect AWS • Dinesh Subramani Senior Solutions Architect AWS • Jack Hemion Senior Solut ions Architect AWS • Jatin Joshi Cloud Support Engineer AWS • Jorge Fonseca Senior Solutions Architect AWS • Shinduri K S Solutions Architect AWS Further reading • Deploy Django based application onto Amazon LightSail • Deploying a hig h availability Drupal website to Elastic Beanstalk • Deploying a high availability PHP application to Elastic Beanstalk • Deploying a Nodejs application with DynamoDB to Elastic Beanstalk • Getting Started with Linux Web Applications in the AWS Clou d • Host a Static Website • Hosting a static website using Amazon S3 • Tutorial: Deploying an ASPNET core application with Elastic Beanstalk • Tutorial: How to deploy a NET sample application using Elastic Beanstalk Document version s Date Description August 20 2021 Multiple sections and 
diagrams updated with new services, features, and updated service limits. September 2019 – Updated icon label for "Caching with ElastiCache". July 2017 – Multiple sections added and updated for new services; updated diagrams for additional clarity and services; added Amazon VPC as the standard networking method in AWS in "Network Management"; added a section on DDoS protection and mitigation in "Additional Security Features"; added a small section on serverless architectures for web hosting. September 2012 – Multiple sections updated to improve clarity; updated diagrams to use AWS icons; added the "Managing Public DNS" section for detail on Amazon Route 53; "Finding Other Hosts and Services" section updated for clarity; "Database Configuration, Backup, and Failover" section updated for clarity and DynamoDB; "Storage and Backup of Data and Assets" section expanded to cover EBS Provisioned IOPS volumes. May 2010 – First publication.
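The Host security section above describes locking the web-server security group down so that only the web-tier load balancer can reach it over HTTP and HTTPS. The following is a minimal sketch of that rule set using boto3; the Region, VPC ID, load-balancer security group ID, and group name are placeholder assumptions, not values from the whitepaper.

```python
import boto3

REGION = "us-east-1"                    # assumption: example Region
VPC_ID = "vpc-0123456789abcdef0"        # assumption: placeholder VPC ID
WEB_LB_SG_ID = "sg-0aaa1111bbb2222cc"   # assumption: security group of the web-tier load balancer

ec2 = boto3.client("ec2", region_name=REGION)

# Create a security group for the web-server cluster inside the VPC.
web_sg_id = ec2.create_security_group(
    GroupName="web-tier",
    Description="Web servers: HTTP/HTTPS from the web load balancer only",
    VpcId=VPC_ID,
)["GroupId"]

# Allow TCP 80 and 443 in, but only from the load balancer's security group
# rather than from arbitrary IP ranges.
ec2.authorize_security_group_ingress(
    GroupId=web_sg_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": WEB_LB_SG_ID}],
        }
        for port in (80, 443)
    ],
)
```

Referencing the load balancer's security group instead of an IP range keeps the rule valid as load balancer nodes are added or replaced, which is the pattern the whitepaper recommends for tiered access between layers.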
General
Core_Tenets_of_IoT
ArchivedCore Tenets of IoT July 2017 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its c ustomers ArchivedContents Overview 1 Core Tenets of IoT 2 Agility 2 Scalability and Global Footprint 2 Cost 3 Security 3 AWS Services for IoT Solutions 4 AWS IoT 4 Event Driven Services 6 Automation and DevOps 7 Administration and Security 8 Bringing Services and Solutions Together 9 Pragma Architecture 10 Summary 11 Contributors 12 Further Reading 12 ArchivedAbstract This paper outlines core tenets that should be consider ed when developing a strategy for the Internet of Things (IoT) The paper help s customers understand the benefits of Amazon Web Services (AWS) and how the AWS cloud platform can be the critical component supporting the core tenets of an IoT solution The paper also provides an overview of AWS services that should be part of an overall IoT strat egy This paper is intended for decision makers who are learning about Internet of Things platforms ArchivedAmazon Web Services – Core Tenets of IoT Page 1 Overview One of the value propositions of an Internet of Things (IoT) strategy is the ability to provide insight into context that was previously invisibl e to the business But before a business can develop a strategy for IoT it need s a platform that meets the foundational principles of an IoT solution AWS believes in some basic freedoms that are driving organizational and economic benefits of the cloud into businesses These freedoms are why more than a million customers already use the AWS platform to support virtually any cloud workload These freedoms are also why the AWS pla tform is proving itself as the primary catalyst to any Internet of Things strategy across commercial consumer and industrial solutions AWS customers working across such a spectrum of solutions have identified core tenets vital to the success of any IoT platform T hese core tenets are agility scale cost and security ; which have been shown as essential to the long term success of any IoT strategy This whitepaper defines the se tenets as:  Agility – The freedom to quickly analyze execute and build business and technical initiatives in an unfettered fashion  Scale – Seamlessly expand infrastructure regionally or globally to meet operational demands  Cost – Understand and control the costs of operating an IoT platform  Security – Secure communication from device through cloud while maintaining compliance and iterating rapidly By using the AWS platform companies are able to build agile solution s that can scale to meet exponential device growth with an ability to manage 
cost while building on top of s ome of the most secure computing infrastructure in the world A company that selects a platform that has these freedoms and promotes these core tenets will improve organizational focus on the differentiators of its business and the strategic value of imple menting solutions within the Internet of Things ArchivedAmazon Web Services – Core Tenets of IoT Page 2 Core Tenets of IoT Agility A leading benefit companies seek when creating an IoT solution is the ability to efficiently quantify opportunities These opportunities are derived from reliable sensor data remote diagnostics and remote command and control between users and devices Companies that can effectively collect these metrics open the door to explore different business hypotheses based on their IoT data For example manufacturers can build predic tive analytics solutions to measure test and tune the ideal maintenance cycle for their products over time The IoT lifecycle is comprised of multiple stages that are required to procure manufacture onboard test deploy and manage large fleets of phy sical devices When developing physical devices the waterfall like process introduces challenges and friction that can slow down business agility This friction coupled with the upfront hardware costs of developing and deploying physical assets at scale often result in the requirement to keep devices in the field for long periods of time to achieve the necessary return on investment (ROI) With the ever growing challenges and opportunities that face companies today a company’s IT division is a competiti ve differentiator that supports business performance product development and operations In order for a company’s IoT strategy to be a competitive advantage the IT organization relies on having a broad set of tools that promote interoperability througho ut the IoT solution and among a heterogeneous mix of devices Companies that can achieve a successful balance between the waterfall processes of hardware releases and the agile metho dologies of software development can continuously optimize the value that’s derived from their IoT strategy Scalability and Global Footprint Along with an exponential growth of connected devices each thing in the Internet of Things communicates packets of data that require reliable connectivity and durable storage Prior to cloud platforms IT departments would procure additional hardware and maintain underutilized overprovisioned capacity in order to handle the increasing growth of data emitted by devices also known as telemetry With IoT an organization is challenged with managing monitoring and securing the immense number of network connections from these dispersed connected devices ArchivedAmazon Web Services – Core Tenets of IoT Page 3 In addition to scaling and growing a solution in one regional location IoT solutions require the ability to scale globally and across different physical locations IoT solutions should be deployed in multiple physical locations to meet the business objectives of a global enterprise solution such as data compliance data sovereignty and lower communication latency for better respo nsiveness from devices in the field Cost Often the greatest value of an IoT solution is in the telemetric and context ual data that is generated and sent from devices Building onpremise infrastructure requires upfront capital purchase of hardware ; it can be a large fixed expense that does not directly correlate to the value of the telemetry that a device will produce sometime in the 
future To balance the need to receive telemetry today with an uncertain value derived from telemetr ic data in the future an IoT strategy should leverage an elastic and scalable cloud platform With the AWS platform a company pays only for the services it consumes without requiring a long term contract By leveraging a flexible consumption based pricing model the cost of a n IoT solution and the related infrastructure can be directly accessed alongside the business value delivered by ingesting processing storing and analyzing the telemetr y received by that same IoT solution Security The foundation of an IoT solution st arts and ends with security Since d evices may send large amounts of sensitive data and end users of IoT application s may also have the ability to directly control a device the security of things must be a pervasive design requirement IoT solutions shoul d not just be designed with security in mind but with security controls permeating every layer of the solution Security is not a static formula ; IoT applications must be able to continuously model monitor and iterate on security best practices In the Internet of Things the attack surface is different than traditional web infrastructure The pervasiveness of ubiquitous computing means that IoT vulnerabilities could lead to exploits that result in the loss of life for example from a compromised control system for gasoline pipelines or power grids A competing dynamic for IoT security is the lifecycle of a physical device and the constrained hardware for sensors microcontrollers actuators and embedded libraries These constrained factors may limit the security capabilities each ArchivedAmazon Web Services – Core Tenets of IoT Page 4 device can perform With these additional dynamics IoT solutions must continuously adapt their architecture firmware and software to stay ahead of the changing security landscape Although the constrained factors of devices can present increased risks hurdles and potential tradeoffs between security and cost building a secure IoT solution must be the primary objective for any organization AWS Services for IoT Solutions The AWS platform provides a foundation for executing an agile scalable secure and cost effective IoT strategy In order to achieve the business value that IoT can bring to an organization customers should evaluate the breadth and depth of AWS services that are common ly used in large scale distr ibuted IoT deployments AWS provides a range of services to accelerate time to market: from device SDKs for embedded software to real time data processing and event driven compute services In these sections we will cover the most common AWS services used in IoT applications and how these services correspond to the core tenets of an IoT solution AWS IoT The Internet of Things cannot exist without things Every IoT solution must first establish connectivity in order to begin interacting with devices AWS IoT is an AWS managed service that addresses the challenges of connecting managing and operating large fleets of devices for an application The combination of scalability of connectivity and security mechanisms for data transmission within AWS IoT provides a foundation for IoT communication as part of an IoT solution Once data has been sent to AWS IoT a solution is able to leverage an ecosystem of AWS services spanning databases mobile services big data analytics machine learning and more Device Gateway A device gateway is responsible for maintaining the sessions and subscriptions for all connected 
devices in an IoT solution The AWS IoT Device Gateway enables secure bi directional communication between connected devices and the AWS platf orm over MQTT Web Sock ets and HTTP Communication protocols such as MQTT and HTTP enable a company to utilize industry ArchivedAmazon Web Services – Core Tenets of IoT Page 5 standard protocol s instead of using a proprietary protocol that would limit future interoperability As a publish and subscribe protoco l MQTT inherently encourages scalable fault tolerant communication patterns and fosters a wide range of communication options among devices and the Device Gateway These message patterns range from communication between two devices to broadcast pattern s where one device can send a message to a large field of devices over a shared topic In addition the MQTT protocol exposes different levels of Quality of Service (QoS) to control the retransmission and delivery of message s as they are published to subscr ibers The combination of p ublish and subscribe with QoS not only opens the possibilities for IoT solutions to control how devices interact in a solution but also drive more predictability in how messages are delivered acknowledged and retried in the ev ent of network or device failures Shadows Device Registry and Rules Engine AWS IoT consists of additional features that are essential to building a robust IoT application The AWS IoT service includes the R ules Engine which is capable of filtering transforming and forwarding device messages as they are received by the Device Gateway The Rules Engine utilizes a SQL based syntax that selects data from message payloads and triggers actions based on the characteristics of the IoT data AWS IoT also provi des a Device Shadow that maintains a virtual representation of a device The Device Shadow acts as a message channel to send commands reliably to a device and store the last known state of a device in the AWS platform For managing the lifecycle of a fleet of devices AWS IoT has a Device Registry The Device Registry is the central location for storing and querying a predefined set of attributes related to each thing The Device Registry supports the creation of a holistic management view for an IoT solution to control the associations between things shadows permissions and identities Security and Identity For connected devices an IoT platform should utilize concepts of identity least privilege encryption and authorization throughout the hardware and software development lifecycle AWS IoT encrypts traffic to and from the service over Transport Layer Security (TLS) with support for most major cipher suites For identification AWS IoT requires a connected d evice to authenticate using a X509 certificate Each certificate must be provisioned activated and then ArchivedAmazon Web Services – Core Tenets of IoT Page 6 installed on a device before it can be used as a valid identity with AWS IoT In order to support this separation of identity and access for devices AWS IoT provides IoT Policies for device identities AWS IoT also utilizes AWS Identity and Access Management ( AWS IAM) policies for AWS users groups and roles By using IoT Policies an organization has control over allowing and denying communication s on IoT topics for each specific device’s identity AWS IoT policies certificates and AWS IAM are designed for explicit whitelist configur ation of the communication channels of every device in a company’s AWS IoT ecosystem Event Driven Services In order to achieve the tenets of scalability and flexibility in 
an IoT solution an organization should incorporate the techniques of an event driven architecture An e vent driven architecture fosters scalable and decoupled communication through the creat ion storage consumption and reaction to events of interest that occur in an IoT solution Messages that are generated in an IoT solution should first be categorized and mapped to a series of events A n IoT solution should then associate these events with business logic that execute s commands and possibly generate s additional events in the IoT system The AWS platform provides several application services for building a distributed event driven IoT architecture Foundationally event driven architectures rely on the ability to durably store and transfer events through an ecosystem of interested subscribers In order to support decoupled event orchestration the AWS platform has several application services that are designed for reliable event storage and highly scalable event driven computation An event driven IoT solution should utilize Amazon Simple Queue Service ( Amazon SQS) Amazon Simple Notification Service ( Amazon SNS ) and AWS Lambda as foundational applica tion components for creat ing simple and complex event workflow s Amazon SQS is a fast durable scalable and fully managed message queuing service Amazon SNS is a web service that publishes messages from an application and immediately delivers them to su bscribers or other applications AWS Lambda is designed to run code in response to events while the underlying computer resources are automatically managed AWS Lambda can receive and respond to notifications directly from other AWS services In an event driven IoT architecture AWS Lambda is where the business logic is executed to determine when events of interest have occurred in the context of an IoT ecosystem ArchivedAmazon Web Services – Core Tenets of IoT Page 7 AWS services such as Amazon SQS Amazon SNS and AWS Lambda can separate the consuming of events from the processing and business logic applied to t hose events This separation of responsibilities creates flexibility and agility in an end toend solution This separation enables the rapid modification of event trigger logic or the logic used t o aggregate contextual data between parts of a system Finally this separation allows changes to be introduce d in an IoT solution without blocking the continuous stream of data being sent between end devices and the AWS platform Automation and DevOps In IoT solutions the initial release of an application is the beginning of a long term approach to constant ly refine the business advantages of an IoT strategy After the first release of an application a majority of time and effort will be spent adding new features to the current IoT solution With the tenet of remaining agile throughout the solution lifecycle customers should evaluate services that enable rapid development and deployment as business needs change Unlike traditional web architectures where DevOps technologies only apply to the backend servers an IoT application will also require the ability to incrementally roll out changes to disparate globally connected devices With the AWS platfo rm a company can implement server side and device side DevOps practices to automate operation s Applications deployed in the AWS cloud platform can take advantage of several DevOps technologies on AWS For an overview of AWS DevOps we recommend reviewing the document Introduction to DevOps on AWS 1 Although most solutions will differ in deployment and operations 
Automation and DevOps

In IoT solutions, the initial release of an application is the beginning of a long-term approach to constantly refining the business advantages of an IoT strategy. After the first release of an application, a majority of time and effort will be spent adding new features to the current IoT solution. With the tenet of remaining agile throughout the solution lifecycle, customers should evaluate services that enable rapid development and deployment as business needs change. Unlike traditional web architectures, where DevOps technologies apply only to the backend servers, an IoT application will also require the ability to incrementally roll out changes to disparate, globally connected devices. With the AWS platform, a company can implement server-side and device-side DevOps practices to automate operations.

Applications deployed on the AWS cloud platform can take advantage of several DevOps technologies on AWS. For an overview of AWS DevOps, we recommend reviewing the document Introduction to DevOps on AWS.1 Although most solutions will differ in deployment and operations requirements, IoT solutions can utilize AWS CloudFormation to define their server-side infrastructure as code. Infrastructure treated as code has the benefits of being reproducible, testable, and more easily deployable across other AWS Regions. Enterprise organizations that utilize AWS CloudFormation, in addition to other DevOps tools, greatly increase their agility and pace of application changes.

In order to design an IoT solution that adheres to the tenets of security and agility, organizations must also update their connected devices after they have been deployed into the environment. Firmware updates give a company a mechanism to add new features to a device, and are a critical path for delivering security patches during the lifetime of a device. To implement firmware updates to connected devices, an IoT solution should first store the firmware in a globally accessible service such as Amazon Simple Storage Service (Amazon S3) for secure, durable, highly scalable cloud storage. Then the IoT solution can implement Amazon CloudFront, a global content delivery network (CDN) service, to bring the firmware stored in Amazon S3 to lower-latency points of presence for connected devices. Finally, a customer can leverage the AWS IoT Shadow to push a command to a device, requesting that it download the new version of firmware from a pre-signed Amazon CloudFront URL that restricts access to the firmware objects available through the CDN. Once the upgrade is complete, the device should acknowledge success by sending a message back into the IoT solution. By orchestrating this small set of services for firmware updates, customers control their device DevOps approach and can scale it in a way that aligns with their overall IoT strategy. A sketch of the shadow-based command follows.
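As a rough illustration of the shadow-based command, the following boto3 sketch writes the desired firmware version and download URL into a device's shadow. The thing name, the shadow field names, and the pre-signed CloudFront URL (generated separately) are assumptions for illustration, not a prescribed schema.

```python
import json

import boto3

# The IoT data plane client talks to the AWS IoT Device Gateway endpoint.
iot_data = boto3.client("iot-data")


def request_firmware_update(thing_name, version, presigned_url):
    # Setting 'desired' state signals the device to fetch the new firmware;
    # the device later updates 'reported' state to acknowledge success.
    payload = {
        "state": {
            "desired": {
                "firmwareVersion": version,   # assumed field name
                "firmwareUrl": presigned_url,  # assumed field name
            }
        }
    }
    iot_data.update_thing_shadow(thingName=thing_name,
                                 payload=json.dumps(payload))


# Example (hypothetical thing name and URL):
# request_firmware_update("sensor-42", "1.4.0",
#                         "https://dxxxx.cloudfront.net/fw/1.4.0.bin?Signature=...")
```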
In IoT, automation and DevOps procedures expand beyond the application services deployed on the AWS platform to include the connected devices that have been deployed as part of the overall IoT architecture. By designing a system that can easily perform regular, global updates for new software and firmware changes, organizations can iterate on ways to increase value from their IoT solution and continuously innovate as new market opportunities arise.

Administration and Security

Security in IoT is more than data anonymization; it is the ability to have insight, auditability, and control throughout a system. IoT security includes the capability to monitor events throughout the solution and react to those events to achieve the desired compliance and governance.

Security at AWS is our number one priority. Through the AWS Shared Responsibility Model, an organization has the flexibility, agility, and control to implement its security requirements.2 AWS manages the security of the cloud, while customers are responsible for security in the cloud. Customers maintain control over what security mechanisms they implement to protect their data, applications, devices, systems, and networks. In addition, companies can leverage the broad set of security and administrative tools that AWS and AWS partners provide to create a strong, logically isolated, and secure IoT solution for a fleet of devices.

The first service that should be enabled for monitoring and visibility is AWS CloudTrail. AWS CloudTrail is a web service that records AWS API calls for an account and delivers log files to Amazon S3. After enabling AWS CloudTrail, a solution should build security and governance processes that are based on the real-time input from API calls made across an AWS account. AWS CloudTrail provides an additional level of visibility and flexibility in creating and iterating on operational openness in a system.

In addition to logging API calls, customers should enable Amazon CloudWatch for all AWS services used in the system. Amazon CloudWatch allows applications to monitor AWS metrics and to create custom metrics generated by an application; these metrics can then trigger alerts. Alongside Amazon CloudWatch metrics, Amazon CloudWatch Logs stores additional logs from AWS services or customer applications and can trigger events based on those additional metrics. AWS services such as AWS IoT integrate directly with Amazon CloudWatch Logs; these logs can be dynamically read as a stream of data and processed using the business logic and context of the system for real-time detection of anomalies or security threats (see the sketch below). By pairing services like Amazon CloudWatch and Amazon CloudTrail with the capabilities of AWS IoT identities and policies, a company can immediately collect valuable data around security practices at the start of the IoT strategy, and meet the need for a proactive implementation of security within their IoT solution.
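As one hedged example of this streaming pattern, the boto3 sketch below attaches a subscription filter that forwards a log group's events to a Lambda function for real-time analysis. The AWSIotLogsV2 log group name, Region, account ID, and function name are assumptions; the target function must separately grant CloudWatch Logs permission to invoke it (not shown).

```python
import boto3

logs = boto3.client("logs")

# Stream every event in the (assumed) AWS IoT log group to a
# Lambda-based analyzer; an empty filter pattern matches all events.
logs.put_subscription_filter(
    logGroupName="AWSIotLogsV2",
    filterName="iot-anomaly-detection",
    filterPattern="",
    destinationArn=("arn:aws:lambda:us-east-1:123456789012:"
                    "function:detect-anomalies"),
)
```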
Bringing Services and Solutions Together

To better understand customer usage, predict future trends, or run an IoT fleet more efficiently, an organization needs to collect and process the potentially vast amount of data gathered from connected devices, in addition to connecting with and managing large fleets of things. AWS provides a breadth of services for collecting and analyzing large-scale datasets, often called big data. These services may be integrated tightly within an IoT solution to support collecting, processing, and analyzing the solution's data, as well as proving or disproving hypotheses based upon IoT data. The ability to formulate and answer questions with the same platform one is using to manage fleets of things ultimately empowers an organization to avoid undifferentiated work and to unlock business innovations in an agile fashion.

The high-level, cohesive architectural perspective of an IoT solution that brings IoT, big data, and other services together is called the Pragma Architecture. The Pragma Architecture is comprised of layers of solutions:

• Things – The device and fleet of devices
• Control Layer – The control point for access to the Speed Layer and the nexus for fleet management
• Speed Layer – The inbound, high-bandwidth device telemetry data bus and the outbound device command bus
• Serving Layer – The access point for systems and humans to interact with the devices in a fleet, to perform analysis, to archive and correlate data, and to use real-time views of the fleet

Pragma Architecture

The Pragma Architecture is a single, cohesive perspective of how the core tenets of IoT manifest as an IoT solution when using AWS services. One scenario of a Pragma Architecture-based IoT solution is the processing of data emitted by devices, also known as telemetry. In the diagram above, after a device authenticates using a device certificate obtained from the AWS IoT service in the Control Layer, the device regularly sends telemetry data to the AWS IoT Device Gateway in the Speed Layer. That telemetry data is then processed by the IoT Rules Engine as an event, to be output by Amazon Kinesis or AWS Lambda for use by web users interacting with the Serving Layer.

Another scenario of a Pragma Architecture-based IoT solution is sending a command to a device. In the diagram above, the user's application would write the desired command value to the target device's IoT Shadow. Then the AWS IoT Shadow and the Device Gateway work together to overcome an intermittent network and convey the command to the specific device.

These are just two device-focused scenarios from a broad tapestry of solutions that fit the Pragma Architecture. Neither of these scenarios addresses the need to process the potentially vast amount of data gathered from connected devices; this is where having an integrated big data backend starts to become important. The big data backend in this diagram is congruent with the entire ecosystem of real-time and batch-mode big data solutions that customers already leverage the AWS platform to create. Simply put, from the big data perspective, IoT telemetry equals "ingested data" in big data solutions. If you'd like to learn more about big data solutions on AWS, please see the further reading list below. There is a colorful and broad tapestry of big data solutions that companies have already created using the AWS platform. The Pragma Architecture shows that, by building an IoT solution on that same platform, the entire ecosystem of big data solutions is available.

Summary

Defining your Internet of Things strategy can be a truly transformational endeavor that opens the door for unique business innovations. As organizations start striving for their own IoT innovations, it is critical to select a platform that promotes the core tenets: business and technical agility, scalability, cost, and security. The AWS platform over-delivers on the core tenets of an IoT solution by not just providing IoT services, but offering those services alongside a broad, deep, and highly regarded set of platform services across a global footprint. This over-delivery also brings freedoms that increase your business's control over its own destiny, and enables your business's IoT solutions to more rapidly iterate toward the outcomes sought in your IoT strategy. As next steps in evaluating IoT platforms, we recommend the further reading section below to learn more about AWS IoT, big data solutions on AWS, and customer case studies on AWS.

Contributors

The following individuals authored this document:

• Olawale Oladehin, Solutions Architect, Amazon Web Services
• Brett Francis, Principal Solutions Architect, Amazon Web Services

Further Reading

For additional reading, please consult the following sources:

• AWS IoT Service3
• Getting Started with AWS IoT4
• AWS Case Studies5
• Big Data Analytics Options on AWS6

Notes

1 https://d0.awsstatic.com/whitepapers/AWS_DevOps.pdf
2 https://aws.amazon.com/compliance/shared-responsibility-model/
3 https://aws.amazon.com/iot/
4 https://aws.amazon.com/iot/getting-started/
5 https://aws.amazon.com/solutions/case-studies/
6 https://d0.awsstatic.com/whitepapers/Big_Data_Analytics_Options_on_AWS.pdf
Security Overview of AWS Lambda

An In-Depth Look at AWS Lambda Security

January 2021

This paper has been archived. For the latest version of this document, see: https://docs.aws.amazon.com/whitepapers/latest/security-overview-aws-lambda/welcome.html

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents AWS's current product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS's products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. AWS's responsibilities and liabilities to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Abstract
Introduction
About AWS Lambda
Benefits of Lambda
Cost for Running Lambda-Based Applications
The Shared Responsibility Model
Lambda Functions and Layers
Lambda Invoke Modes
Lambda Executions
Lambda Execution Environments
Execution Role
Lambda MicroVMs and Workers
Lambda Isolation Technologies
Storage and State
Runtime Maintenance in Lambda
Monitoring and Auditing Lambda Functions
Amazon CloudWatch
AWS CloudTrail
AWS X-Ray
AWS Config
Architecting and Operating Lambda Functions
Lambda and Compliance
Lambda Event Sources
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

This whitepaper presents a deep dive into the AWS Lambda service through a security lens. It provides a well-rounded picture of the service, which is useful for new adopters, and deepens understanding of Lambda for current users. The intended audience for this whitepaper is Chief Information Security Officers (CISOs), information security groups, security engineers, enterprise architects, compliance teams, and any others interested in understanding the underpinnings of AWS Lambda.

Introduction

Today, more workloads use AWS Lambda to achieve scalability, performance, and cost efficiency without managing the underlying compute infrastructure. These workloads scale to thousands of concurrent requests per second. Lambda is used by hundreds of thousands of Amazon Web Services (AWS) customers to serve trillions of requests every month.

Lambda is suitable for mission-critical applications in many industries. A broad variety of customers, from media and entertainment to financial services and other regulated industries, take advantage of Lambda. These customers decrease time to market, optimize costs, and improve agility by focusing on what they do best: running their business. The managed runtime environment model enables Lambda to manage much of the implementation detail of running serverless workloads. This model further reduces the attack surface while making cloud security simpler. This whitepaper presents the underpinnings of that model, along with best practices, to developers, security analysts, security and compliance teams, and other stakeholders.

About AWS Lambda

Lambda is an event-driven, serverless compute service that extends other AWS services with custom logic, or creates backend services that operate with scale, performance, and security in mind. Lambda can be configured to automatically run code in response to multiple events, such as HTTP requests through Amazon API Gateway, modifications to objects in Amazon Simple Storage Service (Amazon S3) buckets, table updates in Amazon DynamoDB, and state transitions in AWS Step Functions. A sketch of such an event handler follows.
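As a minimal sketch of this event-driven model, the Python handler below reacts to an S3 object-created notification. The record fields follow the standard S3 event shape; the processing itself is a placeholder.

```python
def handler(event, context):
    # Each record describes one S3 object-created notification.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder: apply your own logic to the new object.
        print(f"New object: s3://{bucket}/{key}")
```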
Lambda runs code on a highly available compute infrastructure and performs all the administration of the underlying platform, including server and operating system maintenance, capacity provisioning and automatic scaling, patching, code monitoring, and logging. With Lambda, you can just upload your code and configure when to invoke it; Lambda takes care of everything else required to run your code.

Lambda integrates with many other AWS services and enables you to create serverless applications or backend services, ranging from periodically triggered simple automation tasks to full-fledged microservices applications. Lambda can be configured to access resources within your Amazon Virtual Private Cloud (Amazon VPC) and, by extension, your on-premises resources. Lambda integrates with AWS Identity and Access Management (IAM), which you can leverage to protect your data and configure fine-grained access controls using a variety of access management strategies, while maintaining a high level of security and auditing to help you meet your compliance needs.

Benefits of Lambda

Customers who want to unleash the creativity and speed of their development organizations, without compromising their IT team's ability to provide a scalable, cost-effective, and manageable infrastructure, find that Lambda lets them trade operational complexity for agility and better pricing, without compromising on scale or reliability. Lambda offers many benefits, including the following:

No Servers to Manage – Lambda runs your code on highly available, fault-tolerant infrastructure spread across multiple Availability Zones (AZs) in a single Region, seamlessly deploying code and providing all the administration, maintenance, and patching of the infrastructure. Lambda also provides built-in logging and monitoring, including integration with Amazon CloudWatch, CloudWatch Logs, and AWS CloudTrail.

Continuous Scaling – Lambda precisely manages scaling of your functions (or application) by running event-triggered code in parallel and processing each event individually.

Millisecond Metering – With Lambda, you are charged for every 1 millisecond (ms) your code executes and for the number of times your code is triggered. You pay for consistent throughput or execution duration instead of by server unit.

Increases Innovation – Lambda frees up your programming resources by taking over the infrastructure management, allowing you to focus on innovation and development of business logic.

Modernize Your Applications – Lambda enables you to use functions with pre-trained machine learning models to inject artificial intelligence into applications easily. A single application programming interface (API) request can classify images, analyze videos, convert speech to text, perform natural language processing, and more.

Rich Ecosystem – Lambda supports developers through AWS Serverless Application Repository for discovering, deploying, and publishing serverless applications; AWS Serverless Application Model for building serverless applications; and integrations with various integrated development environments (IDEs) like AWS Cloud9, AWS Toolkit for Visual Studio, AWS Tools for Visual Studio Team Services, and several others.
Lambda is integrated with additional AWS services to provide you a rich ecosystem for building serverless applications.

Cost for Running Lambda-Based Applications

Lambda offers a granular, pay-as-you-go pricing model. With this model, you are charged based on the number of function invocations and their duration (the time it takes for the code to run). In addition to this flexible pricing model, Lambda also offers 1 million perpetually free requests per month, which enables many customers to automate their processes without any costs.

The Shared Responsibility Model

At AWS, security and compliance is a shared responsibility between AWS and the customer. This shared responsibility model can help relieve your operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. For Lambda, AWS manages the underlying infrastructure and application platform, the operating system, and the execution environment. You are responsible for the security of your code and for identity and access management (IAM) to the Lambda service and within your function. Figure 1 shows the shared responsibility model as it applies to the common and distinct components of Lambda. AWS responsibilities appear in orange and customer responsibilities appear in blue.

Figure 1 – Shared Responsibility Model for AWS Lambda

Lambda Functions and Layers

With Lambda, you can run code with virtually zero administration of the underlying infrastructure. You are responsible only for the code that you provide Lambda, and for the configuration of how Lambda runs that code on your behalf. Today, Lambda supports two types of code resources: functions and layers. A function is a resource which can be invoked to run your code in Lambda. Functions can include a common or shared resource called a layer. Layers can be used to share common code or data across different functions or AWS accounts. You are responsible for the management of all the code contained within your functions or layers.

When Lambda receives the function or layer code from a customer, Lambda protects access to it by encrypting it at rest using AWS Key Management Service (AWS KMS) and in transit by using TLS 1.2+. You can manage access to your functions and layers through AWS IAM policies or through resource-based permissions. For a full list of supported IAM features on Lambda, see AWS Services that work with IAM. You can also control the entire lifecycle of your functions and layers through Lambda's control plane APIs. For example, you can choose to delete your function by calling DeleteFunction, or revoke permissions from another account by calling RemovePermission, as in the sketch below.
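A minimal sketch of those two control plane calls using boto3, assuming a function named my-function and a previously added permission statement ID:

```python
import boto3

lam = boto3.client("lambda")

# Revoke a previously granted resource-based permission by its statement ID.
lam.remove_permission(FunctionName="my-function",
                      StatementId="cross-account-invoke")

# Delete the function (and its versions) entirely.
lam.delete_function(FunctionName="my-function")
```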
Lambda Invoke Modes

The Invoke API can be called in two modes: event mode and request-response mode.

• Event mode queues the payload for an asynchronous invocation.
• Request-response mode synchronously invokes the function with the provided payload and returns a response immediately.

In both cases, the function execution is always performed in a Lambda execution environment, but the payload takes different paths. For more information, see Lambda Execution Environments in this document. You can also use other AWS services that perform invocations on your behalf. Which invoke mode is used depends on which AWS service you are using and how it is configured. For additional information on how other AWS services integrate with Lambda, see Using AWS Lambda with other services.

When Lambda receives a request-response invoke, it is passed to the invoke service directly. If the invoke service is unavailable, callers may temporarily queue the payload client-side to retry the invocation a set number of times. If the invoke service receives the payload, the service then attempts to identify an available execution environment for the request and passes the payload to that execution environment to complete the invocation. If no existing or appropriate execution environment exists, one will be dynamically created in response to the request. While in transit, invoke payloads sent to the invoke service are secured with TLS 1.2+. Traffic within the Lambda service (from the load balancer down) passes through an isolated internal virtual private cloud (VPC), owned by the Lambda service, within the AWS Region to which the request was sent.

Figure 2 – Invocation model for AWS Lambda: request-response

Event invocation mode payloads are always queued for processing before invocation. All payloads are queued for processing in an Amazon Simple Queue Service (Amazon SQS) queue. Queued events are always secured in transit with TLS 1.2+, but they are not currently encrypted at rest. The Amazon SQS queues used by Lambda are managed by the Lambda service and are not visible to you as a customer. Queued events can be stored in a shared queue, but may be migrated or assigned to dedicated queues depending on a number of factors that cannot be directly controlled by customers (for example, rate of invokes, size of events, and so on).

Queued events are retrieved in batches by Lambda's poller fleet. The poller fleet is a group of EC2 instances whose purpose is to process queued event invocations which have not yet been processed. When the poller fleet retrieves a queued event that it needs to process, it does so by passing it to the invoke service, just like a customer would in a request-response mode invoke. If the invocation cannot be performed, the poller fleet will temporarily store the event in memory on the host, until it is either able to successfully complete the execution or until the number of retry attempts has been exceeded. No payload data is ever written to disk on the poller fleet itself. The polling fleet can be tasked across AWS customers, allowing for the shortest time to invocation. For more information about which services may use the event invocation mode, see Using AWS Lambda with other services. The sketch below shows both invoke modes from a caller's perspective.
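A minimal sketch of both invoke modes using boto3; the function name and payload are placeholders:

```python
import json

import boto3

lam = boto3.client("lambda")
payload = json.dumps({"orderId": "1234"}).encode()  # hypothetical payload

# Request-response mode: blocks until the function returns its result.
resp = lam.invoke(FunctionName="my-function",
                  InvocationType="RequestResponse",
                  Payload=payload)
result = json.load(resp["Payload"])

# Event mode: returns HTTP 202 once the payload has been queued for
# asynchronous processing by the poller fleet.
lam.invoke(FunctionName="my-function",
           InvocationType="Event",
           Payload=payload)
```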
Lambda Executions

When Lambda executes a function on your behalf, it manages both provisioning and configuring the underlying systems necessary to run your code. This enables your developers to focus on business logic and writing code, not on administering and managing underlying systems. The Lambda service is split into the control plane and the data plane. Each plane serves a distinct purpose in the service. The control plane provides the management APIs (for example, CreateFunction, UpdateFunctionCode, PublishLayerVersion, and so on) and manages integrations with all AWS services. Communications to Lambda's control plane are protected in transit by TLS. All customer data stored within Lambda's control plane is encrypted at rest through the use of AWS KMS, which is designed to protect it from unauthorized disclosure or tampering.

The data plane is Lambda's Invoke API that triggers the invocation of Lambda functions. When a Lambda function is invoked, the data plane allocates an execution environment on an AWS Lambda Worker (or simply Worker, a type of Amazon EC2 instance) to that function version, or chooses an existing execution environment that has already been set up for that function version, which it then uses to complete the invocation. For more information, see the Lambda MicroVMs and Workers section of this document.

Lambda Execution Environments

Each invocation is routed by Lambda's invoke service to an execution environment on a Worker that is able to service the request. Other than through the data plane, customers and other users cannot directly initiate inbound/ingress network communications with an execution environment. This helps to ensure that communications to your execution environment are authenticated and authorized.

Execution environments are reserved for a specific function version and cannot be reused across function versions, functions, or AWS accounts. This means a single function which has two different versions results in at least two unique execution environments. Each execution environment may only be used for one concurrent invocation at a time, and environments may be reused across multiple invocations of the same function version for performance reasons. Depending on a number of factors (for example, rate of invocation, function configuration, and so on), one or more execution environments may exist for a given function version. With this approach, Lambda is able to provide function-version-level isolation for its customers.

Lambda does not currently isolate invokes within a function version's execution environment. What this means is that one invoke may leave a state that affects the next invoke (for example, files written to /tmp or data in memory). If you want to ensure that one invoke cannot affect another invoke, Lambda recommends that you create additional, distinct functions. For example, you could create distinct functions for complex parsing operations, which are more error-prone, and reuse functions which do not perform security-sensitive operations. Lambda does not currently limit the number of functions that customers can create. For more information about limits, see the Lambda quotas page.

Execution environments are continuously monitored and managed by Lambda, and they may be created or destroyed for any number of reasons, including but not limited to:

• A new invoke arrives and no suitable execution environment exists
• An internal runtime or Worker software deployment occurs
• A new provisioned concurrency configuration is published
• The lease time on the execution environment or the Worker is approaching or has exceeded its maximum lifetime
• Other internal workload rebalancing processes

Customers can manage the number of pre-provisioned execution environments that exist for a function version by configuring provisioned concurrency on their function configuration. When configured to do so, Lambda will create, manage, and ensure that the configured number of execution environments always exists. This gives customers greater control over the start-up performance of their serverless applications at any scale. Other than through a provisioned concurrency configuration, customers cannot deterministically control the number of execution environments that are created or managed by Lambda in response to invocations.
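A short sketch of configuring provisioned concurrency with boto3; the function name and alias are placeholders, and provisioned concurrency must target a published version or alias rather than $LATEST:

```python
import boto3

lam = boto3.client("lambda")

# Keep 50 execution environments pre-provisioned for the 'live' alias.
lam.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="live",
    ProvisionedConcurrentExecutions=50,
)
```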
Execution Role

Each Lambda function must also be configured with an execution role, which is an IAM role that is assumed by the Lambda service when performing control plane and data plane operations related to the function. The Lambda service assumes this role to fetch temporary security credentials, which are then available as environment variables during a function's invocation. For performance reasons, the Lambda service will cache these credentials, and may reuse them across different execution environments which use the same execution role.

To ensure adherence to the least privilege principle, Lambda recommends that each function has its own unique role, and that it is configured with the minimum set of permissions it requires. The Lambda service may also assume the execution role to perform certain control plane operations, such as those related to creating and configuring elastic network interfaces (ENIs) for VPC functions, sending logs to Amazon CloudWatch, sending traces to AWS X-Ray, or other non-invoke-related operations. Customers can always review and audit these use cases by reviewing audit logs in AWS CloudTrail. For more information on this subject, see the AWS Lambda execution role documentation page.
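A hedged sketch of creating a dedicated, minimal execution role with boto3; the role and function names are placeholders, and the AWSLambdaBasicExecutionRole managed policy grants only CloudWatch Logs write access, so further least-privilege permissions should be added per function as needed:

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy: only the Lambda service may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="my-function-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Attach the minimal managed policy (CloudWatch Logs only).
iam.attach_role_policy(
    RoleName="my-function-role",
    PolicyArn=("arn:aws:iam::aws:policy/service-role/"
               "AWSLambdaBasicExecutionRole"),
)
```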
Lambda MicroVMs and Workers

Lambda creates its execution environments on a fleet of EC2 instances called AWS Lambda Workers. Workers are bare-metal EC2 Nitro instances which are launched and managed by Lambda in a separate, isolated AWS account which is not visible to customers. Workers have one or more hardware-virtualized micro virtual machines (MVMs) created by Firecracker. Firecracker is an open-source virtual machine monitor (VMM) that uses Linux's Kernel-based Virtual Machine (KVM) to create and manage MVMs. It is purpose-built for creating and managing secure, multi-tenant container and function-based services that provide serverless operational models. For more information about Firecracker's security model, see the Firecracker project website.

As a part of the shared responsibility model, Lambda is responsible for maintaining the security configuration, controls, and patching level of the Workers. The Lambda team uses Amazon Inspector to discover known potential security issues, as well as other custom security issue notification mechanisms and pre-disclosure lists, so that customers don't need to manage the underlying security posture of their execution environment.

Figure 3 – Isolation model for AWS Lambda Workers

Workers have a maximum lease lifetime of 14 hours. When a Worker approaches the maximum lease time, no further invocations are routed to it, MVMs are gracefully terminated, and the underlying Worker instance is terminated. Lambda continuously monitors and alarms on the lifecycle activities of its fleet.

All data plane communications to Workers are encrypted using the Advanced Encryption Standard with Galois/Counter Mode (AES-GCM). Other than through data plane operations, customers cannot directly interact with a Worker, as it is hosted in a network-isolated Amazon VPC managed by Lambda in Lambda's service accounts.

When a Worker needs to create a new execution environment, it is given time-limited authorization to access customer function artifacts. These artifacts are specifically optimized for Lambda's execution environment and Workers. Function code which is uploaded using the ZIP format is optimized once and then stored in an encrypted format, using an AWS managed key and AES-GCM. Functions uploaded to Lambda using the container image format are also optimized. The container image is first downloaded from its original source, optimized into distinct chunks, and then stored as encrypted chunks using an authenticated convergent encryption method, which uses a combination of AES-CTR, AES-GCM, and a SHA-256 MAC. The convergent encryption method allows Lambda to securely deduplicate encrypted chunks. All keys required to decrypt customer data are protected using a customer managed AWS KMS customer master key (CMK). CMK usage by the Lambda service is available to customers in AWS CloudTrail logs for tracking and auditing.

Lambda Isolation Technologies

Lambda uses a variety of open-source and proprietary isolation technologies to protect Workers and execution environments. Each execution environment contains a dedicated copy of the following items:

• The code of the particular function version
• Any AWS Lambda layers selected for your function version
• The chosen function runtime (for example, Java 11, Node.js 12, Python 3.8, and so on) or the function's custom runtime
• A writeable /tmp directory
• A minimal Linux user space based on Amazon Linux 2

Execution environments are isolated from one another using several container-like technologies built into the Linux kernel, along with AWS proprietary isolation technologies. These technologies include:

• cgroups – Used to constrain the function's access to CPU and memory
• namespaces – Each execution environment runs in a dedicated namespace. This is achieved by having unique group process IDs, user IDs, network interfaces, and other resources managed by the Linux kernel
• seccomp-bpf – Used to limit the system calls (syscalls) that can be used from within the execution environment
• iptables and routing tables – Used to prevent ingress network communications and to isolate network connections between MVMs
• chroot – Provides scoped access to the underlying filesystem
• Firecracker configuration – Used to rate-limit block device and network device throughput
• Firecracker security features – For more information about Firecracker's current security design, please review Firecracker's latest design document

Along with AWS proprietary isolation technologies, these mechanisms provide strong isolation between execution environments.
Storage and State

Execution environments are never reused across different function versions or customers, but a single environment can be reused between invocations of the same function version. This means data and state can persist between invocations. Data and/or state may continue to persist for hours before it is destroyed as a part of normal execution environment lifecycle management. For performance reasons, functions can take advantage of this behavior to improve efficiency by keeping and reusing local caches or long-lived connections between invocations. Inside an execution environment, these multiple invocations are handled by a single process, so any process-wide state (such as a static state in Java) can be available for future invocations to reuse, if the invocation occurs on a reused execution environment.

Each Lambda execution environment also includes a writeable filesystem, available at /tmp. This storage is not accessible or shared across execution environments. As with the process state, files written to /tmp remain for the lifetime of the execution environment. This allows expensive transfer operations, such as downloading machine learning (ML) models, to be amortized across multiple invocations. Functions that don't want to persist data between invocations should either not write to /tmp, or delete their files from /tmp between invocations. The /tmp directory is backed by an EC2 instance store and is encrypted at rest. Customers that want to persist data to the file system outside of the execution environment should consider using Lambda's integration with Amazon Elastic File System (Amazon EFS). For more information, see Using Amazon EFS with AWS Lambda.

If customers don't want to persist data or state across invocations, Lambda recommends that they do not use the execution context or execution environment to store data or state. If customers want to actively prevent data or state leaking across invocations, Lambda recommends that they create distinct functions for each state. Lambda does not recommend that customers use or store security-sensitive state in the execution environment, as it may be mutated between invocations; we recommend recalculating such state on each invocation instead. The sketch below illustrates the caching behavior described above.
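A minimal sketch of amortizing an expensive transfer across invocations by caching in /tmp and in process memory; the model download and load steps are simplified placeholders for an expensive operation such as an S3 download:

```python
import os
import pickle

MODEL_PATH = "/tmp/model.bin"
_model = None  # process-wide state; survives on a reused execution environment


def download_model(path):
    # Placeholder for an expensive transfer, e.g., fetching from Amazon S3.
    with open(path, "wb") as f:
        pickle.dump({"weights": [0.1, 0.2]}, f)


def load_model(path):
    # Placeholder deserialization step.
    with open(path, "rb") as f:
        return pickle.load(f)


def handler(event, context):
    global _model
    if _model is None:
        # Cold path: fetch and load once per execution environment.
        if not os.path.exists(MODEL_PATH):
            download_model(MODEL_PATH)
        _model = load_model(MODEL_PATH)
    # Warm path: later invocations on this environment reuse the cache.
    return {"loaded": True, "model_keys": list(_model)}
```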
Runtime Maintenance in Lambda

Lambda provides support for multiple programming languages through the use of runtimes, including Java 11, Python 3.8, Go 1.x, Node.js 12, .NET Core 3.1, and others. For a complete list of currently supported runtimes, see AWS Lambda Runtimes. Lambda provides support for these runtimes by continuously scanning for and deploying compatible updates and security patches, and by performing other runtime maintenance activity. This enables customers to focus on just the maintenance and security of any code included in their functions and layers. The Lambda team uses Amazon Inspector to discover known security issues, as well as other custom security issue notification mechanisms and pre-disclosure lists, to ensure that our runtime languages and execution environment remain patched. If any new patches or updates are identified, Lambda tests and deploys the runtime updates without any involvement from customers. For more information about Lambda's compliance program, see the Lambda and Compliance section of this document.

Typically, no action is required to pick up the latest patches for supported Lambda runtimes, but sometimes action might be required to test patches before they are deployed (for example, known incompatible runtime patches). If any action is required by customers, Lambda will contact them through the Personal Health Dashboard, through the AWS account's email, or through other means, with the specific actions required to be taken.

Customers can use other programming languages in Lambda by implementing a custom runtime. For custom runtimes, maintenance of the runtime becomes the customer's responsibility, including making sure that the custom runtime includes the latest security patches. For more information, see Custom AWS Lambda runtimes in the AWS Lambda Developer Guide.

When upstream runtime language maintainers mark their language end-of-life (EOL), Lambda honors this by no longer supporting the runtime language version. When runtime versions are marked as deprecated in Lambda, Lambda stops supporting the creation of new functions, and updates to existing functions, that were authored in the deprecated runtime. To alert customers of upcoming runtime deprecations, Lambda sends out notifications of the upcoming deprecation date and what they can expect. Lambda does not provide security updates, technical support, or hotfixes for deprecated runtimes, and reserves the right to disable invocations of functions configured to run on a deprecated runtime at any time. If customers want to continue to run deprecated or unsupported runtime versions, they can create their own custom AWS Lambda runtime. For details on when runtimes are deprecated, see the AWS Lambda Runtime support policy.

Monitoring and Auditing Lambda Functions

You can monitor and audit Lambda functions with many AWS services and methods, including the following services.

Amazon CloudWatch

Lambda automatically monitors Lambda functions on your behalf. Through Amazon CloudWatch, it reports metrics such as the number of requests, the execution duration per request, and the number of requests resulting in an error. These metrics are exposed at the function level, which you can then leverage to set CloudWatch alarms (a sketch of such an alarm appears at the end of this section). For a list of metrics exposed by Lambda, see Working with AWS Lambda function metrics.

AWS CloudTrail

Using AWS CloudTrail, you can implement governance, compliance, operational auditing, and risk auditing of your entire AWS account, including Lambda. CloudTrail enables you to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure, providing a complete event history of actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. Using CloudTrail, you can optionally encrypt log files using KMS, and also leverage CloudTrail log file integrity validation for positive assertion.

AWS X-Ray

Using AWS X-Ray, you can analyze and debug production, distributed, Lambda-based applications, which enables you to understand the performance of your application and its underlying services, so you can eventually identify and troubleshoot the root cause of performance issues and errors. X-Ray's end-to-end view of requests as they travel through your application shows a map of the application's underlying components, so you can analyze applications during development and in production.

AWS Config

With AWS Config, you can track configuration changes to the Lambda functions (including deleted functions), runtime environments, tags, handler name, code size, memory allocation, timeout settings, and concurrency settings, along with Lambda IAM execution role, subnet, and security group associations. This gives you a holistic view of the Lambda function's lifecycle and enables you to surface that data for potential audit and compliance requirements.
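Before moving on, here is a minimal boto3 sketch of the alarm pattern mentioned in the Amazon CloudWatch section above; the function name, threshold, and period are placeholders to adjust:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if 'my-function' reports any errors within a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="my-function-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
)
```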
Serverless Application Lens whitepaper includes topics such as logging metrics and alarming throttling and limits assigning permissions to Lambda functions and making sensitive data available to Lambda functions Lambda and Compliance As mentioned in The Shared Responsibility Model section of this document you are responsible for determining which compliance regime applies to your data After you have determined your compliance regime needs you can use the various Lambda features to match those controls You can contact AWS expert s (such as solution architects domain experts technical account managers and other human resources) for assistance However AWS cannot advise customers on whether (or which) compliance regimes are applicable to a particular use case As of November 202 0 Lambda is in scope for SOC 1 SOC 2 and SOC 3 reports which are independent third party examination reports that demonstrate how AWS achieves key compliance controls and objectives In addition Lambda maintains compliance with PCI DSS and the US He alth Insurance Portability and Accountability Act (HIPAA) among other compliance programs For an up todate list of compliance information see the AWS Services in Scope by Compliance P rogram page Because of the sensitive nature of some compliance reports they cannot be shared publicly For access to these reports you can sign in to your AWS console and use AWS Artifact a no cost self service portal for on demand access to AWS compliance reports Lambda Event Sources Lambda integrates with more than 140 AWS services via direct integration and the Amazon EventBridge event bus The commonly used Lambda event sources are: • Amazon API Gateway • Amazon CloudWatch Events ArchivedAmazon Web Services Security Overview of AWS Lambda Page 15 • Amazon CloudWatch Logs • Amazon Dy namoDB Streams • Amazon EventBridge • Amazon Kinesis Data Streams • Amazon S3 • Amazon SNS • Amazon SQS • AWS Step Functions With these event sources you can: • Use AWS IAM to manage access to the service and resources securely • Encrypt your data at rest1 All services encrypt data in transit • Access from your Amazon VPC using VPC endpoints (powered by AWS PrivateLink ) • Use Amazon CloudWatch to collect report and alarm on metrics • Use AWS CloudTrail to log continuously monitor and retain account activity related to actions across your AWS infrastructure providing a comple te event history of actions taken through the AWS Management Console AWS SDKs command line tools and other AWS services Conclusion AWS Lambda offers a powerf ul toolkit for building secure and scalable applications Many of the best practices for security and compliance in Lambda are the same as in all AWS services but some are particular to Lambda This whitepaper describes the benefits of Lambda its suitabi lity for applications and the Lambda managed runtime environment It also includes information about monitoring and auditing and security and compliance best practices As you think about your next implementation consider what you learned about Lambda and how it might improve your next workload solution Contributors Contributors to this document include: • Mayank Thakkar Senior Solutions Architect ArchivedAmazon Web Services Security Overview of AWS Lambda Page 16 • Marc Brooker Senior Principal Engineer • Osman Surkatty Senior Security Engineer Further Reading For additional information see: • Shared Responsibility Model which explains how AWS thinks about security in general • Security best practices in IAM which covers recommendations for 
Conclusion

AWS Lambda offers a powerful toolkit for building secure and scalable applications. Many of the best practices for security and compliance in Lambda are the same as in all AWS services, but some are particular to Lambda. This whitepaper describes the benefits of Lambda, its suitability for applications, and the Lambda managed runtime environment. It also includes information about monitoring and auditing, and security and compliance best practices. As you think about your next implementation, consider what you learned about Lambda and how it might improve your next workload solution.

Contributors

Contributors to this document include:

• Mayank Thakkar, Senior Solutions Architect
• Marc Brooker, Senior Principal Engineer
• Osman Surkatty, Senior Security Engineer

Further Reading

For additional information, see:

• Shared Responsibility Model, which explains how AWS thinks about security in general
• Security best practices in IAM, which covers recommendations for the AWS Identity and Access Management (IAM) service
• Serverless Application Lens, which covers the AWS Well-Architected Framework and identifies key elements to help ensure your workloads are architected according to best practices
• Introduction to AWS Security, which provides a broad introduction to thinking about security in AWS
• Amazon Web Services: Risk and Compliance, which provides an overview of compliance in AWS

Document Revisions

• March 2019 – First publication
• January 2021 – Republished with significant updates

Notes

1 At the time of publishing, encryption of data at rest was not available for Amazon EventBridge. Continue to monitor the service homepages for updates on these capabilities.
Considerations for Using AWS Products in GxP Systems
GxP Systems on AWS

Published March 2021

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
About AWS
AWS Healthcare and Life Sciences
AWS Services
AWS Cloud Security
Shared Security Responsibility Model
AWS Certifications and Attestations
Infrastructure Description and Controls
AWS Quality Management System
Quality Infrastructure and Support Processes
Software Development
AWS Products in GxP Systems
Qualification Strategy for Life Science Organizations
Supplier Assessment and Cloud Management
Cloud Platform/Landing Zone Qualification
Qualifying Building Blocks
Computer Systems Validation (CSV)
Conclusion
Contributors
Further Reading
Document Revisions
Appendix: 21 CFR 11 Controls – Shared Responsibility for use with AWS services

Abstract

This whitepaper provides information on how AWS approaches GxP-related compliance and security, and provides customers guidance on using AWS products in the context of GxP. The content has been developed based on experience with, and feedback from, AWS pharmaceutical and medical device customers, as well as software partners, who are currently using AWS products in their validated GxP systems.

Introduction

According to a recent publication by Deloitte on the outlook of global life sciences in 2020, prioritization of cloud technologies in the life sciences sector has steadily increased as customers seek out highly reliable, scalable, and secure solutions to operate their regulated IT systems. Amazon Web Services (AWS) provides cloud services designed to help customers run their most sensitive workloads in the cloud, including the computerized systems that support Good Manufacturing Practice, Good Laboratory Practice, and Good Clinical Practice (GxP). GxP guidelines are established by the US Food and Drug Administration (FDA), and exist to ensure safe development and manufacturing of medical devices, pharmaceuticals, biologics, and other food and medical products.

The first section of this whitepaper outlines the AWS services and organizational approach to security, along with the compliance that supports GxP requirements, as part of the Shared Responsibility Model and as it relates to the AWS Quality System for Information Security Management. After establishing this information, the whitepaper provides information to assist you in using AWS services to implement GxP-compliant environments. Many customers already leverage industry guidance to influence their regulatory interpretation of GxP requirements. Therefore, the primary industry guidance used to form the basis of this whitepaper is the GAMP (Good Automated Manufacturing Practice) guidance from ISPE (International Society for Pharmaceutical Engineering), in effect a type of Good Cloud Computing Practice.
While the following content provides information on the use of AWS services in GxP environments, you should ultimately consult with your own counsel to ensure that your GxP policies and procedures satisfy regulatory compliance requirements. Whitepapers containing more specific information about AWS products, privacy, and data protection considerations are available at https://aws.amazon.com/compliance/.

About AWS

In 2006, Amazon Web Services (AWS) began offering on-demand IT infrastructure services to businesses in the form of web services with pay-as-you-go pricing. Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in countries around the world. Using AWS, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster. Offering over 175 fully featured services from data centers globally, AWS gives you the ability to take advantage of a broad set of global cloud-based products, including compute, storage, databases, networking, security, analytics, mobile, developer tools, management tools, IoT, and enterprise applications. AWS's rapid pace of innovation allows you to focus on what's most important to you and your end users, without the undifferentiated heavy lifting.

AWS Healthcare and Life Sciences

AWS started its dedicated Genomics and Life Sciences Practice in 2014, in response to the growing demand for an experienced and reliable life sciences cloud industry leader. Today, the AWS Life Sciences Practice team consists of members who have been in the industry on average for over 17 years, with previous titles such as Chief Medical Officer, Chief Digital Officer, Physician, Radiologist, and Researcher, among many others. The AWS Genomics and Life Sciences practice serves a large ecosystem of life sciences customers, including pharmaceutical, biotechnology, medical device, and genomics start-ups, university and government institutions, as well as healthcare payers and providers. A full list of customer case studies can be found at https://aws.amazon.com/health/customer-stories.

In addition to the resources available within the Genomics and Life Sciences practice at AWS, you can also work with AWS Life Sciences Competency Partners to drive innovation and improve efficiency across the life sciences value chain, including cost-effective storage and compute capabilities, advanced analytics, and patient personalization mechanisms. AWS Life Sciences Competency Partners have demonstrated technical expertise and customer success in building life science solutions on AWS. A full list of AWS Life Sciences Competency Partners can be found at https://aws.amazon.com/health/lifesciences-partner-solutions.

AWS Services

Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability, providing the tools that enable you to run a wide range of applications. Helping to protect the confidentiality, integrity, and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence.

Similar to other general-purpose IT products, such as operating systems and database engines, AWS offers commercial off-the-shelf (COTS) IT services according to IT quality and security standards such as ISO, NIST, SOC, and many others.
For purposes of this paper, we will use the definition of COTS in accordance with the definition established by FedRAMP, a United States government-wide program for procurement and security assessment. FedRAMP references the US Federal Acquisition Regulation (FAR) for its definition of COTS, which outlines COTS items as:

• Products or services that are offered and sold competitively, in substantial quantities, in the commercial marketplace, based on an established catalog
• Offered without modification or customization
• Offered under standard commercial terms and conditions

Under GAMP guidelines (such as GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems), organizations implementing GxP-compliant environments will need to categorize AWS services using the respective GAMP software and hardware categories (e.g., Software Category 1 for Infrastructure Software, including operating systems, database managers, and security software, or Category 5 for custom or bespoke software). Most often, organizations utilizing AWS services for validated applications will categorize them under Software Category 1.

AWS offers products falling into several categories. Below is a subset of those AWS offerings, spanning Compute; Storage; Database; Networking and Content Delivery; and Security, Identity, and Compliance. A later section of this whitepaper, AWS Products in GxP Systems, provides information to assist you in using AWS services to implement your GxP-compliant environments.

Table 1: Subset of AWS offerings by group

Compute – Amazon EC2, Amazon EC2 Auto Scaling, Amazon Elastic Container Registry, Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, Amazon Lightsail, AWS Batch, AWS Elastic Beanstalk, AWS Fargate, AWS Lambda, AWS Outposts, AWS Serverless Application Repository, AWS Wavelength, VMware Cloud on AWS

Storage – Amazon Simple Storage Service (Amazon S3), Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon FSx for Lustre, Amazon FSx for Windows File Server, Amazon S3 Glacier, AWS Backup, AWS Snow Family, AWS Storage Gateway, CloudEndure Disaster Recovery

Database – Amazon Aurora, Amazon DynamoDB, Amazon DocumentDB, Amazon ElastiCache, Amazon Keyspaces, Amazon Neptune, Amazon Quantum Ledger Database (Amazon QLDB), Amazon RDS, Amazon RDS on VMware, Amazon Redshift, Amazon Timestream, AWS Database Migration Service

Networking and Content Delivery – Amazon VPC, Amazon API Gateway, Amazon CloudFront, Amazon Route 53, AWS PrivateLink, AWS App Mesh, AWS Cloud Map, AWS Direct Connect, AWS Global Accelerator, AWS Transit Gateway, Elastic Load Balancing

Security, Identity, and Compliance – AWS Identity & Access Management (IAM), Amazon Cognito, Amazon Detective, Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Artifact, AWS Certificate Manager, AWS CloudHSM, AWS Directory Service, AWS Firewall Manager, AWS Key Management Service, AWS Resource Access Manager, AWS Secrets Manager, AWS Security Hub, AWS Shield, AWS Single Sign-On, AWS WAF

Details and specifications for the full portfolio of AWS products are available online at https://aws.amazon.com/.

AWS Cloud Security

AWS infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today. It is designed to provide an extremely scalable, highly reliable platform that enables customers to deploy applications and data quickly and securely. This infrastructure is built and managed not only according to security best practices and standards, but also with the unique needs of the cloud in mind.
needs of the cloud in mind AWS uses redundant and layered controls continuous validation and testing and a substantial amount of automation to ensure that the underlying infrastructure is monitored and protected 24x7 Amazon Web Services GxP Systems on AWS 5 We have many customer testimonials that highlight the security benefits of using the AWS cloud in that the security capabilities provided by AWS far exceed the customer’s own on premises capabilities “We had heard urban legends about ‘security issues in the cloud’ but the more we looked into AWS the more it was obvious to us that AWS is a secure environment and we would be able to use it with peace of mind” Yoshihiro Moriya Certified Information System Auditor at Ho ya “There was no way we could achieve the security certification levels that AWS has We have great confidence in the logical separation of customers in the AWS Cloud particularly through Amazon VPC which allows us to customize our virtual networking environment to meet our specific requirements” Michael Lockhart IT Infrastructure Manager at GPT “When you’re in telehealth and you touch protected health information security is paramount AWS is absolutely critical to do what we do today Security and compliance are table stakes If you don’t have those the rest doesn’t matter" Cory Costley Chief Product Officer Avizia Many more customer testimonials including those from health and life science companies can be found here: https://awsamazoncom/compliance/testimonials/ IT Security is often not the core business of our customers IT departments operate on limited budgets and do a good job of securing their data cente rs and software given limited resources In the case of AWS security is foundational to our core business and so significant resources are applied to ensuring the security of the cloud and helping our customers ensure security in the cloud as described f urther below Amazon Web Services GxP Systems on AWS 6 Shared Security Responsibility Model Security and Compliance is a shared responsibility between AWS and the customer This shared model can help relieve your operational burden as AWS operates manages and controls the components from the hos t operating system and virtualization layer down to the physical security of the facilities in which the service operates Customers assume responsibility and management of the guest operating system (including updates and security patches) other associat ed application software as well as the configuration of the AWS provided security group firewall You should carefully consider the services you choose as your responsibilities vary depending on the services used the integration of those services into your IT environment and applicable laws and regulations The following figure provides an overview of the shared responsibility model This differentiation of responsibility is c ommonly referred to as Security “of” the Cloud versus Security “in” the Cloud which will be explained in more detail below Figure 1: AWS Shared Responsibility Model AWS is responsible for the security and compliance of the Cloud the infrastructure that runs all of the services offered in the AWS Cloud Cloud security at AWS is the highest priority AWS customers benefit from a data center and network architecture tha t are built to meet the requirements of the most security sensitive organizations This Amazon Web Services GxP Systems on AWS 7 infrastructure consists of the hardware software networking and facilities that run AWS Cloud services Customers are 
responsible for the security and compliance in the Cloud, which consists of customer-configured systems and services provisioned on AWS. Responsibility within the AWS Cloud is determined by the AWS Cloud services that you select and, ultimately, the amount of configuration work you must perform as part of your security responsibilities. For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires you to perform all of the necessary security configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by you on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance. For abstracted services such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. You are responsible for managing your data and component configuration (including encryption options), classifying your assets, and using IAM tools to apply the appropriate permissions.
The AWS Shared Security Responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between you and AWS, so is the management, operation, and verification of IT controls shared. AWS can help relieve your burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by you. As every customer is deployed differently in AWS, you can take advantage of shifting management of certain IT controls to AWS, which results in a (new) distributed control environment. You can then use the AWS control and compliance documentation available to you, as well as techniques discussed later in this whitepaper, to perform your control evaluation and verification procedures as required. Below are examples of controls that are managed by AWS, by AWS customers, and/or by both.
Inherited Controls – Controls which you fully inherit from AWS
• Physical and Environmental controls
Shared Controls – Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and you must provide your own control implementation within your use of AWS services. Examples include:
• Patch Management – AWS is responsible for patching and fixing flaws within the infrastructure, but you are responsible for patching your guest OS and applications
• Configuration Management – AWS maintains the configuration of its infrastructure devices, but you are responsible for configuring your own guest operating systems, databases, and applications
• Awareness & Training – AWS trains AWS employees, but you must train your own employees
Customer Specific – Controls which are ultimately your responsibility, based on the application you are deploying within AWS services. Examples include:
• Data Management – for instance, placement of data on Amazon S3, where you activate encryption
While certain controls are customer specific, AWS strives to provide you with the tools and resources to make implementation easier.
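As a concrete illustration, the minimal sketch below (Python with boto3) shows one way the Data Management example above might be implemented for a hypothetical bucket: enabling default server-side encryption with a customer-managed KMS key and blocking public access, then reading the settings back so the output can be retained as qualification evidence. The bucket name and KMS key alias are illustrative assumptions, not values defined by this whitepaper, and your own control implementation and evidence requirements may differ.

```python
import json
import boto3

s3 = boto3.client("s3")          # assumes AWS credentials and a default Region are configured
bucket = "example-gxp-records"   # hypothetical bucket name, for illustration only

# Customer-side control: encrypt objects at rest by default with a customer-managed KMS key
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/example-gxp-key",  # illustrative key alias
            }
        }]
    },
)

# Customer-side control: prevent any form of public access to the regulated data
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Read the effective configuration back; the printed output can be attached to your records
print(json.dumps(s3.get_bucket_encryption(Bucket=bucket)["ServerSideEncryptionConfiguration"], indent=2))
print(json.dumps(s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"], indent=2))
```

The same pattern (configure, then query and record the effective state) can be applied to other customer-specific controls such as IAM permissions or DynamoDB encryption settings.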
For further information about AWS physical and operational security processes for the network and server infrastructure under the management of AWS, see the AWS Cloud Security site. For customers who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS), see the Best Practices for Security, Identity, & Compliance.
AWS Certifications and Attestations
The AWS global infrastructure is designed and managed according to security best practices as well as a variety of security compliance standards. With AWS, you can be assured that you are building web architectures on top of some of the most secure computing infrastructure in the world. The IT infrastructure that AWS provides to you is designed and managed in alignment with security best practices and a variety of IT security standards, including the following that life science customers may find most relevant:
• SOC 1, 2, 3
• ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018
• HITRUST
• FedRAMP
• CSA Security, Trust & Assurance Registry (STAR)
There are no specific certifications for GxP compliance for cloud services to date; however, the controls and guidance described by this whitepaper, in conjunction with additional resources supplied by AWS, provide information on AWS service GxP compatibility that will assist you in designing and building your own GxP-compliant solutions. AWS provides on-demand access to security and compliance reports and select online agreements through AWS Artifact, with reports accessible via AWS customer accounts under NDA. AWS Artifact is a go-to central resource for compliance-related information and is a place that you can go to find additional information on the AWS compliance programs described further below.
SOC 1, 2, 3
AWS System and Organization Controls (SOC) Reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help you and your auditors understand the AWS controls established to support operations and compliance. The SOC 1 reports are designed to focus on controls at a service organization that are likely to be relevant to an audit of a user entity's financial statements. The AWS SOC 1 report is designed to cover specific key controls likely to be required during a financial audit, as well as covering a broad range of IT general controls to accommodate a wide range of usage and audit scenarios. The AWS SOC 1 control objectives include security organization, employee user access, logical security, secure data handling, physical security and environmental protection, change management, data integrity, availability and redundancy, and incident handling.
The SOC 2 report is an attestation report that expands the evaluation of controls to the criteria set forth by the American Institute of Certified Public Accountants (AICPA) Trust Services Principles. These principles define leading practice controls relevant to security, availability, processing integrity, confidentiality, and privacy applicable to service organizations such as AWS. The AWS SOC 2 is an evaluation of the design and operating effectiveness of controls that meet the criteria for the security and availability principles set forth in the AICPA's Trust Services Principles criteria. This report provides additional transparency into AWS security and availability based on a pre-defined industry standard of leading practices, and further demonstrates the commitment of AWS to protecting customer data. The SOC 2 report information includes outlining AWS controls, a description of AWS Services
relevant to security availability and Amazon Web Services GxP Systems on AWS 10 confidentiality as well as test results against controls You will likely find the SOC 2 report to be the most detailed and r elevant SOC report as it relates to GxP compliance AWS publishes a Service Organization Controls 3 (SOC 3) report The SOC 3 report is a publicly available summary of the AWS SOC 2 report The report includes the external auditor’s assessment of the opera tion of controls (based on the AICPA’s Security Trust Principles included in the SOC 2 report) the assertion from AWS management regarding the effectiveness of controls and an overview of AWS Infrastructure and Services FedRAMP The Federal Risk and Aut horization Management Program (FedRAMP) is a US government wide program that delivers a standard approach to the security assessment authorization and continuous monitoring for cloud products and services FedRAMP uses the NIST Special Publication 800 se ries and requires cloud service providers to receive an independent security assessment conducted by a third party assessment organization (3PAO) to ensure that authorizations are compliant with the Federal Information Security Management Act (FISMA) For AWS Services in Scope for FedRAMP assessment and authorization see https://awsamazoncom/compliance/services inscope/ ISO 9001 ISO 9001:2015 outlines a process oriented approach to d ocumenting and reviewing the structure responsibilities and procedures required to achieve effective quality management within an organization Specific sections of the standard contain information on topics such as: • Requirements for a quality management system (QMS) including documentation of a quality manual document control and determining process interactions • Responsibilities of management • Management of resources including human resources and an organization’s work environment • Service development including the steps from design to delivery • Customer satisfaction • Measurement analysis and improvement of the QMS through activities like internal audits and corrective and preventive actions Amazon Web Services GxP Systems on AWS 11 The AWS ISO 9001:2015 certification directly supports custome rs who develop migrate and operate their quality controlled IT systems in the AWS cloud You can leverage AWS compliance reports as evidence for your own ISO 9001:2015 programs and industry specific quality programs such as GxP in life sciences and ISO 1 31485 in medical devices ISO/IEC 27001 ISO/IEC 27001:2013 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer information that’s based on periodic risk asses sments appropriate to ever changing threat scenarios In order to achieve the certification a company must show it has a systematic and ongoing approach to managing information security risks that affect the confidentiality integrity and availability of company and customer information This widely recognized international security standard specifies that AWS do the following: • We s ystematically evaluate AWS information security risks taking into account the impact of threats and vulnerabilities • We d esign and implement a comprehensive suite of information security controls and other forms of risk management to address customer and architecture security risks • We have an overarching management process to ensure that the information security controls meet our needs on an ongoing basis AWS has achieved ISO 27001 
certification of the Information Security Management System (ISMS) covering AWS infrastructure data centers and services ISO/IEC 27017 ISO/IEC 27017:2015 provides guidance on the information security aspects of cloud computing recommending the implementation of cloud specific information security controls that supplement the guidance of the ISO/IEC 27002 and ISO/IEC 27001 standards This code of practice provides additional inform ation security controls implementation guidance specific to cloud service providers The AWS attestation to the ISO/IEC 27017:2015 standard not only demonstrates an ongoing commitment to align with globally recognized best practices but also verifies that AWS has a system of highly precise controls in place that are specific to cloud services Amazon Web Services GxP Systems on AWS 12 ISO/IEC 27018 ISO 27018 is the first International code of practice that focuses on protection of personal data in the cloud It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII) It also provides a set of additional controls and associated guidance intended to address public cloud PII prot ection requirements not addressed by the existing ISO 27002 control set AWS has achieved ISO 27018 certification an internationally recognized code of practice which demonstrates the commitment of AWS to the privacy and protection of your content HITRU ST The Health Information Trust Alliance Common Security Framework (HITRUST CSF) leverages nationally and internationally accepted standards and regulations such as GDPR ISO NIST PCI and HIPAA to create a comprehensive set of baseline security and priv acy controls HITRUST has developed the HITRUST CSF Assurance Program which incorporates the common requirements methodology and tools that enable an organization and its business partners to take a consistent and incremental approach to managing compli ance Further it allows business partners and vendors to assess and report against multiple sets of requirements Certain AWS services have been assessed under the HITRUST CSF Assurance Program by an approved HITRUST CSF Assessor as meeting the HITRUST CS F Certification Criteria The certification is valid for two years describes the AWS services that have been validated and can be accessed at https://awsamazoncom/compliance/hitrust/ You may l ook to leverage the AWS HITRUST CSF certification of AWS services to support your own HITRUST CSF certification in complement to your GxP compliance programs CSA Security Trust & Assurance Registry (STAR) In 2011 the Cloud Security Alliance (CSA) launched STAR an initiative to encourage transparency of security practices within cloud providers The CSA Security Trust & Assur ance Registry (STAR) is a free publicly accessible registry that documents the security controls provided by various cloud computing offerings thereby helping users assess the security of cloud providers they currently use or are considering Amazon Web Services GxP Systems on AWS 13 AWS partic ipates in the voluntary CSA Security Trust & Assurance Registry (STAR) SelfAssessment to document AWS compliance with CSA published best practices AWS publish es the completed CSA Consensus Assessments Initiative Questionnaire (CAIQ) on the AWS website Infrastructure Description and Controls Cloud Models (Nature of the Cloud) Cloud computing is the on demand delivery of compute power da tabase storage applications and other IT 
resources through a cloud services platform via the Internet with pay asyougo pricing As cloud computing has grown in popularity several different models and deployment strategies have emerged to help meet spe cific needs of different users Each type of cloud service and deployment method provides you with different levels of control flexibility and management Cloud Computing Models Infrastructure as a Service (IaaS) Infrastructure as a Service (IaaS) contai ns the basic building blocks for cloud IT and typically provides access to networking features computers (virtual or on dedicated hardware) and data storage space IaaS provides you with the highest level of flexibility and management control over your IT resources and is most similar to existing IT resources that many IT departments and developers are familiar with today (eg Amazon Elastic Compute Cloud (Amazon EC2)) Platform as a Service (PaaS) Platform as a Service (PaaS) removes the need for organi zations to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications (eg AWS Elastic Beanstalk) This helps you be more efficient as you don’t need to worry about resource procurement capacity planning software maintenance patching or any of the other undifferentiated heavy lifting involved in running your application Software as a Service (SaaS) Software as a Service (SaaS) provides you with a completed product that is run and managed by the service provider In most cases people referring to Software as a Service are referring to end user applications (eg Amazon Connect) With a SaaS offering you do not have to think about how the se rvice is maintained or how the Amazon Web Services GxP Systems on AWS 14 underlying infrastructure is managed; you only need to think about how you will use that particular piece of software A common example of a SaaS application is web based email which can be used to send and receive email with out having to manage feature additions to the email product or maintain the servers and operating systems on which the email program is running Cloud Computing Deployment Models Cloud A cloud based application is fully deployed in the cloud and all parts of the application run in the cloud Applications in the cloud have either been created in the cloud or have been migrated from an existing infrastructure to take advantage of the benefits of cloud computing ( https://awsamazoncom/what iscloud computing/ ) Cloud based applications can be built on low level infrastructure pieces or can use higher level services that provide abstraction from the management architecting and scaling requi rements of core infrastructure Hybrid A hybrid deployment is a way to connect infrastructure and applications between cloud based resources and existing resources that are not located in the cloud The most common method of hybrid deployment is between t he cloud and existing on premises infrastructure to extend and grow an organization's infrastructure into the cloud while connecting cloud resources to the internal system For more information on how AWS can help you with hybrid deployment visit the AW S hybrid page (https://awsamazoncom/hybrid/ ) Onpremises The deployment of resources on premises using virtualization and resource management tools is sometimes sought for its ability to provide dedicated resources (https://awsamazoncom/hybrid/ ) In most cases this deployment model is the same as legacy IT infrastructure while using application 
management and virtualization technologies to try and increase r esource utilization Security Physical Security Amazon has many years of experience in designing constructing and operating large scale data centers This experience has been applied to the AWS platform and infrastructure AWS data centers are housed in facilities that are not branded as AWS Amazon Web Services GxP Systems on AWS 15 facilities Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance intrusion detection systems and other electronic me ans Authorized staff must pass two factor authentication a minimum of two times to access data center floors All visitors are required to present identification and are signed in and continually escorted by authorized staff AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges When an employee no longer has a business need for these privileges his or her access is immediately revoked even if they continue to be an empl oyee of Amazon or Amazon Web Services All physical access to data centers by AWS employees is logged and audited routinely Additional information on infrastructure security may be found on the webpage on AWS Data Center controls Single or Multi Tenant Environments As cloud technology has rapidly evolved over the past decade one fundamental technique used to maximize physical resources as well as lower customer costs has been to offer multi tenant services to cloud customers To facilitate this architecture AWS has developed and implemented powerful and flexible logical security controls to create strong isolation boundaries between customers Security is job zero at AWS and you will find a rich history of AWS steadily enhancing its features and controls to help customers achieve their security posture requirements such as GxP Coming from operating an on premises environment you will often find that CSPs like AWS enable you to effectively optimize your security configurations in the cloud compared to your onpremises solutions The AWS logical security capabilities as well as security controls in place address the concerns driving physical separation to protect your data The pro vided isolation combined with the automation and flexibility added offers a security posture that matches or bests the security controls seen in traditional physically separated environments Additional detailed information on logical separation on AWS ma y be found in the Logical Separation on AWS whitepaper Amazon Web Services GxP Systems on AWS 16 Cloud Infrastructure Qualification Activities Geography AWS serves over a million active customers i n more than 200 countries As customers grow their businesses AWS will continue to provide infrastructure that meets their global requirements The AWS Cloud infrastructure is built around AWS Regions and Availability Zones An AWS Region is a physical l ocation in the world which has multiple Availability Zones Availability Zones consist of one or more discrete data centers each with redundant power networking and connectivity housed in separate facilities These Availability Zones offer you the abil ity to operate production applications and databases that are more highly available fault tolerant and scalable than would be possible from a single data center The AWS Cloud operates in over 70 Availability Zones within over 20 geographic Regions aroun d the world with announced plans 
for more Availability Zones and Regions. For more information on the AWS Cloud Availability Zones and AWS Regions, see AWS Global Infrastructure.
Each Amazon Region is designed to be completely isolated from the other Amazon Regions. This achieves the greatest possible fault tolerance and stability. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. AWS provides customers with the flexibility to place instances and store data within multiple geographic Regions, as well as across multiple Availability Zones within each AWS Region. Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains (specific flood zone categorization varies by AWS Region). In addition to discrete uninterruptible power supply (UPS) and onsite backup generation facilities, they are each fed via different grids from independent utilities to further reduce single points of failure. Availability Zones are all redundantly connected to multiple tier-1 transit providers.
Data Locations
Where geographic limitations apply, unlike other cloud providers who often define a region as a single data center, the multiple Availability Zone (AZ) design of every AWS Region offers you advantages. If you are focused on high availability, you can design your applications to run in multiple AZs to achieve even greater fault tolerance. AWS infrastructure Regions meet the highest levels of security, compliance, and data protection. If you have data residency requirements, you can choose the AWS Region that is in close proximity to your desired location. You retain complete control and ownership over the Region in which your data is physically located, making it easy to meet regional compliance and data residency requirements. In addition, for moving on-premises data to AWS for migrations or ongoing workflows, the following AWS website on Cloud Data Migration describes the various tools and services that you may use to ensure data onshoring compliance, including:
• Hybrid cloud storage (AWS Storage Gateway, AWS Direct Connect)
• Online data transfer (AWS DataSync, AWS Transfer Family, Amazon S3 Transfer Acceleration, AWS Snowcone, Amazon Kinesis Data Firehose, APN Partner Products)
• Offline data transfer (AWS Snowcone, AWS Snowball, AWS Snowmobile)
Capacity
When it comes to capacity planning, AWS examines capacity at both a service and rack usage level. The AWS capacity planning process also automatically triggers the procurement process for approval, so that AWS doesn't have additional lag time to account for, and AWS relies on capacity planning models, informed in part by customer demand, to trigger new data center builds. AWS enables you to reserve instances so that capacity is guaranteed in the Region(s) of your choice. AWS uses the number of reserved instances to inform planning for FOOB (future out of bound).
Uptime
AWS maintains SLAs (Service Level Agreements) for various services across the platform, which at the time of this writing include a guaranteed monthly uptime percentage of at least 99.99% for Amazon EC2 and Amazon EBS within a Region. A full list of AWS SLAs can be found at https://aws.amazon.com/legal/service-level-agreements/. In addition, Amazon Web Services publishes the most up-to-the-minute information on service availability in the AWS Service Health Dashboard (https://status.aws.amazon.com/). It is important to note that, as part of the shared security responsibility model, it is your responsibility to architect your application for resilience based on your organization's requirements.
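As one way to support that responsibility, the short sketch below (Python with boto3, consistent with the earlier example) shows how a team might pin a workload to a chosen Region, enumerate the Availability Zones available for a multi-AZ deployment, and confirm where a hypothetical bucket of regulated records physically resides. The Region and bucket name are illustrative assumptions only, not recommendations from this whitepaper.

```python
import boto3

# Illustrative Region chosen to satisfy a hypothetical data residency requirement
session = boto3.Session(region_name="eu-central-1")

# List the Availability Zones usable for a multi-AZ (fault-tolerant) deployment in this Region
ec2 = session.client("ec2")
zones = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)["AvailabilityZones"]
print("Availability Zones:", [z["ZoneName"] for z in zones])

# Confirm the Region in which a (hypothetical) bucket of regulated records is stored;
# the response can be retained as evidence for data residency assessments
s3 = session.client("s3")
location = s3.get_bucket_location(Bucket="example-gxp-records")["LocationConstraint"]
print("Bucket Region:", location)
```

Spreading compute and storage across at least two of the returned zones is a common starting point for the fault tolerance described above.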
AWS Quality Management System
Life Science customers with obligations under GxP requirements need to ensure that quality is part of manufacturing and controls during the design, development, and deployment of their GxP-regulated product. This quality assurance includes an appropriate assessment of cloud service suppliers like AWS to meet the obligations of your quality system. For a deeper description of the AWS Quality Management System, you may use AWS Artifact to access additional documents under NDA. Below, AWS provides information on some of the concepts and components of the AWS Quality System of most interest to GxP customers like you.
Quality Infrastructure and Support Processes
Quality Management System Certification
AWS has undergone a systematic, independent examination of our quality system to determine whether the activities and activity outputs comply with ISO 9001:2015 requirements. A certifying agent found our quality management system (QMS) to comply with the requirements of ISO 9001:2015 for the activities described in the scope of registration. The AWS quality management system has been certified to ISO 9001 since 2014. The reports cover six-month periods each year (April through September / October through March). New reports are released in mid-May and mid-November. To see the AWS ISO 9001 registration certification, certification body information, as well as date of issuance and renewal, please see the information on the ISO 9001 AWS compliance program website: https://aws.amazon.com/compliance/iso-9001-faqs/. The certification covers the QMS over a specified scope of AWS services and Regions of operations. If you are pursuing ISO 9001:2015 certification while operating all or part of your IT systems in the AWS cloud, you are not automatically certified by association; however, using an ISO 9001:2015 certified provider like AWS can make your certification process easier. AWS provides additional detailed information on the quality management system, accessible within AWS Artifact via customer accounts in the AWS console (https://aws.amazon.com/artifact/).
Software Development Approach
AWS's strategy for design and development of AWS services is to clearly define services in terms of customer use cases, service performance, marketing and distribution requirements, production and testing, and legal and regulatory requirements. The design of all new services, or any significant changes to current services, is controlled through a project management system with multi-disciplinary participation. Requirements and service specifications are established during service development, taking into account legal and regulatory requirements, customer contractual commitments, and requirements to meet the confidentiality, integrity, and availability of the service, in alignment with the quality objectives established within the quality management system. Service reviews are completed as part of the development process, and these reviews include evaluation of security, legal, and regulatory impacts, and customer contractual commitments. Prior to launch, each of the following requirements must be complete:
• Security Risk Assessment
• Threat modeling
• Security design reviews
• Secure code reviews
• Security testing
• Vulnerability/penetration testing
AWS implements open
source software or custom code within its services All open source software to include binary or machine executable code from third parties is reviewed and approved by the Open Source Group prior to implementation and has source code that is publicly accessible AWS service teams are prohibited from implementing code from third parties unless it has been approved through the open source review All code developed by AWS is available for review by the applicable service team as well as AWS Security By its nature open source code is available for review by the Open Source Group prior to granting authorization for use within Amazon Quality Proc edures In addition to the software hardware human resource and real estate assets that are encompassed in the scope of the AWS quality management system supporting the development and operations of AWS services it also includes documented information including but not limited to source code system documentation and operational policies and procedures AWS implements formal documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments Policies address purpose scope roles responsibilities and management Amazon Web Services GxP Systems on AWS 20 commitment All policies are maintained in a centralized location that is accessible by employees Project Management Processes The design of new service s or any significant changes to current services follow secure software development practices and are controlled through a project management system with multi disciplinary participation Quality Organization Roles AWS Security Assurance is responsible for familiarizing employees with the AWS security policies AWS has established information security functions that are aligned with defined structure reporting lines and responsibilities Leadership involvement provides clear direction and visible support for security initiatives AWS has established a formal audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment AWS maintains a documen ted audit schedule of internal and external assessments The needs and expectations of internal and external parties are considered throughout the development implementation and auditing of the AWS control environment Parties include but are not limite d to: • AWS customers including current customers and potential customers • External parties to AWS including regulatory bodies such as the external auditors and certifying agents • Internal parties such as AWS services and infrastructure teams security and overarching administrative and corporate teams Quality Project Planning and Reporting The AWS planning process defines service requirements requirements for projects and contracts and ensures customer needs and expectations are met or exceeded Planning is achieved through a combination of business and service planning project teams quality improvement plans review of service related metrics and documentation selfassessments and supplier audits and employee training The AWS quality system is documented to ensure that planning is consistent with all other requirements AWS continuously monitors service usage to project infrastructure needs to support availability commitments and requirements AWS maintains a capacity planning model Amazon Web Services GxP Systems on AWS 21 to assess infrastructure usage and demands at least monthly 
and usually more frequently In addition the AWS capacity planning model supports the planning of future demands to acquire and implement additional resources based upon current resources and forecasted requirements Electronics Records and Electronic Signatures In the United States (US) GxP regulations are enforced by the US Food and Drug Administration (FDA) and are contained in Title 21 of the Code of Federal Regulations (21 CFR) Within 21 CFR Part 11 contains the requirements for computer systems that create modify maintain archive retrieve or distribute electronic records and electronic signatures in support of GxP regulated activities (and in the EU EudraLex Volume 4 Good Manufacturing Practice (GMP) guidelines – Annex 11 Computerised Systems) Part 11 was created to permit the adoption of new information technologies by FDA regulated life sciences organizations while simultaneously providing a framework to ensure that the electronic GxP data is trustworthy and reliable There is no GxP certification for a commercial cloud provider such as AWS AWS offers commercial off theshelf (COTS) IT services according to IT quality and security standards such as ISO 27001 ISO 27017 ISO 27018 ISO 9001 NIST 800 53 and many others GxP regulated life sciences customers like you are responsible for purchasing and using AWS services to develop and operate your GxP sys tems and to verify your own GxP compliance and compliance with 21 CFR 11 This document used in conjunction with other AWS resources noted throughout may be used to support your electronic records and electronic signatures requirements A further desc ription of the shared responsibility model as it relates to your use of AWS services in alignment with 21 CFR 11 can be found in the Appendix Company SelfAssessments AWS Security Assurance monitors the implementation and maintenance of the quality management system by performing verification activities through the AWS audit program to ensure compliance suitability and effectiveness of the quality management system The AWS audit program includes selfassessment s third party accreditation audits and supplier audits The objective of these audits are to evaluate the operating effectiveness of the AWS quality management system Selfassessment s are performed periodically Audits by third part ies for accreditation are conducted to review the continued performance of AWS against standards based criteria and to identify general improvement opportunities Supplier audits are performed to assess the supplier’s potential for pro viding services or material that conform to AWS supply requirements Amazon Web Services GxP Systems on AWS 22 AWS maintains a documented schedule of all assessments to ensure implementation and operating effectiveness of the AWS control environment to meet various objectives Contract Reviews AWS offers Services for sale under a standardized customer agreement that has been reviewed to ensure the Services are accurately represented properly promoted and fairly priced Please contact your account team if you have questions about AWS service ter ms Corrective and Preventative Actions AWS takes action to eliminate the cause of nonconformities within the scope of the quality management system in order to prevent recurrence The following procedure is followed when taking corrective and preventiv e actions: 1 Identify the specific nonconformities; 2 Determine the causes of nonconformities; 3 Evaluate the need for actions to ensure that nonconformities do not recur; 4 Determine and implement 
the corrective action(s) needed; 5 Record results of action(s) taken ; 6 Review of the corrective action(s) taken 7 Determine and implement preventive action needed; 8 Record results of action taken; and 9 Review of preventive action The records of corrective actions may be reviewed during regularly scheduled AWS management meeti ngs Customer Complaints AWS relies on procedures and specific metrics to support you Customer reports and complaints are investigated and where required actions are taken to resolve them You can contact AWS at https://awsamazoncom/contact us/ or speak directly with your account team for support Amazon Web Services GxP Systems on AWS 23 Third Party Management AWS maintains a supplier management team to foster third party relationships and monitor thi rd party performance SLAs and SLOs are implemented to monitor performance AWS creates and maintains written agreements with third parties (for example contractors or vendors) in accordance with the work or service to be provided (for example network s ervices service delivery or information exchange) and implements appropriate relationship management mechanisms in line with their relationship to the business AWS monitors the performance of third parties through periodic reviews using a risk based app roach which evaluate performance against contractual obligations Training Records Personnel at all levels of AWS are experienced and receive training in the skill areas of the jobs and other assigned training Training needs are identified to ensure tha t training is continuously provided and is appropriate for each operation (process) affecting quality Personnel required to work under special conditions or requir ing specialized skills are trained to ensure their competency Records of training and certi fication are maintained to verify that individuals have appropriate training AWS has developed documented and disseminated role based security awareness training for employees responsible for designing developing implementing operating maintaining and monitoring the system affecting security and availability and provides resources necessary for employees to fulfill their responsibilities Training includes but is not limited to the following information (when relevant to the employee’s role): • Workforce conduct standards • Candidate background screening procedures • Clear desk policy and procedures • Social engineering phishing and malware • Data handling and protection • Compliance commitments • Use of AWS security tools • Security precautions while travel ing • How to report security and availability failures incidents concerns and other complaints to appropriate personnel Amazon Web Services GxP Systems on AWS 24 • How to recognize suspicious communications and anomalous behavior in organizational information systems • Practical exercises that reinforce training objectives • HIPAA responsibilities Personnel Records AWS performs periodic formal evaluation s of resourcing and staffing including an assessment of employee qualification alignment with entity objectives Personnel records are managed th rough an internal Amazon System Infrastruc ture Management The Infrastructure team maintains and operates a configuration management framework to address hardware scalability availability auditing and security management By centrally managing hosts thr ough the use of automated processes that manage change Amazon is able to achieve its goals of high availability repeatability scalability security and disaster recovery Systems 
and network engineers monitor the status of these automated tools on a co ntinuous basis reviewing reports to respond to hosts that fail to obtain or update their configuration and software Internally developed configuration management software is installed when new hardware is provisioned These tools are run on all UNIX host s to validate that they are configured and that software is installed in compliance with standards determined by the role assigned to the host This configuration management software also helps to regularly update packages that are already installed on the host Only approved personnel enabled through the permissions service may log in to the central configuration management servers AWS notifies you of certain changes to the AWS service offerings where appropriate AWS continuously evolves and improves the ir existing services frequently adding new Services or features to existing Services Further as AWS services are controlled using APIs if AWS changes or discontinues any API used to make calls to the Services AWS continues to offer the existing API fo r 12 months (as of this publication) to give you time to adjust accordingly Additionally AWS provides you with a Personal Health Dashboard with service health and status information specific to your account as well as a public Service Health Dashboard t o provide all customers with the real time operational status of AWS services at the regional level at http://statusawsamazoncom Amazon Web Services GxP Systems on AWS 25 Software Development Software Development Processes The Project and Operation stages of the life cycle approach in GAMP for instance are reflected in the AWS information and activities surrounding organizational mechanisms to guide the development and configuration of the information system including software developmen t lifecycles and software change management Elements of the organizational mechanisms include policies and standards the code pathway deployment a change management tool ongoing monitoring security reviews emergency changes management of outsourced and unauthorized development and communication of changes to customers The software development lifecycle activities at AWS include the code development and change management processes at AWS which are centralized across AWS teams developing externally and internally facing code with processes applying to both internal and external service teams Code deployed at AWS is developed and managed in a consistent process regardless of its ultimate destination There are several systems utilized in this proces s including: • A code management system used to assemble a code package as part of development • Internal source code repository • The hosting system in which AWS code pipelines are staged • The tool utilized for automating the testing approval deployment and ongoing monitoring of code • A change management tool which breaks change workflows down into discrete easy to manage steps and tracks change details • A monitoring service to detect unapproved changes to code or configurations in production systems Any variances are escalated to the service owner/team Code Pathway The AWS Code Pathway steps to development and deployment are outlined below This process is executed regardless of whether the code is net new or if it represents a change to a n existing codebase Amazon Web Services GxP Systems on AWS 26 1 Developer writes the code in an approved integrated development environment running on an AWS managed developer desktop environment The 
developer typically does an initial build and integration test prior to the next step 2 The develop er checks in the code for review to an internal source code repository 3 The code goes through a Code Review Verification in which at least one additional person reviews the code and approves it The list of approvals are stored in an immutable log that is retained within the code review tool 4 The code is then built from source code to the appropriate type of deployable code package (which varies from language to language) in an internal build system 5 After successful build including successful passing of a ll integration tests the code gets pushed to a test environment 6 The code goes through automated integration and verification tests in the pre production environments and upon successful testing the code is pushed to production AWS may implement open sou rce code within its Services but any such use of open source code is still subject to the approval packaging review deployment and monitoring processes described above Open source software including binary or machine executable code and open source licenses is additionally reviewed and approved prior to implementation AWS maintains a list of approved open source as well as open source that is prohibited Deployment and Testing A pipeline represents the path approved code packages take from initia l check in through a series of automated (and potentially manual) steps to execution in production The pipeline is where automation testing and approvals happen At AWS the deployment tool is used to create view and enforce code pipelines This tool is utilized to promote the latest approved revision of built code to the production environment A major factor in ensuring safe code deployment is deploying in controlled stages and requiring continuous approvals prior to pushing code to production As p art of the deployment process pipelines are configured to release to test environments (eg “beta” “gamma” and others as defined by the team) prior to pushing the code to the production environment Automated quality testing (eg integration testing structural Amazon Web Services GxP Systems on AWS 27 testing behavioral testing ) is performed in these environments to ensure code is performing as anticipated If code is found to deviate from standards the release is halted and the team is notified of the need to review These development and test environments emulate the production environment and are used to properly assess and prepare for the impact of a change to the production environment In order to reduce the risks of unauthorized access or change to the production environment the dev elopment test and production environments are all logically separated The tool additionally enforces phased deployment if the code is to be deployed across multiple regions Should a package include deployment for more than one AWS region the pipelin e will enforce deployment on a single region basis If the package were to fail integration tests at any region the pipeline is halted and the team is notified for need to review Configuration and Change Management Configuration management is performed during information system design development implementation and operation through the use of the AWS Change Management process Routine emergency and configuration changes to existing AWS infrastructure are autho rized logged tested approved and documented in accordance with industry norms for similar systems Updates to the AWS infrastructure are done to minimize any impact on you 
and your use of the services Software AWS applies a systematic approach to managing change so that changes to customer impacting services are thoroughly reviewed tested approved and well communicated The AWS change management process is designed to avoid unintended service disruptions and to maintain th e integrity of service to you Changes deployed into production environments are: • Prepared: this includes scheduling determining resources creating notification lists scoping dependencies minimizing concurrent changes as well as a special process for e mergent or long running changes • Submitted: this includes utilizing a Change Management Tool to document and request the change determine potential impact conduct a code review create a detailed timeline and activity plan and develop a detailed rollback procedure Amazon Web Services GxP Systems on AWS 28 • Reviewed and Approved: Peer reviews of the technical aspects of a change are required Changes must be authorized in order to provide appropriate oversight and understanding of business and security impact The configuration management process includes key organizational personnel that are responsible for reviewing and approving proposed changes to the information system • Tested : Changes being applied are tested to help ensure they will behave as expected and not adversely impact performance • Performed: This includes pre and post change notification managing timeline monitoring service health and metrics and closing out the change AWS service teams maintain a current authoritative baseline configuration for systems and devices Change Manage ment tickets are submitted before changes are deployed (unless it is an emergency change) and include impact analysis security considerations description timeframe and approvals Changes are pushed into production in a phased deployment starting with lo west impact areas Deployments are tested on a single system and closely monitored so impacts can be evaluated Service owners have a number of configurable metrics that measure the health of the service’s upstream dependencies These metrics are closely m onitored with thresholds and alarming in place Rollback procedures are documented in the Change Management (CM) ticket AWS service teams retain older versions of AWS baseline packages and configurations necessary to support rollback and p revious versions are s tored in the repository systems Integration testing and the validation process is performed before rollbacks are implemented When possible changes are scheduled during regular change windows In addition to the preventative controls that are part of the pipeline (eg code review verifications test environments) AWS also uses detective controls configured to alert and notify personnel when a change is detected that may have been made without standard procedure AWS checks deployments to ensure that they have the appropriate reviews and approvals to be applied before the code is committed to production Exceptions for reviews and approvals for production lead to automatic ticketing and notification of the service team After code is depl oyed to the Production environment AWS performs ongoing monitoring of performance through a variety of monitoring processes AWS host configuration settings are also monitored as part of vulnerability monitoring to validate compliance with AWS security st andards Audit trails of the changes are maintained Emergency changes to production systems that require deviations from standard change management procedures are 
associated with an incident and are logged and Amazon Web Services GxP Systems on AWS 29 approved as appropriate Periodically AWS p erforms self audits of changes to key services to monitor quality maintain high standards and facilitate continuous improvement of the change management process Any exceptions are analyzed to determine the root cause and appropriate actions are taken t o bring the change into compliance or roll back the change if necessary Actions are then taken to address and remediate the process or people issue Reviews AWS performs internal security reviews against Amazon security standards of externally launched pr oducts services and significant feature additions prior to launch to ensure security risks are identified and mitigated before deployment to a customer environment AWS security reviews include evaluating the service’s design threat model and impact to AWS’ risk profile A typical security review starts with a service team initiating a review request to the dedicated team and submitting detailed information about the artifacts being reviewed Based on this information AWS reviews the design and identif ies security considerations; these considerations include but are not limited to: appropriate use of encryption analysis of data handling regulatory considerations and adherence to secure coding practices Hardware firmware and virtualization software also undergo security reviews including a security review of the hardware design actual implementation and final hardware samples Code package changes are subject to the following security activities: • Full security assessment • Threat modeling • Security design reviews • Secure code reviews (manual and automated methods) • Security testing • Vulnerability/penetration testing Success ful completion of the above mentioned activities are pre requisites for Service launch Development teams ar e responsible for the security of the features they develop that meet the security engineering principles Infrastructure teams incorporate security principles into the configuration of servers and network devices with least privilege enforced throughout Findings identified by AWS are categorized in terms of risk and are tracked in an automated workflow tool Amazon Web Services GxP Systems on AWS 30 Product Release For all AWS services information can be found on the associated service website which describes the key attributes of the Servi ce and product details as well as pricing information developer resources (including release notes and developer tools) FAQs blogs presentations and additional documentation such as developer guides API references and use cases where relevant ( https://awsamazoncom/products/ ) Customer Training AWS has implemented various methods of external communication to support its customer base and the community Mechanisms are in place to allow the customer support team to be notified of operational issues that impact your experience A Service Health Dash board is available and maintained by the customer support team to alert you to any issues that may be of broad impact The AWS Cloud Security Center (https://awsamazoncom/security/ ) and Healthcare and Life Sciences Center (https://awsamazoncom/health/ ) is available to provide you with security and compliance details and Life Sciences related enablement information about AWS You can also su bscribe to AWS Support offerings that include direct communication with the customer support team and proactive alerts to any customer impacting issues AWS also has a 
series of training and certification programs ( https://wwwawstraining/ ) on a number of cloud related topics in addition to a series of service and support offerings available through your AWS account team AWS Products in GxP Systems With limited technical guidance from regulatory and industry bod ies this section aims to describe some of the best practices we’ve seen customers adopting when using cloud services to meet their regulatory compliance needs The Final FDA Guidance Document “ Data Integrity and Compliance With Drug CGMP ” explicitly brings cloud infrastructure into scope through the revised definition of “computer or related systems”: “The American National Standards Institute (ANSI) defines systems as people machines and methods organized to accomplish a set of specific functions Computer or related systems can refer to computer hardware software peripheral devices networks cloud infrastructure personnel and associated documents (eg user manuals and standard operating pr ocedures)“ Amazon Web Services GxP Systems on AWS 31 Further industry organizations like ISPE are increasingly dedicating publications on cloud usage in the life sciences ( Getting Ready For Pharma 40: Data integrity in cloud and big data applications ) As described throughout this whitepaper there is no unique certification for GxP regulations so each customer defines their own risk profile Therefore it is important to note that although this whitepaper i s based on AWS experience with life science customers you must take final accountability and determine your own regulatory obligations To begin with even when deployed in the cloud GxP applications still need to be validated and their underlying infras tructure still needs qualifying The basic principles governing on premise infrastructure qualification still apply to virtualized cloud infrastructure Therefore the current industry guidance should still be leveraged Traditionally a regulated company was accountable and responsible for all aspects of their infrastructure qualification and application validation With the introduction of public cloud providers part of that responsibility has been shifted to a cloud supplier The regulated company is st ill accountable but the cloud supplier is now responsible for the qualification of the physical infrastructure virtualization and service layers and to completely manage the services they provide ie the big difference now is that there is a shared com pliance responsibility model which is similar to the shared security responsibility model described earlier in this whitepaper Previous sections of this whitepaper described how AWS takes care of their part of the shared responsibility model This section provides recommended strategies on how to cover your part of the shared responsibility model for GxP environments Involving AWS Achieving GxP compliance when adopting cloud technology is a journey AWS has helped many customers along this journey and th ere is no compression algorithm for experience For example Core Informatics states: “Using AWS we can help organizations accel erate discovery while maintaining GxP compliance It’s transforming our bu siness and more importantly helping our customers tr ansform their businesses” Richard Duffy Vice President of Engineering Core Informatics Amazon Web Services GxP Systems on AWS 32 For the complete case study see Core Informatics Case Study For a selection of other customer case studies see AWS Custom er Success Industry guidance recommends that companies should try and 
Industry guidance recommends that companies should try to maximize supplier involvement and leverage our knowledge, experience, and even our documentation as much as possible, as we provide in the following sections and throughout this whitepaper. Please contact us to discuss starting your journey to the cloud.

Qualification Strategy for Life Science Organizations

One of the concerns for regulated enterprise customers becomes how to qualify and demonstrate control over a system when so much of the responsibility is now shared with a supplier. The purpose of a Qualification Strategy is to answer this question. Some customers view a Qualification Strategy as an overarching Validation Plan. The strategy will employ various tactics to address the regulatory needs of the customer. To better scope the Qualification Strategy, the architecture should be viewed in its entirety. Enterprise-scale customers typically define the architecture similar to the following:

Figure 2: Layered architecture (layers: AWS services, regulated landing zone, building blocks, and applications, spanning AWS responsibility, customer responsibility, and customer accountability)

The diagram illustrates a layered architecture where a large part is delegated to AWS. From this approach, a Qualification Strategy can be defined to address four main areas:

1. How to work with AWS as a supplier of services
2. The qualification of the regulated landing zone
3. The qualification of building blocks
4. Supporting the development of GxP applications

The situation also changes slightly if the customer leverages a service provider like AWS Managed Services, where the build, operation, and maintenance of the landing zone is done by the service provider. Conversely, for workloads that must remain on premises, AWS Outposts extends AWS services, including compute, storage, and networking, to customer sites. Data can be configured to be stored locally, and customers are responsible for controlling access around Outposts equipment. Data that is processed and stored on premises is accessible over the customer's local network. In this case, customer responsibility extends into the AWS services layer (Figure 3).

Figure 3: Layered architecture with service provider (as Figure 2, with a service provider responsibility layer added)

In this situation, even more responsibility is delegated by the customer, and so the controls that are typically put in place by the customer to control their own operations now need adaptations to check that similar controls are implemented by the service provider. The controls that are inherited from AWS, are shared, or that remain with the customer were covered previously in the Shared Security Responsibility Model section of this whitepaper. This section describes these layers at a high level; the layers are expanded upon in later sections of this whitepaper.

Industry Guidance

The following guidance is, at a minimum, a best practice for your environment. You should still work with your professionals to ensure you comply with applicable regulatory requirements. The same basic principles that govern on-premises infrastructure qualification also apply to cloud-based systems. Therefore, this strategy uses a tactic of leveraging and building upon that same industry guidance, using a cloud perspective, based on the following ISPE GAMP Good Practice Guides (Figure 4):

• GAMP Good Practice Guide: IT Infrastructure Control and Compliance, 2nd Edition
• GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems
Figure 4: Mapping industry guidance to architecture layers

Supplier Assessment and Management

Industry guidance suggests you leverage a supplier's experience, knowledge, and documentation as much as possible. However, with so much responsibility now delegated to a supplier, the supplier assessment becomes even more important. A regulated company is still ultimately accountable for demonstrating that a GxP system is compliant, even if a supplier is responsible for parts of that system, so the regulated customer needs to establish enough trust in their supplier. The cloud service provider must be assessed to first determine if they can deliver the services offered, but also to determine the suitability of their quality system and that it is systematically followed. The supplier needs to show that they have a QMS and follow a documented set of procedures and standards governing activities such as:

• Infrastructure Qualification and Operation
• Software Development
• Change Management
• Release Management
• Configuration Management
• Supplier Management
• Training
• System Security

Details of the AWS QMS are covered in the software section of this whitepaper. The capabilities of AWS to satisfy these areas may be reassessed on a periodic basis, typically by reviewing the latest materials available through AWS Artifact (i.e., AWS certifications and audit reports). It is also important to consider and plan how operational processes that span the shared responsibility model will operate; for example, how to manage changes made by AWS to services used as part of your landing zone or applications, incident response management in cases of outages, or portability requirements should there be a need to change cloud service provider.

Regulated Landing Zone

One of the main functions of the landing zone is to provide a solid foundation for development teams to build on and address as many regulatory requirements as possible, thus removing the responsibility from the development teams. The GAMP IT Infrastructure Control and Compliance guidance document follows a platform-based approach to the qualification of IT infrastructure, which aligns perfectly with a customer's need to qualify their landing zone. AWS Control Tower provides the easiest way to set up and govern a new, secure, multi-account AWS environment based on best practices established through AWS' experience working with thousands of enterprises as they move to the cloud. See AWS Control Tower features for further details of what is included in a typical landing zone.

GAMP also describes two scenarios for approaching platform qualification:

1. The first scenario is independent of any specific application and instead considers generic requirements for the platform or landing zone.
2. The second scenario is where the requirements of the platform are derived directly from the applications that will run on the platform.

For many customers first building their landing zone, the exact nature of the applications that will run on it is unclear. Therefore, this paper follows scenario 1 and approaches the qualification independent of any specific application.
The objective of the landing zone is to provide application teams with a solid foundation upon which to build, while addressing as many regulatory requirements as possible, so the regulatory burden on the application team is reduced.

Tooling and Automation

Many customers include common tooling and automation as part of the landing zone so it can be qualified and validated once and used by all development teams. This common tooling is often within the shared services account of the landing zone. For example, standard tooling around requirements management, test management, CI/CD, etc. needs to be qualified and validated. Similarly, any automation of IT processes also needs to be validated. For example, it's possible to automate the Installation Qualification (IQ) step of your Computer Systems Validation process.

Leveraging Managed Services

Instead of building and operating a landing zone yourself, you have the option of delegating this responsibility. This delegation could be to AWS, by making use of AWS Managed Services, or to a partner within the AWS Partner Network (APN). This means the service provider is responsible for building a landing zone based on AWS best practices, operating it in accordance with industry best practices, and providing sufficient evidence to you in meeting your expectations.

Building Blocks

When it comes to the virtualized infrastructure and service instances supporting an application, there are two approaches to take:

1. Commission service instances for a specific application. Each application team therefore takes care of their own qualification activities, possibly causing duplication of qualification effort across application/product teams.
2. Define 'building blocks' to be used across all applications. Create standard, reusable building blocks that can be qualified once and used many times.

To reduce the overall effort and increase developer productivity, this paper assumes the use of option 2. A 'building block' could be a single AWS service, such as Amazon EC2 or Amazon RDS; a combination of AWS services, such as Amazon VPC and NAT Gateway; or a complete stack, such as a three-tier web app or MLOps stack. The qualification of 'building blocks' follows a process based on the GAMP IT Infrastructure Control and Compliance guidance document's '9.2 Infrastructure Building Block Concept'. To accelerate application development, you could create a library of these standardized and pre-qualified building blocks, which are made available to development teams to easily consume.

Computer System Validation

With a solid and regulatory-compliant foundation from the supplier assessment and landing zone, you can look at improving your existing Computer Systems Validation (CSV) standard operating procedure (SOP). Most customers already have existing SOPs around Computer Systems Validation. Many customers also state that their processes are old, slow, and very manual in nature, and view moving to the cloud as an opportunity to improve these processes and automate as much as possible. The 'building block' approach described earlier is already a great accelerator for development teams, enabling them to stitch together pre-qualified building blocks to form the basis of their application. However, the application team is still responsible for the validation of their application, including Installation Qualification (IQ). Again, this is another area where customer approach varies. Some customers follow existing processes and still generate documentation, which is stored in their Enterprise Document Management System.
Other customers have fully adopted automation and achieved 'near-zero documentation' by validating their toolchain and relying on the data stored in those tools as evidence.

Validation During Cloud Migration

One important point that may be covered in a Qualification Strategy is the overarching approach to Computer System Validation (CSV) during migration. If you are embarking on a migration effort, part of the analysis of the application portfolio will be to identify archetypes, or groups of applications with similar architectures. A single runbook can be developed and then repeated for each of the applications in the group, speeding up migration. At this point, if the applications are GxP relevant, the CSV/migration strategy can also be defined for the archetype and repeated for each application.

Supplier Assessment and Cloud Management

As mentioned earlier, gaining trust in a cloud service provider is critical, as you will be inheriting certain cloud infrastructure and security controls from the cloud service provider. The approach described by industry guidance involves several steps, which we cover here.

Basic Supplier Assessment

The first (optional) step is to perform a basic supplier assessment to check the supplier's market reputation, knowledge, and experience working in regulated industries, prior experience working with other regulated companies, and what certifications they hold. You can leverage industry assessments, such as Gartner's assessment in the AWS News Blog post AWS Named as a Cloud Leader for the 10th Consecutive Year in Gartner's Infrastructure & Platform Services Magic Quadrant, and customer testimonials.

Documentation Review

A supplier assessment often includes a deep dive into the assets available from the supplier describing their QMS and operations. This includes reviewing certifications, audit reports, and whitepapers. For more information, see the AWS Risk and Compliance whitepaper.

AWS and its customers share control over the IT environment, and both parties have responsibility for managing the IT environment. The AWS part in this shared responsibility includes providing services on a highly secure and controlled platform and providing a wide array of security features customers can use. The customer's responsibility includes configuring their IT environments in a secure and controlled manner for their purposes. While customers don't communicate their use and configurations to AWS, AWS does communicate its security and control environment relevant to customers. AWS does this by doing the following:

• Obtaining industry certifications and independent third-party attestations
• Publishing information about the AWS security and control practices in whitepapers and website content
• Providing certificates, reports, and other documentation directly to AWS customers under NDA (as required)

For a more detailed description of AWS security, see AWS Cloud Security. AWS Artifact provides on-demand access to AWS security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA).
For a more detailed description of AWS Compliance, see AWS Compliance. If you have additional questions about the AWS certifications or the compliance documentation AWS makes available, please bring those questions to your account team.

Review Service Level Agreements (SLAs)

AWS offers service level agreements for certain AWS services. Further information can be found under Service Level Agreements (SLAs).

Audit

Mail Audit – To supplement the AWS documentation you have gathered, a mail audit questionnaire (sometimes referred to as a supplier questionnaire) may be submitted to AWS to gather additional information or to ask clarifying questions. You should work with your account team to request a mail audit.

Onsite Audit – AWS regularly undergoes independent third-party attestation audits to provide assurance that control activities are operating as intended. Currently, AWS participates in over 50 different audit programs. The results of these audits are documented by the assessing body and made available for all AWS customers through AWS Artifact. These third-party attestations and certifications of AWS provide you with visibility and independent validation of the control environment, eliminating the need for customers to perform individual onsite audits. Such attestations and certifications may also help relieve you of the requirement to perform certain validation work yourself for your IT environment in the AWS Cloud. For details, see the AWS Quality Management System section of this whitepaper.

Contractual Agreement

Once you have completed a supplier assessment of AWS, the next step is to set up a contractual agreement for using AWS services. The AWS Customer Agreement is available at https://aws.amazon.com/agreement/. You are responsible for interpreting regulations and determining whether the appropriate requirements are included in a contract with standard terms. If you have any questions about entering into a service agreement with AWS, please contact your account team.

Cloud Management Processes

There are certain processes that span the shared responsibility model and typically must be captured in your QMS in the form of SOPs and work instructions.

Change Management

Change management is a bidirectional process when dealing with a cloud service provider. On the one hand, AWS is continually making changes to improve its services, as mentioned earlier in this paper. On the other hand, you can make feature requests, which is highly encouraged, as 90% of AWS service features are a result of direct customer feedback. Customers typically use a risk-based approach, appropriate for the type of change, to determine the subsequent actions. Changes to AWS services which add functionality are not usually a concern, because no application will be using that new functionality yet. However, new functionality may trigger an internal assessment to determine if it affects the risk profile of the service and should be allowed for use. If mandated by your QMS, this may trigger a re-qualification of building blocks prior to allowing the new functionality. Deprecations are considered more critical because they could break an application. A deprecation may include a third-party library, utility, or version of languages such as Python. The deprecation of a service or feature is rare. Once you receive the notification of a deprecation, you should trigger an impact assessment. If an impact is found, the application teams should plan changes to remediate the impact.
The notice period for a deprecation should allow time for assessment and remediation. AWS will also help you understand the impact of the change. There are other changes, such as enhancements and bug fixes, which do not change the functionality of the service and do not trigger notifications to customers. These types of changes are synonymous with "standard" changes in ITIL, which are usually pre-authorized, low risk, relatively common, and follow a specific procedure. If you want to generate evidence showing no regression is introduced due to this class of change, you could create a test bed which repeatedly tests the AWS services to detect regression. Should a problem be uncovered, it should immediately be reported to AWS for resolution.

Incident Management

The Amazon Security Operations team employs industry-standard diagnostic procedures to drive resolution during business-impacting events. Staff operators provide 24x7x365 coverage to detect incidents and to manage the impact and resolution. As part of the process, potential breaches of customer content are investigated and escalated to AWS Security and AWS Legal. Affected customers and regulators are notified of breaches and incidents where legally required. You can subscribe to the AWS Security Bulletins page (https://aws.amazon.com/security/security-bulletins/), which provides information regarding identified security issues, and to the Security Bulletin RSS feed to keep abreast of security announcements. You are responsible for reporting incidents involving your storage, virtual machines, and applications, unless the incident is caused by AWS. For more information, refer to the AWS Vulnerability Reporting webpage: https://aws.amazon.com/security/vulnerability-reporting/.

Customer Support

AWS develops and maintains customer support procedures that include metrics to verify performance. When you contact AWS to report that AWS services do not meet their quality objectives, your issue is investigated and, where required, commercially reasonable actions are taken to resolve it. Where AWS is the first to become aware of a customer-impacting issue, procedures exist for notifying impacted customers according to their contract requirements and/or via the AWS Service Health Dashboard (http://status.aws.amazon.com/). You should ensure that your policies and procedures align to the customer support options provided by AWS. Additional details may be found in the Customer Complaints and Customer Training sections in this document.
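Much of this bidirectional change and incident traffic can be consumed programmatically. The following minimal sketch, which is not part of the original guidance and assumes an existing SNS topic (the ARN shown is a placeholder), uses Amazon EventBridge to route AWS Health events, the mechanism behind many operational-issue and scheduled-change notifications, into a topic that your change and incident management tooling can subscribe to:

import json
import boto3

events = boto3.client("events")

# Placeholder: an existing SNS topic subscribed to by your change/incident tooling.
# Its access policy must allow events.amazonaws.com to publish to it.
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:gxp-change-notifications"

# Match all AWS Health events (operational issues, scheduled changes, notifications).
rule_arn = events.put_rule(
    Name="gxp-aws-health-to-sns",
    EventPattern=json.dumps({"source": ["aws.health"]}),
    State="ENABLED",
    Description="Route AWS Health events into the GxP change/incident process",
)["RuleArn"]

# Deliver matching events to the SNS topic for triage and impact assessment.
events.put_targets(
    Rule="gxp-aws-health-to-sns",
    Targets=[{"Id": "sns-target", "Arn": TOPIC_ARN}],
)
print(f"Created rule {rule_arn}")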
Cloud Platform/Landing Zone Qualification

A landing zone, such as the one created by AWS Control Tower, is a well-architected, multi-account AWS environment that's based on security and compliance best practices. The landing zone includes capabilities for centralized logging, security, account vending, and core network connectivity. We recommend that you then build features into the landing zone to satisfy as many regulatory requirements as possible, and to effectively remove the burden from the development teams which build on it. The objective of the landing zone, and the team owning it, should be to provide the guardrails and features that free the developers to use the 'right tools for the job' and focus on delivering differentiated business value rather than on compliance. For example, account vending could be extended to include account bootstrapping to automatically direct logs to the central logging account, drop default VPCs and instantiate an approved VPC (if needed at all), deploy baseline stack sets, and establish standard roles to support things like automated installation qualification (IQ). The Shared Services account would house centralized capabilities and automations, such as the mentioned automation of IQ. The centralized logging account could satisfy regulatory requirements around audit trails, including, for example, record retention through the use of lifecycle policies (a sketch follows below). The addition of a backup and archive account could provide standard backup and restore, along with archiving services, for application teams to use. Similarly, a standardized approach to disaster recovery (DR) can be provided by the landing zone using tools like CloudEndure Disaster Recovery. If you follow AWS guidance and implement a Cloud Center of Excellence (CCoE) and consider the landing zone as a product, the CCoE team takes on the responsibility of building these capabilities into the landing zone to satisfy regulatory requirements.
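To make the record-retention example concrete, here is a minimal boto3 sketch that applies a lifecycle policy to a central logging bucket. The bucket name and retention periods are illustrative assumptions; actual values must come from your record-retention SOP and regulatory assessment, not from this sketch.

import boto3

s3 = boto3.client("s3")

# Placeholder bucket in the centralized logging account.
LOG_BUCKET = "example-gxp-central-logs"

# Illustrative schedule: move audit logs to Glacier after 90 days and
# retain them for 10 years before expiry.
s3.put_bucket_lifecycle_configuration(
    Bucket=LOG_BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "audit-trail-retention",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 3650},
            }
        ]
    },
)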
The number of capabilities built into the landing zone is often influenced by the organizational structure around it. If you have a traditional structure with a divide between development teams and infrastructure, tasks like server and network management are centralized, and these capabilities are built into the platform. If you adopt a product-centric operating model, the development teams become more autonomous and responsible for more of the stack, perhaps even the entire stack, from the VPC and everything built on it. Also consider that with serverless architectures you may not need a VPC, because there are no servers to manage.

This underlying cloud platform, when supporting GxP applications, should be qualified to demonstrate proper configuration and to ensure that a state of control and compliance is maintained. The qualification of the cloud can follow a traditional infrastructure qualification project, which includes the planning, specification and design, risk assessment, qualification test planning, installation qualification (IQ), operational qualification (OQ), and handover (as described in Section 5 of GAMP IT, Qualification of Platforms). The components (configuration items) that make up the landing zone should all be deployed through automated means, i.e., an automated pipeline. This approach supports better change management going forward. After the completion of the infrastructure project and the creation of the operations and maintenance SOPs, you have a qualified cloud platform upon which GxP workloads can run. The SOPs cover topics such as account provisioning, access management, change management, and so on.

Maintaining the Landing Zone's Qualified State

Once the landing zone is live, it must be maintained in a qualified state. Unless the operations are delegated to a partner, you typically create a Cloud Platform Operations and Maintenance SOP based on Section 6 of GAMP IT Infrastructure Control and Compliance. According to GAMP, there are several areas where control must be shown, such as change management, configuration management, security management, and others. GAMP guidance also suggests that 'automatic tools' should be used whenever possible. The following sections cover these control areas and how AWS services can help with automation.

Change Management

Change management processes control how changes to configuration items are made. These processes should include an assessment of the potential impact on the GxP applications supported by the landing zone. As mentioned earlier, all of the landing zone components are deployed using an automated pipeline. Therefore, once a change has been approved and committed in the source code repository (using a tool like AWS CodeCommit), the pipeline is triggered and the change deployed. There will likely be multiple pipelines for the various parts that make up the landing zone. The landing zone is made up of infrastructure and automation components; through the use of infrastructure as code, there is no real difference between how these different components are deployed. We recommend a continuous deployment methodology, because it ensures changes are automatically built, tested, and deployed, with the goal of eliminating as many manual steps as possible. Continuous deployment seeks to eliminate the manual nature of this process and automate each step, allowing development teams to standardize the process and increase the efficiency with which they deploy code. In continuous deployment, an entire release process is a pipeline containing stages. AWS CodePipeline can be used along with AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. For customers needing additional approval steps, AWS CodePipeline also supports the inclusion of manual steps.

All changes to AWS services, either manual or automated, are logged by AWS CloudTrail. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. In addition, you can use CloudTrail to detect unusual activity in your AWS accounts. These capabilities help simplify operational analysis and troubleshooting. Of course, customers also want to be alerted about any unauthorized and unintended changes. You can use a combination of AWS CloudTrail and Amazon CloudWatch to detect unauthorized changes made to the production environment, and even automate immediate remediation. Amazon CloudWatch is a monitoring service for AWS Cloud resources and can be used to trigger responses to AWS CloudTrail events (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html).
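As one hedged illustration of this CloudTrail-plus-monitoring pattern, the sketch below creates an EventBridge rule that matches sensitive IAM changes recorded by CloudTrail and publishes them to an SNS topic for review. The matched API actions and the topic ARN are assumptions chosen for the example; align them with whatever your change control SOP treats as sensitive, and note that a trail recording management events must already be enabled.

import json
import boto3

events = boto3.client("events")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:gxp-unauthorized-change-alerts"  # placeholder

# Match IAM policy changes recorded by CloudTrail; extend the action list
# to cover whatever your change control SOP treats as sensitive.
pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["PutRolePolicy", "AttachRolePolicy", "DeleteRolePolicy"],
    },
}

events.put_rule(
    Name="gxp-iam-change-alert",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
    Description="Alert on sensitive IAM changes made outside the pipeline",
)
events.put_targets(
    Rule="gxp-iam-change-alert",
    Targets=[{"Id": "sns", "Arn": TOPIC_ARN}],
)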
you manage tags across services and AWS Regions Using this approach you can globally manage all the application business data and technology components of your target landscape A Resource Group is a collection of resources that share one or more tags It can be used to create an enterprise architecture view of your IT landscape consolidating AWS resources into a per project (that is the on going programs that realize your target landscape) per entity (that is capabilities roles processes) and per domain (that is Business Application Data Technology) view AWS Config is a service that lets you assess audit and evaluate the configurations of AWS resources AWS Config continuously monitors and records your AWS resource configurations and lets you automate the evaluation of recorded configurations against desired configurations With AWS Config you can review changes in configurations and determine their overall compliance against the configurations specified in your internal guidelines This enable s you to simplify compliance auditing security analysis change management and operational troubleshooting In addition AWS provides conformance packs for AWS Config to provide a general purpose compliance framework designed to enable you to create security operational or cost optimization governance checks using managed or custom AWS Config rules and AWS Conf ig remediation actions including a conformance pack for 21 CFR 11 You can use AWS CloudFormation AWS Config Tagging and Reso urce Groups to see exactly what cloud assets your company is using at any moment These services Amazon Web Services GxP Systems on AWS 46 also make it easier to detect when a rogue server or shadow application appear in your target production landscape Security Management AWS has defined a set o f best practices for customers who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS) These AWS resources provides security best practices that will help you define your Information Security Management System (ISMS) and build a set of security policies and processes for your organization so you can protect your data and assets in the AWS Cloud These AWS resources also provide an overview of different security topics such as identifying categorizing and protecting your assets on AWS managing access to AWS resources using accounts users and groups and suggesting ways you can secure your data operating systems applications and overall infrastructure in the cloud AWS provides you with an extensive set of tools to secure workloads in the cloud If you implement full automation it could negate the need for anyone to have direct access to any environment beyond development However if a situation occurs that requires someone to access a production environment they must explicitly request access have the access reviewed and approved by the appropriate owner and upon approval obtain temporary access with the least privileg e needed and only for the duration required You should then track their activities through logging while they have access You can refer to this AWS resource for fu rther information Problem and Incident Management With AWS you get access to many tools and features to help you meet your problem and incident management objectives These capabilities help you establish a configuration and security baseline that meets your objectives for your applications running in the cloud When a deviation from your baseline does occur (such as by a mis configuration) you may need to 
respond and investigate To successfully do so you must understand the basic concepts of security incident response within your AWS environment as well as the issues needed to consider to prepare educate and train your cloud teams before security issues occur It is important to know which controls and capabilities you can use to review topical examples for resolving potential concerns and to identify remediation methods that can be used to leverage automation and impro ve response speed Amazon Web Services GxP Systems on AWS 47 Because security incident response can be a complex topic we encourage you to start small develop runbooks leverage basic capabilities and create an initial library of incident response mechanisms to iterate from and improve upon Th is initial work should include teams that are not involved with security and should include your legal departments so that they are better able to understand the impact that incident response (IR) and the choices they have made have on your corporate go als For a comprehensive guide see the AWS Security Incident Response Guide Backup Restore Archiving The ability to back up and restore is required for all validat ed applications It is therefore a common capability that can be centralized as part of the regulated landing zone Backup and restore should not be confused with archiving and retrieval but the two areas can be combined into a centralized capability For a cloud based backup and restore capability consider AWS Backup AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services Using AWS Backup you can centrally configure backup policies and monitor backup activity for AWS resources such as Amazon E BS volumes Amazon EC2 instances Amazon RDS databases Amazon DynamoDB tables Amazon EFS file systems Amazon FSx file systems and AWS Storage Gateway volumes AWS Backup automates and consolidates backup tasks previously performed service byservice r emoving the need to create custom scripts and manual processes With just a few clicks in the AWS Backup console you can create backup policies that automate backup schedules and retention management AWS Backup provides a fully managed policy based back up solution simplifying your backup management enabling you to meet your business and regulatory backup compliance requirements Disaster Recovery In traditional on premises situations Disaster Recovery (DR) involve s a separate data center located a cer tain distance from the primary data center This separate data center only exists in case of a complete disaster impacting the primary data center Often the infrastructure at the DR site sits idle or at best host s preproduction instances of applications thus running the risk of it being out ofsync with production With the advent of cloud DR is now much easier and cheaper The AWS global infrastructure is built around AWS Regions and Availability Zones (AZ) AWS Regions provide multiple physically sepa rated and isolated Availability Zones which are connected with low latency high throughput and highly redundant Amazon Web Services GxP Systems on AWS 48 networking With Availability Zones you can design and operate applications and databases that automatically fail over between Availability Zones without interruption Availability Zones are more highly available fault tolerant and scalable than traditional single or multiple data center infrastructures With AWS Availability Zones it is very easy to create a multi AZ architecture 
capable o f withstanding a complete failure of one or more zones For even more resilience multiple AWS Regions can be used With the use of Infrastructure as Code the infrastructure and applications in a DR Region do not need to run all of the time In case of a disaster the entire application stack can be deployed into another Region The only components that must run all the time are those keeping the data repositories in sync With tooling like CloudEndure Disaster Recovery you can now automate disaster recovery Performance Monitoring Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications you run on AWS You can use CloudWatch to collect and track metrics collect and monitor log files set alarms and automatically react to changes in customer AWS resources CloudWatch monitors and logs the behavior of the customer application landscape CloudWatch can also trigger events based on the behavior of your application Qualifying Building Blocks Customers frequently want to know how AWS gives developers freedom to use any AWS service while still maintaining regulatory compliance and fast development To address this problem you can leverage technology but this also involves changes in process design to move away from blocking steps and towards guardrails The changes required to your processes and IT operating model is beyond the scope of this whitepa per However we cover the core steps of a supporting process to qualify building blocks which is one tactic for maintaining regulatory compliance more efficiently The infrastructure building block concept as defined by GAMP is an approach to qualify individual components or combinations of components which can then be put together to build out the IT infrastructure The approach is applicable to AWS services The benefit of this approach is that you can qualify one instance of a building block once and a ssume all the other instances will perform the same way reducing the overall effort across applications The approach also enables customers to change a building block Amazon Web Services GxP Sys tems on AWS 49 without needing to re qualify all of the others or revalidate the applications dependen t upon the infrastructure Service Approval Service approval is a technique used by many customers as part of architecture governance that is it’s used across regulated and non regulated workloads Customers often consider multiple regulations when appro ving a service for use by development teams For example you may allow all services to be used in sandbox accounts but may restrict the services in an account to only HIPAA eligible services if the application is subject to HIPAA regulations Service app roval is implemented through the use of AWS Organizations and Service Control Policies You could take this approach to allow services to be used as part of GxP relevant applications For example a combination of ISO PCI SOC and HIPAA eligibility may provide sufficient confidence Sometimes customers want to implement automated controls over the approved service as described in Approving AWS services for GxP workloads You may prefer to follow a more rigorous qualification process like the following building block qualification Building Block Qualification The qualification of AWS service building blocks follow s a process based on the GAMP IT Infrastructure Control and Compliance guidance documents ‘Infrastructure Building Block Concept’ (Section 9 / Appendix 2 of GAMP IT) According to EU GMP the definition of qualification is: “Action of 
proving that any equipment works correctly and actually leads to the expected results” The equipment also needs to continue to lead to the expected results over it s lifetime In other words your process should show that the building block works as intended and is kept under control throughout its operational life There will be written procedures in place and when executed records will show that the activities ac tually occurred Also the staff operating the services need to be appropriately trained This process is often described in an SOP describing the overall qualification and commissioning strategy the scope roles and responsibilities a deliverables list and any good engineering practices that will be followed to satisfy qualification and commissioning requirements Amazon Web Services GxP Systems on AWS 50 With the number of AWS services it can be difficult for you to qualify all AWS services at once An iterative and risk based approach is recommended where services are qualified in priority order Initial prioritization will take into account the needs of the first applications moving to cloud and then the prioritization can be reass essed as demand for cloud services increases Design Stage Requirements The first activity is to consider the requirements for the building block One approach is to look at the service API definition Each AWS service has a clearly documented API describi ng the entire functionality of that service Many service APIs are extensive and support some advanced functionality However not all of this advanced functionality may be required initially so any existing business use cases can be considered to help refine the scope For example when noting Amazon S3 requirements you include the core functionality of creating/deleting buckets and the ability to put/get/delete objects However you may not include the lifecycle policy functionality because this function ality is not yet needed These requirements are captured in the building block requirements specification / requirements repository It’s also important to consider non functional requirements To ensure suitability of a service you can look at the service s SLA and limits Gap Analysis Where application requirements already exist in the same way you can restrict the scope you can also identify any gaps Either the gap can be addressed by including more functionality for the building block like bringing t he Amazon S3 Bucket Lifecycle functionality into scope or the service is not suitable for satisfying the requirements and an alternate building block should be used If no other service seems to meet the requirements you can custom develop a service or make a feature request to AWS for service enhancement Risk Assessment Infrastructure is qualified to ensure reliability security and business continuity for the validated applications running on it These three dimensions are usually included as part of any risk assessment The published AWS SLA provides confidence in AWS services reliability Data regarding the current status of the service plus historical Amazon Web Services GxP Systems on AWS 51 adherence to SLAs is available from https://statusa wsamazoncom For confidence in security the AWS certifications can be checked for the relevant service For business continuity AWS builds to guard against outages and incidents and accounts for them in the design of AWS services so when disruptions do occur their impact on customers and the continuity of services is as minimal as possible This step is also not only for GxP 
qualification purposes The risk assessment should include any additional check s for other regulations such as HIPAA When assessing the risks for a cloud service it’s important to consider the relationship to other building blocks For example an Amazon RDS database may have a relationship to the Amazon VPC building block because you decided a database is only allowed to exist within the private subnet of a VPC Therefore the VPC is taking care of many of the risks around access control These dependencies will be captured in the risk assessment and then focus on additional risks s pecific to the service or residual risks which cannot be catered for by the surrounding production environment Each cloud service building block goes through a risk assessment that identifies a list of risks For each identified risk a mitigation plan is created The mitigation plan can influence one or more of the following components : • Service Control Policy • Technical Design/Infrastructure as Code Template • Monitoring & Alerting of Automated Compliance Controls A risk can be mitigated through the use of Service Control Policies (SCPs) where a service or specific operation is deemed too risky and its use explicitly denied through such a policy For example you can use an SCP to restrict the deletion of an Amazon S3 object through the AWS Management Consol e Another option is to control service usage through the technical design of an approved Infrastructure as Code (IaC) template where certain configuration parameters are restricted or parameterized For example you may use an AWS CloudFormation template to always configure an Amazon S3 bucket as private Finally you can define rules that feed into monitoring and alerting For example if the policy states Amazon S3 buckets cannot be public but this configuration is not enforce d in the infrastructure tem plate then the infrastructure can be monitored for any public Amazon S3 buckets When an S3 bucket is configured as public an alert trigger s remediation such as immediately changing a bucket to private Technical Design In response to the specified requ irements and risks an architecture design specification will be created by a Cloud Infrastructure Architect describing the logical service building Amazon Web Services GxP Systems on AWS 52 block design and traceability from risk or requirement to the design This design specification will among other things describe the capabilities of the building block to the end users and application development teams Design Review To verify that the proposed design is suitable for the intended purpose within the surrounding IT infrastructure design a design review can be performed by a suitably trained person as a final check Construction Stage The logical design may be captured in a document but the physical design is captured in an Infrastructure as Code (IaC) template like a n AWS CloudFormation template This IaC template is always used to deploy an instance of the building block ensuring consistency For one approach see the Automating GxP compliance in the cloud: Best practices and architecture guidelines blog post The IaC template will u se parameters to deal with workload variances As part of the design effort it will be determined often by IT Quality and Security which parameters affect the risk profile of the service and so should be controlled and which parameters can be set by the user For example the name of a database can be set by the template user and generally does not affect the risk profile of a database service 
However any parameter controlling encryption does affect the risk profile and therefore is fixed in the templa te and not changeable by the template user The template is a text file that can be edited However the rules expressed in the template are also automated within the surrounding monitoring and alerting For example the rule stating that the encryption se tting on a database must be set can be checked by automated rules Therefore a developer may override the encryption setting in the development environment but that change isn’t allowed to progress to a validated environment or beyond At this point automated test scripts can be prepared for executing during the qualification step to generate test evidence The author of the automated tests must be suitably trained and a separate and suitably trained person perform s a code review and/or random testing of the automated tests to ensure the quality level The automated tests ensure the building block initially functions as expected These tests can be run again to ensure the building block continues to function as expected especially after any change Howev er to ensure nothing has changed once in production you should identify and create automated controls Using the Amazon S3 example again all buckets should be private If a public bucket is detected it can be Amazon Web Services GxP Systems on AWS 53 switched back to private and an alert raised and notification sent You can also determine the individual that created the S3 bucket and revoke their permissions The final part of construction is the authoring and approval of any needed additio nal guidance and operations manuals For example how to recover a database would be included in the operations manual of an Amazon RDS building block Qualification and Commissioning Stage It’s important to note that infrastructure is deployed in the same way for every building block ie through AWS CloudFormation using an Infrastructure as Code template Therefore there is usually no need for building block specific installation instructions Also you are confident that every deployment is done according to specification and has the correct configuration Automated Testing If you want to generate test evidence you can demonstrat e that the functional requirements are fulfilled and that all identifi ed risks have been mitigated thus indicating the building block is fit for its intended use through the execution of the automated tests created during construction The output of these automated tests are captured into a secure repository and can be use d as test evidence This automation deploy s the building block template into a test environment execute s the automated tests capture s the evidence and then destroy s the stack again avoiding any ongoing costs Testing may only make sense in combination with other building blocks For example the testing of a NAT gateway can only be done within an existing VPC One alternative is to test within the context of standard archetypes ie a complete stack for a typical application architecture Handover to Operations Stage The handover stage ensures that the cloud operation team is familiar with the new building block and is trained in any service specific operations Once the operations team approves the new building block the service can be app roved by changing a Service Control Policy (SCP) The Infrastructure as Code template can be made available for use by adding it into the AWS Service Catalog or other secure template repository If the response to a risk was a SCP or 
Monitoring Rule change then the process to deploy those changes are triggered at this stage Amazon Web Services GxP Systems on AWS 54 Computer Systems Validation (CSV) You must still perform computer systems validation activities even if an application is running in the cloud In fact the overarching qualification strategy we have laid out in this paper has ensured that this CSV process can fundamentally be the same as before and hasn’t become more difficult for the application development teams through the introd uction of cloud technologies However with the solid foundation provided by AWS and the regulated landing zone we can shift the focus to improving a traditional CSV process You typically have a Standard Operating Procedure ( SOP ) describing your Software Development Lifecycle (SDLC ) which is often based on GAMP 5: A Risk Based Approach to Compliant GxP Computerized Systems Many SOPs we have seen involve a lot of manual work and approvals which slow down the process The more automation that can be introduced the quicker the process and the lower the chances of human error The automation of IT processes is nothing new and customers have been implementing automated toolchains for years for on premises development The move to cloud provides all those same capabilities but also introduces some additional opportunities especially in the virtualized infrastructure areas In this section we will focus primarily on those additional capabilities now available through the cloud Automating Installatio n Qualification (IQ) It’s important to note that even though we are qualifying the underlying building blocks the application teams still need to validate their application including performing the installation qualification (IQ) as part o f their normal CSV activities in orde r to demonstrate their application specific combination of infrastructure building blocks was deployed and is functioning as expected However they can focus on testing the interaction between building blocks rather than the functionality of each building block itself As mentioned the automation of the development toolchain is nothing new to any high performing engineering team The use of CI/CD and automated testing tools has been around for a long time What hasn’t been possible before is the fully aut omated deployment of infrastructure and execution of the Installation Qualification (IQ) step The use of Infrastructure as Code opens up the possibility to automate the IQ step as described in this blog post The controlled infrastructure template acts as the pre Amazon Web Services GxP Systems on AWS 55 approved specification which can be compared against the stacks deployed by AWS CloudFormation Summary reports and test evidence can be created or if a deviation is found the stack can be rolled back to the last known good state Assuming the IQ step completes successfully the automation can continue to the automation of Operational Qualification (OQ) and Performance Qualification (PQ) Maintainin g an Application ’s Qualified State Of course once an application has been deployed it needs to be maintained under a state of control However a lot of the heavy lifting for things like change management configuration management security management b ackup and restore have been built into the regulated landing zone for the benefit of all application teams Conclusion If you are a Life Science customer with GxP obligations you retain accountability and responsibility for your use of AWS products inclu ding the applications and 
virtualized infrastructure you develop validate and operate using AWS Products Using the recommendations in this whitepaper you can evaluate your use of AWS products within the context of your quality system and consider strat egies for implementing the controls required for GxP compliance as a component of your regulated products and systems Contributors Contributors to this document include : • Sylva Krizan PhD Security Assurance AWS Global Healthcare and Life Sciences • Rye Ro binson Solutions Architect AWS Global Healthcare and Life Sciences • Ian Sutcliffe Senior Solutions Architect AWS Global Healthcare and Life Sciences Further Reading For additional information see: • AWS Compliance • Healthcare & Life Sciences on AWS Amazon Web Services GxP Systems on AWS 56 Document Revisions Date Description March 2021 Updated to include more elements of AWS Quality System Information and updated guidance on customer approach to GxP compliance on AWS January 2016 First publication Amazon Web Services GxP Systems on AWS 57 Appendix: 21 CFR 11 Controls – Shared Responsibility for use with AWS services Applicability of 21 CFR 11 to regulated medical products and GxP systems are the responsibility of the customer as determined by the intended use of the system(s) or product(s) AWS has mapped some of these requirements based on the AWS Shared Responsibility Model ; however customers are responsible for meeting their own regulatory obligations Below we have identified each subpart of 21 CFR 11 and clarified areas where AWS services and operations and the customer share responsibility in order to meet 21 CFR 11 requirements 21 CFR Subpart AWS Responsibility Customer Responsibility 1110 Controls for closed systems Persons who use closed systems to create modify maintain or transmit electronic records shall employ procedures and controls designed to ensure the authenticity integrity and when appropriate the confidentiality of electronic records and to ensure that the signer cannot readily repudiate the signe d record as not genuine Such procedures and controls shall include the following: Amazon Web Services GxP Systems on AWS 58 1110(a) Validation of systems to ensure accuracy reliability consistent intended performance and the ability to discern invalid or altered records AWS services are b uilt and tested to conform to IT industry standards including SOC ISO PCI and others https://awsamazoncom/compliance/programs/ AWS compliance programs and reports provide objective evidenc e that AWS has implemented several key controls including but not limited to: Control over the installation and operation of AWS product components including both software components and hardware components; Control over product changes and configuratio n management; Risk management program; Management review planning and operational monitoring; Security management of information availability integrity and confidentiality; and Data protection controls including mechanisms for data backup restore and archiving All purchased materials and services intended for use in production processes are documented and documentation is reviewed and approved prior to use and verified to be in conformance with the specifications Final inspection and testing is perf ormed on AWS services prior to their release to general availability The final service release review procedure includes a verification that all acceptance data is present and that all product requirements were met Once in production AWS services underg o continuous performance 
monitoring In addition AWS’s significant customer base authorization for use by government agencies AWS products are basic building blocks that allow you to create private virtualized infrastructure environments for your custom software applications and commercial offthe shelf applications In this way you remain responsible for enabling (ie installing) configuring and operating AWS products to meet your data application and industry specific needs like GxP software validation and GxP infrastructure qualification as well as validation to support 21 CFR Part 11 requirements AWS products are however unlike traditional infrastructure software products in that they are highly automatable allowing you to programmatically create qualified infrastructure via version controlled JSON[1] scripts instead of manually executed paper p rotocols where applicable This automation capability not only reduces effort it increases control and consistency of the infrastructure environment such that continuous qualification [2] is possible Installation qualification of AWS services into your environment operational and performance qualification (IQ/OQ/PQ) are your responsibility as are the validation activities to demonstrate that systems with GxP workloads managing electronic records are appropriate for the intended use and meet regulatory requirements Amazon Web Services GxP Systems on AWS 59 21 CFR Subpart AWS Responsibility Customer Responsibility and recognition by industry analysts as a leading cloud services provider are further evidence of AWS products delivering their documented functionality https://awsamazoncom/documentation/ Relevant SOC2 Common Criteria: CC12 CC14 CC32 CC71 CC72 CC73 CC74 1110(b) The ability to generate accurate and complete copies of records in both human readable and electronic form suitable for inspection review and copying by the agency Persons should contact the agency if there are any questions reg arding the ability of the agency to perform such review and copying of the electronic records Controls are implemented subject to industry best practices in order to ensure services provide complete and accurate outputs with expected performance committed to in SLAs; Relevant SOC2 Common Criteria: A11 AWS has a series of Security Best Practices (https://awsamazoncom/security/security resources/ ) and additional resources you may referen ce to help protect data hosted within AWS You ultimately will verify that electronic records are accurate and complete within your AWS environment and determine the format by which data is human and/or machine readable and is suitable for inspection by regulators per the regulatory requirements Amazon Web Services GxP Systems on AWS 60 (c) Protection of records to enable their accurate and ready retrieval throughout the records retention period Controls are implemented subject to industry best practices in order to ensure services provide com plete and accurate outputs with expected performance committed to in SLAs; Relevant SOC2 Common Criteria: A11 AWS has identified critical system components required to maintain the availability of our system and recover service in the event of outage Critical system components are backed up across multiple isolated locations known as Availability Zones and back ups are maintained Each Availability Zone is engineered to operate independently with high reliability Backups of critical AWS system components are monitored for successful replication across multiple Availability Zones Refer to the AWS SOC 2 Report C C 
A12 The AWS Resiliency Program encompasses the processes and procedures by which AWS identifies responds to and recovers from a major event or incident within our environment This program builds upon the traditional approach of addressing contingenc y management which incorporates elements of business continuity and disaster recovery plans and expands this to consider critical elements of proactive risk mitigation strategies such as engineering physically separate Availability Zones (AZs) and continu ous infrastructure capacity planning AWS service resiliency plans are periodically reviewed by members of the Senior Executive management team and the Audit Committee of the Board of Directors The AWS Business Continuity Plan outlines measures to avoid a nd lessen environmental disruptions It includes operational details AWS has a series of Security Best Practices (https://awsamazoncom/security/security resources/ ) and additional resources you may reference to help protect your data hosted within AWS You are responsible for implementation of appropriate security configurations for your environment to protect data integrity as well as ensure data and resources are only retrieved by appropriate permission You are also responsible for creating and testing record retention policies as well as backup and recovery processes You are responsible for properly configuring and using the Service Offerings and taking your own steps to maintain appropriate security protection and backup of your Customer Content which may include the use of encryption technology (to protect your content from unauthorized access) and routine archiving Using Service Offerings such as Amazon S3 Amazon Glacier and Amazon RDS in combination with replication and high availability configurations AWS's broad range of storage solutions for backup and reco very are designed for many customer workloads https://awsamazoncom/backup recovery/ AWS services provide you with capabilities to design for resiliency and maintain business continuity including the utilization of frequent server instance back ups data redundancy replication and the flexibility to place instances and store data within multiple geographic regions as well as across multiple Availability Zones within each region You need to architect your AWS usage to take advantage of multiple regions and availability zones Distributing applications across multiple availability zones provides the ability to remain Amazon Web Services GxP Systems on AWS 61 21 CFR Subpart AWS Responsibility Customer Responsibility about steps to take before during and after an event The Business Continuity Plan is supported by testing that includes simulations of different scenarios During and after testing AWS documents people and process performance corrective actions and lessons learned with the aim of continuous improvement AWS data centers are designed to anticipate and tolerate failure while maintaining service levels In case of failure automated pro cesses move traffic away from the affected area Core applications are deployed to an N+1 standard so that in the event of a data center failure there is sufficient capacity to enable traffic to be load balanced to the remaining sites Refer to the AWS S OC 2 Report CC31 CC32 A12 A13 resilient in the face of most failure modes including natural disasters or system failures The AWS cloud supports many popular disaster recovery (DR) architectures from “pilot light” environments that are ready to scale up at a moment’s notice to “hot standby” environments 
that enable rapid failover You are responsible for DR planning and testing Amazon Web Services GxP Systems on AWS 62 (d) Limiting system access to authorized individuals AWS implements both physical and logical security controls Physical access to all AWS data centers housing IT infrastructure components is restricted to authorized data cent er employees vendors and contractors who require access in order to execute their jobs Employees requiring data center access must first apply for access and provide a valid business justification These requests are granted based on the principle of least privilege where requests must specify to which layer of the data center the individual needs access and are time bound Requests are reviewed and approved by authorized personnel and access is revoked after the requested time expires Once granted admittance individuals are restricted to areas specified in their permissions Access to data centers is regularly reviewed Access is automatically revoked when an employee’s record is terminated in Amazon’s HR system In addition when an employee or contractor’s access expires in accordance with the approved request duration his or her access is revoked even if he or she continues to be an employee of Amazon AWS restricts logical user access priv ileges to the internal Amazon network based on business need and job responsibilities AWS employs the concept of least privilege allowing only the necessary access for users to accomplish their job function New user accounts are created to have minimal access User access to AWS systems requires approval from the authorized personnel and validation of the active user Access privileges to AWS systems are reviewed on a regular AWS provides you with the ability to configure and use the AWS service offerings in order to maintain appropriate security prot ection and backup of content which may include the use of encryption technology to protect your content from unauthorized access You maintain full control and responsibility for establishing and verifying configuration of access to your data and AWS acc ounts as well as periodic review of access to data and resources Using AWS Identity and Access Management (IAM) a web service that allows you to securely control access to AWS resources you must control who can access and use your data and AWS resource s (authentication) and what data and resources they can use and in what ways (authorization) IAM is a feature of all AWS accounts offered at no additional charge You will be charged only for use of other AWS services by your users https://awsamazoncom/iam/ IAM Best Practices can be found here: http://docsawsamazoncom/IAM/latest/UserG uide/best pract iceshtml Maintaining physical access to your facilities and assets is solely your responsibility Amazon Web Services GxP Systems on AWS 63 21 CFR Subpart AWS Responsibility Customer Responsibility basis When an employee no longer requires these privileges his or her access is revoked Refer to the AWS SOC 2 Report C12 C13 and CC61 66 to verify the AWS physical and logical security controls Amazon Web Services GxP Systems on AWS 64 (e) Use of secure computer generated timestamped audit trails to independently record the date and time of operator entries and actions that create mod ify or delete electronic records Record changes shall not obscure previously recorded information Such audit trail documentation shall be retained for a period at least as long as that required for the subject electronic records and shall be available f or 
agency review and copying AWS maintains centralized repositories that provide core log archival functionality available for internal use by AWS service teams Leveraging S3 for high scalability durability and availability it allows service teams to collect archive and view service logs in a central log service Production hosts at AWS are equipped with logging for security purposes This service logs all human actions on hosts including logons failed logon attempts and logoffs These logs are stored and accessible by AWS security teams for root cause analysis in the event of a suspected security incident Logs for a given host are also available to the team that owns that host A frontend log analysis tool is available to service teams to search their logs for operational and security analysis Processes are implemented to protect logs and audit tools from unauthorized access modification and deletion Refer to the AWS SOC 2 Report CC51 CC71 Verification and implementation of audit trails as well as back up and retention procedures of your electronic records are your responsibility AWS provides you with the ability to properly configure and use the Service Offerings in order to maintain appropriate audit trail and logging of data access use and modification (including prohibiting disablement of audit trail functionality) Logs within your control (described below) can be used for monitoring and detection of unauthorized changes to your data Using Service Offerings such as AWS CloudTrail AWS CloudWatch Logs and VPC Flow Logs you can monitor your AWS data operations in the cloud by getting a history of AWS API calls for your account including API calls made via the AWS Management Console the AWS SDKs the command line t ools and higher level AWS services You can also identify which users and accounts called AWS APIs for services that support AWS CloudTrail the source IP address the calls were made from and when the calls occurred You can integrate AWS CloudTrail into applications using the API automate trail creation for your organization check the status of your trails and control how administrators turn logging services on and off AWS CloudTrail records two types of events: (1) Management Events: Represent stan dard API activity for AWS services For example AWS CloudTrail delivers management events for API calls such as launching EC2 instances or creating S3 buckets (2) Data Events: Represent S3 object level API activity such as Get Put Delete and List Amazon Web Services GxP Systems on AWS 65 21 CFR Subpart AWS Responsibility Customer Responsibility actions https://awsamazoncom/cloudtrail/ https://awsamazoncom/documentation/cloudtr ail/ http://docsawsamazoncom/AmazonVPC/late st/UserGuide/flow logshtml (f) Use of operational system checks to enforce permitted sequencing of steps and events as appropriate Not appl icable to AWS – this requirement only applies to the customer’s system You are responsible for configuring establishing and verifying enforcement of permitted sequencing of steps and events within the regulated environment (g) Use of authority checks to ensure that only authorized individuals can use the system electronically si gn a record access the operation or computer system input or output device alter a record or perform the operation at hand Not applicable to AWS – this requirement only applies to the customer’s system AWS provides you with the ability to configure and use the AWS service offerings in order to maintain appropriate security protection and backup of content which may 
include the use of encryption technology to protect your content from unauthorized access You maintain full control and responsibility for establishing and verifying configuration of access to your data and AWS accounts as well as periodic review of access to data and resources Using AWS Identity and Access Management (IAM) a web service that allows you to securely control access to A WS resources you must control who can access and use your data and AWS resources (authentication) and what data and resources they can use and in what ways (authorization) IAM is a feature of all AWS accounts offered at no additional charge You will be charged only for use of other AWS services by your users https://awsamazoncom/iam/ IAM Best Practices can be found here: http://docsawsamazoncom/IAM/latest/UserG uide/best practiceshtml Amazon Web Services GxP Systems on AWS 66 21 CFR Subpart AWS Responsibility Customer Responsibility (h) Use of device (eg terminal) checks to determine as appropriate the validit y of the source of data input or operational instruction Not applicable to AWS – this requirement only applies to the customer’s system You are responsible for establishing and verifying the source of the data input into your system is valid whether ma nually or for example by enforcing only certain input devices or sources are utilized (i) Determination that persons who develop maintain or use electronic record/electronic signature systems have the education training and experience to perform t heir assigned tasks AWS has implemented formal documented training policies and procedures that address purpose scope roles responsibilities and management commitment AWS maintains and provides security awareness training to all information system u sers on an annual basis The policy is disseminated through the internal Amazon communication portal to all employees Relevant SOC2 Common Criteria: CC13 CC14 CC22 CC23 You are responsible for ensuring your AWS users — including IT staff developers validation specialists and IT auditors —review the AWS product documentation and complete the product training programs you have determined are appropriate for your personnel AWS products are extensively documen ted online https://awsamazoncom/documentation/ and a wide range of user training and certification resources are available including introductory labs videos self paced online courses instructor lead training and AWS Certification https://awsamazoncom/training/ Adequacy of training programs for your personnel as well as maintenance of documentation of personnel training and qualifications (such as training record job description and resumes) are your responsibility (j) The establishment of and adherence to written policies that hold individuals accountable and responsible for actions initiated under their electronic signatures in order to d eter record and signature falsification Not applicable to AWS – this requirement only applies to the customer’s system Establishment and enforcement of policies to hold personnel accountable and responsible for actions initiated under their electronic signatures is your responsibility including training and associated documentation (k) Use of appropriate controls over systems documentation including: Amazon Web Services GxP Systems on AWS 67 21 CFR Subpart AWS Responsibility Customer Responsibility (1) Adequate controls over the distribution of access to and use of documentation for system operation and maintenance AWS maintains formal documented policies and procedures that 
provide guidance for operations and i nformation security within the organization and the supporting AWS environments Policies are maintained in a centralized location that is only accessible by employees Security p olicies are reviewed and approved on an annual basis by Security Leadership and are assessed by third party auditors as part of our audits Refer to SOC2 Common Criteria CC22 CC23 CC53 You are responsible to establish and maintain your own controls over the distribution access and use of documentation and documentation systems for system operation and maintenance Amazon Web Services GxP Systems on AWS 68 21 CFR Subpart AWS Responsibility Customer Responsibility (2) Revision and change control procedures to maintain an audit trail that documents timesequenced development and modification of systems documentation AWS policies and procedures go through processes for appro val version control and distribution by the appropriate personnel and/or members of management These documents are reviewed periodically and when necessary supporting data is evaluated to ensure the document fulfills its intended use Revisions are re viewed and approved by the team that owns the document unless otherwise specified Invalid or obsolete documents are identified and removed from use Internal policies are reviewed and approved by AWS leadership at least annually or following a significa nt change to the AWS environment Where applicable AWS Security leverages the information system framework and policies established and maintained by Amazon Corporate Information Security AWS service documentation is maintained in a publicly accessible online location so that the most current version is available by default https://awsamazoncom/documentation/ Refer to the AWS SOC 2 Report CC23 CC34 CC67 CC81 You are responsible for changes to your computerized systems running within your AWS accounts System components must be authorized designed developed configured documented tested approved and implemented according to your security and availability com mitments and system requirements Using Service Offerings such as AWS Config you can manage and record your AWS resource inventory configuration history and configuration change notifications to enable security and governance AWS Config Rules also enab les you to create rules that automatically check the configuration of AWS resources recorded by AWS Config https://awsamazoncom/documentation/config/ Change records and associated logs within your environment may be retained according to your record retention schedule You are responsible for storing managing and tracking electronic documents in your AWS account and as part of your overall quality management system including maintaining an audit trail that documents time sequenced development and modification of systems documentation Amazon Web Services GxP Systems on AWS 69 21 CFR Subpart AWS Responsibility Customer Responsibility §1130 Controls for open systems Persons who use open systems to create modify maintain or t ransmit electronic records shall employ procedures and controls designed to ensure the authenticity integrity and as appropriate the confidentiality of electronic records from the point of their creation to the point of their receipt Such procedures a nd controls shall include those identified in §1110 as appropriate and additional measures such as document encryption and use of appropriate digital signature standards to ensure as necessary under the circumstances record authenticity integrity an d 
confidentiality Industry standard controls and procedures are in place to protect and maintain the authenticity integrity and confidentiality of customer data Refer to the AWS SOC 2 Report C11 C12 You are responsible for determining whether your use of AWS services within your environment meets the definition of an open or closed system and whether these requirements apply Refer to the responsibilities in §1110 above for more information for recommended procedures and controls Additional measure s such as document encryption and use of appropriate digital signature standards are your responsibility to maintain data integrity authenticity and confidentiality §1150 Signature manifestations (a) Signed electronic records shall contain information associated with the signing that clearly indicates all of the following: (1) The printed name of the signer; (2) The date and time when the signature was executed; and (3) The meaning (such as review approval responsibility or authorship) as sociated with the signature (b) The items identified in paragraphs (a)(1) (a)(2) and (a)(3) of this section shall be subject to the same controls as for electronic records and shall be included as part of any human readable form of the electronic record (such as electronic display or printout) Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your applications meet the signed electronic records requirements iden tified Amazon Web Services GxP Systems on AWS 70 21 CFR Subpart AWS Responsibility Customer Responsibility §1170 Signature/ record linking Electronic signatures and handwritten signatures executed to electronic records shall be linked to their respective electronic records to ensure that the signatures cannot be excised copied or otherwise transferred to falsify an electronic record by ordinary means Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your application s/systems meet the signature/record linking requirements identified including any required policies and procedures Subpart C —Electronic Signatures §11100 General requirements (a) Each electronic signature shall be unique to one individual and shall no t be reused by or reassigned to anyone else Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your applications/systems meet the general electronic signature re quirements identified including any required policies and procedures to enforce electronic signature governance (b) Before an organization establishes assigns certifies or otherwise sanctions an individual's electronic signature or any element of su ch electronic signature the organization shall verify the identity of the individual Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your applications/systems meet the general electronic signature requirements identified including any required policies and procedures to verify individual identity prior to use of an electronic signature Amazon Web Services GxP Systems on AWS 71 21 CFR Subpart AWS Responsibility Customer Responsibility (c) Persons using electronic signatures shall prior to or at the time of such use certify to the agency that the electronic signatures in their system used on or after August 20 1997 are intended to be 
the legally binding equivalent of traditional handwritten signatures (1) The certification shall be submitted in paper fo rm and signed with a traditional handwritten signature to the Office of Regional Operations (HFC 100) 5600 Fishers Lane Rockville MD 20857 (2) Persons using electronic signatures shall upon agency request provide additional certification or testimony that a specific electronic signature is the legally binding equivalent of the signer's handwritten signature Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establis hing and verifying that your applications/systems meet the general electronic signature requirements identified including determining whether any required notification to the agency is required and documenting accordingly §11200 Electronic signature c omponents and controls (a) Electronic signatures that are not based upon biometrics shall: Not applicabl e to AWS – this requirement only applies to the customer’s applications Amazon Web Services GxP Systems on AWS 72 21 CFR Subpart AWS Responsibility Customer Responsibility (1) Employ at least two distinct identification components such as an identification code and password (i) When an individual executes a series of signings duri ng a single continuous period of controlled system access the first signing shall be executed using all electronic signature components; subsequent signings shall be executed using at least one electronic signature component that is only executable by a nd designed to be used only by the individual (ii) When an individual executes one or more signings not performed during a single continuous period of controlled system access each signing shall be executed using all of the electronic signature compone nts (2) Be used only by their genuine owners; and (3) Be administered and executed to ensure that attempted use of an individual's electronic signature by anyone other than its genuine owner requires collaboration of two or more individuals You are responsible for establishing and verifying that your applications/systems meet the electronic signature components and controls identified including establishing the procedu res for use of identifying components and use by genuine owners (b) Electronic signatures based upon biometrics shall be designed to ensure that they cannot be used by anyone other than their genuine owners Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your applications/systems meet the electronic signature components and controls identified including establishing the procedures for use by genuine owners Amazon Web Services GxP Systems on AWS 73 21 CFR Subpart AWS Responsibility Customer Responsibility §11300 Controls for identification codes/passwords Persons who use electronic signatures based upon use of identification codes in combination with passwords shall employ controls to ensure their security and integrity Such controls shall include: (a) Maintaining the uniqueness of each combined identification code and password such that no two individuals have the same combination of identification code and password Not applicable to AWS – this requirement only applies to the customer’s applicatio ns You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified including establishing the procedures and controls for uniqueness of password and ID code 
combinations (b) Ensuring that identification code and password issuances are periodically checked recalled or revised (eg to cover such events as password aging) Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for es tablishing and verifying that your applications/systems meet the electronic signature controls identified including establishing the procedures and controls for periodic review of password issuance (c) Following loss management procedures to electronica lly deauthorize lost stolen missing or otherwise potentially compromised tokens cards and other devices that bear or generate identification code or password information and to issue temporary or permanent replacements using suitable rigorous contro ls Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified including establishing the proce dures and controls for loss management of compromised devices that generate ID code or passwords Amazon Web Services GxP Systems on AWS 74 21 CFR Subpart AWS Responsibility Customer Responsibility (d) Use of transaction safeguards to prevent unauthorized use of passwords and/or identification codes and to detect and report in an immediate and urgent manner any attempts at their unauthorized use to the system security unit and as appropriate to organizational management Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified including establishing the procedures and controls to prevent detect and report unauthorized use of ID codes and/or passwords (e) Initial and periodic testi ng of devices such as tokens or cards that bear or generate identification code or password information to ensure that they function properly and have not been altered in an unauthorized manner Not applicable to AWS – this requirement only applies to th e customer’s applications You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified including establishing the procedures and controls to periodically test devices that generate I D codes or passwords for proper functionality [1] In computing JSON (JavaScript Object Notation) is the open standard syntax used for AWS CloudFormation templates https://awsamazonc om/documentation/cloudformation/ [2] https://wwwcontinuousvalidationcom/what iscontinuous validation/
General
Provisioning_Oracle_Wallets_and_Accessing_SSLTLSBased_Endpoints_on_Amazon_RDS_for_Oracle
Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle February 2018 Copyright 2018 Amazoncom Inc or its affiliates All Rights Reserved Notices Licensed under the Apache License Version 20 (the "License") You may not use this file except in compliance with the License A copy of the License is located at http://awsamazoncom/apache20/ or in the "license" file accompanying this file This file is distributed on an "AS IS" BASIS WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND either express or implied See the License for the specific language governing permissions and limitations under the License This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own in dependent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations c ontractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agre ement between AWS and its customers Contents Introduction 1 Creating and Uploading Custom Oracle Wallets 2 Creating and Uploading a Wallet with an Amazon S3 Certificate 3 Uploading a Customized Wallet Bundle 5 Examples of Using Oracle Wallets to Establish SSL/TLS Outbound Connections 6 Using UTL_HTTP over an SSL/TLS Endpoint 7 Establishing Database Links between RDS Oracle DB Instances over an SSL/TLS Endpoint 7 Sending Emails Using UTL_SMTP and Amazon Simple Email Service (Amazon SES) 7 Downloading a File fr om Amazon S3 to an RDS Oracle DB Instance 8 Uploading a File from RDS Oracle DB Instance to Amazon S3 8 Conclusion 9 Appendi x 9 Sample PL/SQL Procedure to Download Artifacts from Amazon S3 9 Sample PL/SQL Procedure to Send an Email Through Amazon SES 12 Abstract This paper explain s how to extend outbound network access on your Amazon Relational Database Service (Amazon RDS) for Oracle database instances to connect securely to remote SSL/TLS based endpoints SSL/TLS endpoints require one or more valid Certificate Authority (CA) certificates that can be bundled within an Oracle wallet By uploading Oracle wallets to your Amazon RDS for Oracle DB instances certain ou tbound network calls can be made aware of the uploaded Oracle wallets This enables outbound network traffic to access any SSL/TLS based endpoint that can be validated using the CA certificate bundle within the Oracle wallets Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 1 Introduction Amazon Relational Database Service (Amazon RDS ) is a managed relational database service that provides you with six familiar database engines to choose from including Amazon Aurora MySQL MariaDB Oracle Microsof t SQL Server and PostgreSQL1 You can use your existing database code applications and tools with Amazon RDS and RDS will handle routine database tasks such as provisioning patching backup recovery failure detection and repair With Amazon RDS you can use replication to enhance availability and reliability for production workloads Using the Multi AZ deployment option you can run mission critical workloads 
with high availability and built-in automated failover from your primary database to a synchronously replicated secondary database.
Amazon RDS for Oracle provides scalability, performance monitoring, and backup and restore support. Multi-AZ deployment for Oracle DB instances simplifies creating a highly available architecture, because a Multi-AZ deployment contains built-in support for automated failover from your primary database to a synchronously replicated secondary database in a different Availability Zone. Amazon RDS for Oracle provides the latest version of Oracle Database with the latest Patch Set Updates (PSUs). Amazon RDS manages the database upgrade process on your schedule, eliminating manual database upgrade and patching tasks.
Amazon Virtual Private Cloud (Amazon VPC) is a virtual network dedicated to your AWS account.2 It is logically isolated from other virtual networks in the AWS Cloud. You can launch AWS resources such as Amazon RDS DB instances or Amazon Elastic Compute Cloud (Amazon EC2) instances into your VPC.3 When you create a VPC, you specify IP address ranges, subnets, routing tables, and network gateways to your own data center and to the internet. You can move RDS DB instances that are not already in a VPC into an existing VPC.4
Outbound network access is only supported for Oracle DB instances in a VPC.5 Using outbound network access, you can use PL/SQL code inside the database to initiate connections to servers elsewhere on the network. This lets you use utilities such as UTL_HTTP, UTL_TCP, and UTL_SMTP to connect your DB instance to remote endpoints. For example, you can use UTL_MAIL or UTL_SMTP to send emails, or UTL_HTTP to communicate with external web servers. By default, an Amazon DNS server provides name resolution for outbound traffic from the instances in your VPC. Should you choose to resolve private domain names for outbound traffic, you can configure a custom DNS server.6
Always take care when enabling outbound networking, as attackers can use it as a vector to remove data from your systems. In addition to other security best practices, keep the following in mind:
• Carefully configure VPC security groups to only allow ingress from and egress to known networks (a brief CLI sketch follows this introduction)
• Use in-database network access control lists (ACLs) to allow only trusted users to initiate connections out of the database
• Always upgrade to the latest release of Amazon RDS for Oracle to ensure you have the latest Oracle PSUs and security fixes
To protect the integrity and content of your data, you should use Transport Layer Security (TLS, also referred to as Secure Sockets Layer or SSL) to provide encryption and server verification. By default, outbound network access supports only external traffic over and to non-TLS/SSL mediums. For TLS/SSL-based traffic, you can use Oracle wallets to store Certificate Authority (CA) certificates, which enable the verification of remote entities. You can make utilities that use outbound network access traffic (such as UTL_HTTP and UTL_SMTP) aware of these wallets. This enables outbound communication from your DB instance to remote endpoints over SSL.
In this paper we discuss how to create Oracle wallets and copy them to an Amazon RDS for Oracle DB instance using Amazon S3. We also demonstrate how to use a wallet to protect calls made using UTL_HTTP and UTL_SMTP utilities.
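The security group recommendation above can be expressed directly with the AWS CLI. The following is a minimal sketch, not a complete configuration: the VPC ID, security group ID, and the 203.0.113.0/24 range are placeholders that you would replace with your own VPC and the specific networks your database actually needs to reach.

# Create a dedicated security group for the Oracle DB instance's network traffic
aws ec2 create-security-group --group-name rds-oracle-outbound --description "Outbound rules for RDS Oracle" --vpc-id vpc-0123456789abcdef0

# Remove the default allow-all egress rule from the new group
aws ec2 revoke-security-group-egress --group-id sg-0123456789abcdef0 --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'

# Allow egress only to a known network range over HTTPS
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --ip-permissions '[{"IpProtocol":"tcp","FromPort":443,"ToPort":443,"IpRanges":[{"CidrIp":"203.0.113.0/24","Description":"Known SSL/TLS endpoints"}]}]'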
Creating and Uploading Custom Oracle Wallets
To enable SSL/TLS connections from PL/SQL, you can upload custom Oracle wallets to your Amazon RDS for Oracle DB instances. These wallets can contain public and private certificates used to access SSL/TLS-based endpoints from your RDS Oracle DB instances. First, you create an initial Oracle wallet containing an Amazon S3 certificate as a one-time setup. Then you can securely upload any number of wallets to Amazon RDS for Oracle DB instances through Amazon S3.
Creating and Uploading a Wallet with an Amazon S3 Certificate
1. Download the Baltimore CyberTrust Root certificate.7
2. Convert the certificate to the x509 PEM format:

openssl x509 -inform der -in BaltimoreCyberTrustRoot.crt -outform pem -out BaltimoreCyberTrustRoot.pem

3. Using the orapki utility,8 create a wallet and add the certificate. This exports the wallet to a file named cwallet.sso. Alternatively, if you don't specify an auto-login wallet, you can use ewallet.p12; in this case, PL/SQL applications must provide a password when opening the wallet.

orapki wallet create -wallet . -auto_login_only
orapki wallet add -wallet . -trusted_cert -cert BaltimoreCyberTrustRoot.pem -auto_login_only
orapki wallet display -wallet .

4. Using high-level aws s3 commands with the AWS Command Line Interface (CLI),9 create an S3 bucket (or use an existing bucket) and upload the wallet artifact:

aws s3 mb s3://<bucket-name>
aws s3 cp cwallet.sso s3://<bucket-name>/

5. Generate a presigned URL for the wallet artifact. By default, presigned URLs are valid for an hour; however, you can set the expiration explicitly.10

aws s3 presign s3://<bucket-name>/cwallet.sso

6. Import the procedure provided in the Appendix into your RDS for Oracle DB instance.
7. Using this procedure, download the wallet from the S3 bucket.
a. Create a directory for this initial wallet (be sure to always store each wallet in its own directory):

exec rdsadmin.rdsadmin_util.create_directory('S3_SSL_WALLET');

b. Whitelist outbound traffic on Oracle's ACL (using the 'user' defined earlier):

BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl         => 's3.xml',
    description => 'AWS S3 ACL',
    principal   => UPPER('&user'),
    is_grant    => TRUE,
    privilege   => 'connect');
  COMMIT;
END;
/
BEGIN
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl  => 's3.xml',
    host => '*.amazonaws.com');
  COMMIT;
END;
/

c. Using the procedure above, fetch the wallet artifact uploaded earlier to the S3 bucket. Replace the p_s3_url value with the presigned URL generated in step 5 (after changing it to HTTP instead of HTTPS). Although access to this S3 wallet artifact is presigned, it must be over HTTP.

set define #;
BEGIN
  s3_download_presigned_url (
    p_s3_url          => '<URL from step 5>',
    p_local_filename  => 'cwallet.sso',
    p_local_directory => 'S3_SSL_WALLET'
  );
END;
/

8. Set the S3_SSL_WALLET path above for utl_http transactions:

DECLARE
  l_wallet_path all_directories.directory_path%type;
BEGIN
  select directory_path into l_wallet_path
  from all_directories
  where upper(directory_name)='S3_SSL_WALLET';
  utl_http.set_wallet('file:/' || l_wallet_path);
END;
/

At this point you can use the wallet to access any artifact (not limited to Oracle wallets) from Amazon S3 over SSL/TLS, as long as you point to the wallet directory specified above.
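If you set up wallets for more than one DB instance, steps 1 through 5 above can be collected into a small script. The following is a minimal sketch, assuming the Baltimore CyberTrust Root certificate has already been downloaded into the working directory and that <bucket-name> is replaced with your own bucket; the 3600-second expiration is only an illustrative value.

#!/bin/bash
set -e

# Convert the downloaded root certificate to PEM format
openssl x509 -inform der -in BaltimoreCyberTrustRoot.crt -outform pem -out BaltimoreCyberTrustRoot.pem

# Create an auto-login wallet and add the trusted certificate
orapki wallet create -wallet . -auto_login_only
orapki wallet add -wallet . -trusted_cert -cert BaltimoreCyberTrustRoot.pem -auto_login_only

# Upload the wallet artifact and print a presigned URL valid for one hour
aws s3 cp cwallet.sso s3://<bucket-name>/
aws s3 presign s3://<bucket-name>/cwallet.sso --expires-in 3600

Remember to change the printed URL from https:// to http:// before passing it to s3_download_presigned_url, as noted in step 7c.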
Uploading a Customized Wallet Bundle
With the capability we've described in the previous procedure, you can also download customized Oracle wallets (containing customized selections of public or private CA certificates). For example, you can create a new Oracle wallet containing a wallet bundle of your choice, upload it to an S3 bucket, and use one of the previous procedures to securely download this wallet to an Amazon RDS for Oracle DB instance.
1. Create a new directory (named MY_WALLET, for example) for this new wallet bundle:

exec rdsadmin.rdsadmin_util.create_directory('MY_WALLET');

2. Download the new wallet artifacts from the S3 bucket to the new directory. Notice that we pass the S3_SSL_WALLET directory from the initial setup above to validate against the S3 bucket certificate; the download is requested over HTTPS.

BEGIN
  s3_download_presigned_url (
    '<S3 URL>',
    p_local_filename   => 'cwallet.sso',
    p_local_directory  => 'MY_WALLET',
    p_wallet_directory => 'S3_SSL_WALLET'
  );
END;
/

3. Run this procedure to use the newly uploaded wallet (for example, with UTL_HTTP):

DECLARE
  l_wallet_path all_directories.directory_path%type;
BEGIN
  select directory_path into l_wallet_path
  from all_directories
  where upper(directory_name)='MY_WALLET';
  utl_http.set_wallet('file:/' || l_wallet_path);
END;
/

Similarly, you can upload and use any generic wallet where it's needed.
Examples of Using Oracle Wallets to Establish SSL/TLS Outbound Connections
Oracle wallets containing CA certificate bundles allow SSL/TLS-based outbound traffic to access any endpoint that can validate itself against one of the CA certificates in the bundle. Here are a few examples of how you can use wallets to establish SSL/TLS outbound connections.
Using UTL_HTTP over an SSL/TLS Endpoint
Once you create a wallet, accessing an endpoint over SSL/TLS requires setting the wallet path. In this example, robots.txt from status.aws.amazon.com is accessed with an Oracle wallet containing Amazon's CA certificate (obtained from https://www.amazontrust.com/repository).

BEGIN
  utl_http.set_wallet('file:/rdsdbdata/userdirs/02');
END;
/

select utl_http.request('https://status.aws.amazon.com/robots.txt') as ROBOTS_TXT from dual;

ROBOTS_TXT
User-agent: *
Allow: /

Establishing Database Links between RDS Oracle DB Instances over an SSL/TLS Endpoint
Database links can be established between RDS Oracle DB instances over an SSL/TLS endpoint, as long as the SSL option is configured for each instance.11 No further setup is required.
Sending Emails Using UTL_SMTP and Amazon Simple Email Service (Amazon SES)
You can use Amazon SES to send emails on UTL_SMTP over SSL/TLS:
1. Obtain the relevant AWS Region endpoint and credentials from Amazon SES.12
2. Obtain the Verisign/Symantec-based CA certificates.13
3. Create or update an existing wallet containing the relevant certificate. For this example, assume that the wallet has been uploaded to a directory called SES_SSL_WALLET created through the RDSADMIN utility.
Using your Amazon SES SMTP credentials, send an email through UTL_SMTP, as shown in the sample procedure in the Appendix.
Downloading a File from Amazon S3 to an RDS Oracle DB Instance
Using a utility similar to the s3_download_presigned_url procedure, you can download files from Amazon S3. For example:

BEGIN
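  -- The bucket, key, and local directory below are placeholders; p_wallet_directory
  -- points at the wallet created earlier so the HTTPS endpoint can be validated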
s3_download_presigned_url ( 'https:// <bucketname>s3amazonawscom/ <sub directory> /<file>?AWSAccessKeyId=' p_local_filename => ' <localfilename> ' p_local_directory => ' <targetlocaldirectory> ' p_wallet_directory => 'S3_SSL_ WALLET' ); END; / Uploading a File from RDS Oracle DB Instance to Amazon S3 Uploading an artifact from your database instance to Amazon S3 is possible through HTTP PUT multipart requests using AWS Signature Version 4 signing 14 Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 9 Conclusion In this paper we explained how to create Oracle wallets containing CA certificate bundles and copy them to Amazon RDS for Oracle DB instances We also provided a few examples that show ed how you can use wallets to establish SSL/TLS based outbound connections You can ex tend t he steps highlighted in this paper to access any secure endpoint fro m your Amazon RDS Oracle DB instances Appendix Sample PL/SQL Procedure to Download Artifacts from Amazon S3 Define your user here define user='admin'; Directgrant required privs BEGIN rdsadminrdsadmin_utilgrant_sys_object('DBA_DIRECTORIES' UPPER('&user')); END; / BEGIN rdsadminrdsadmin_utilgrant_sys_object('UTL_HTTP' UPPER('&user')); END; / BEGIN rdsadminrdsadmin_utilgrant_sys_object('UTL_FILE' UPPER('&user')); END; Example download procedure CREATE OR REPLACE PROCEDURE s3_download_presigned_url ( p_s3_url IN VARCHAR2 Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 10 p_local_filename IN VARCHAR2 p_local_directory IN VARCHAR2 p_wallet_directory IN VARCHAR2 DEFAULT NULL ) AS Local variables l_req utl_httpreq; l_wallet_path VARCHAR2(4000); l_fh utl_filefile_type; l_resp utl_httpresp; l_data raw(32767); l_file_size NUMBER; l_file_exists BOOLEAN; l_block_s ize BINARY_INTEGER; l_http_status NUMBER; Userdefined exceptions e_https_requires_wallet EXCEPTION; e_wallet_dir_invalid EXCEPTION; e_http_exception EXCEPTION; BEGIN Validate input IF (regexp_like(p_s3_url '^https:' 'i') AND p_wallet_directory IS NULL) THEN raise e_https_requires_wallet; END IF; Use wallet if specified IF (p_wallet_directory IS NOT NULL) THEN BEGIN SELECT directory_path INTO l_wallet_path FROM dba_directories WHERE upper(directory_name)=upper(p_wallet_directory); utl_httpset_wallet('file:' || l_wallet_path); EXCEPTION WHEN NO_DATA_FOUND THEN raise e_wallet_dir_invalid; END; END IF; Do HTTP request BEGIN Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 11 l_req := utl_httpbegin_request(p_s3_url 'GET' 'HTTP/11'); l_fh := utl_filefopen(p_local_directory p_local_filename 'wb' 32767); l_resp := utl_httpget_response(l_req); If we get HTTP error code write that instead l_http_s tatus := l_respstatus_code; IF (l_http_status != 200) THEN dbms_outputput_line('WARNING: HTTP response ' || l_http_status || ' ' || l_respreason_phrase || ' Details in ' || p_local_filename ); END IF; Loop over response and write to file BEGIN LOOP utl_httpread_raw(l_resp l_data 32766); utl_fileput_raw(l_fh l_data true); END LOOP; EXCEPTION WHEN utl_httpend_of_body THEN utl_httpend_respon se(l_resp); END; Get file attributes to see what we did utl_filefgetattr( location => p_local_directory filename => p_local_filename fexists => l_file_exists file_length => l_file_size block_size => l_block_size ); utl_filefclose(l_fh); dbms_outputput_line('wrote ' || l_file_size || ' bytes'); EXCEPTION WHEN OTHERS THEN 
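      -- Cleanup path: close the HTTP response and the local file handle,
      -- print the error stack and backtrace, then re-raise the exception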
utl_httpend_response(l_resp); utl_filefclose(l_fh); dbms_outputput_line(dbms_utilityform at_error_stack()); Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 12 dbms_outputput_line(dbms_utilityformat_error_backtrace()); raise; END; EXCEPTION WHEN e_https_requires_wallet THEN dbms_outputput_line('ERROR: HTTPS requires a valid wallet location'); WHEN e_wallet_dir_invalid THEN dbms_outputput_line('ERROR: wallet directory not found'); WHEN others THEN raise; END s3_download_presigned_url; / Sample PL/SQL Procedure to Send an Email Through Amazon SES declare l_smtp_server va rchar2(1024) := 'email smtpuswest 2amazonawscom'; l_smtp_port number := 587; l_wallet_dir varchar2(128) := 'SES_SSL_WALLET'; l_from varchar2(128) := 'user@lorem ipsumdolar'; l_to varchar2(128) := 'user@lorem ipsumdolar'; l_user varchar2(12 8) := '<USERNAME>'; l_password varchar2(128) := '<PASSWORD>'; l_subject varchar2(128) := 'Test subject'; l_wallet_path varchar2(4000); l_conn utl_smtpconnection; l_reply utl_smtpreply; l_replies utl_smtpreplies; begin select 'file:/' || directory_path into l_wallet_path from dba_directories where directory_name=l_wallet_dir; Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 13 open a connection l_reply := utl_smtpopen_connection( host => l_smtp_server port => l_smtp_port c => l_conn wallet_path => l_wallet_path secure_connection_before_smtp => false ); dbms_outputput_line('opened connection received reply ' || l_replycode || '/' || l_replytext); get supported configs from server l_replies := utl_smtpehlo(l_conn 'localhost'); for r in 1l_repliescount loop dbms_outputput_line('ehlo (server config) : ' || l_replies(r)code || '/' || l_replies(r)text); end loop; STARTTLS l_reply := utl_smtpstarttls(l_conn); dbms_outputput_line('starttls received reply ' || l_replycode || '/' || l_replytext); l_replies := utl_smtpehlo(l_conn 'localhost'); for r in 1l_repliescount loop dbms_outputput_line('ehlo (server config) : ' || l_replies(r)c ode || '/' || l_replies(r)text); end loop; utl_smtpauth(l_conn l_user l_password utl_smtpall_schemes); utl_smtpmail(l_conn l_from); utl_smtprcpt(l_conn l_to); utl_smtpopen_data l_conn); utl_smtpwrite_data(l_conn 'Date: ' || to_char(SYSDATE 'DD MONYYYY HH24:MI:SS') || utl_tcpcrlf); utl_smtpwrite_data(l_conn 'From: ' || l_from || utl_tcpcrlf); utl_smtpwrite_data(l_conn 'To: ' || l_to || utl_tcpcrlf); utl_smtpwrite_data(l _conn 'Subject: ' || l_subject || utl_tcpcrlf); Amazon Web Services – Provisioning Oracle Wallets and Accessing SSL/TLS Based Endpoints on Amazon RDS for Oracle Page 14 utl_smtpwrite_data(l_conn '' || utl_tcpcrlf); utl_smtpwrite_data(l_conn ' Test message ' || utl_tcpcrlf); utl_smtpclose_data(l_conn); l_reply := utl_smtpquit(l_conn); exception when oth ers then utl_smtpquit(l_conn); raise; end; / 1 https://awsamazoncom/rds/ 2 https://awsamazoncom/vpc/ 3 https://awsamazon com/ec2/ 4 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/USER_VPCWo rkingWithRDSInstanceinaVPChtml#USER_VP CNon VPC2VPC 5 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/CHAP_Oracleh tml#OracleConceptsONA 6 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AppendixOracl eCommonDBATasksSystemhtml#Ap pendixOracleCommonDBATasksCust omDNS 7 https://wwwdigicertcom/digicert root certificateshtm 8 https://docsoraclecom/database/121/DBSEG/asoappfhtm#DBSEG610 9 http://docsawsamazoncom/cli/latest/userguide/using s3commandshtml 10 
http://docs.aws.amazon.com/cli/latest/reference/s3/presign.html
11 https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.SSL.html
12 https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-smtp.html
13 https://www.symantec.com/theme/roots
14 https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-authentication-HTTPPOST.html
General
AWS_Security_Best_Practices
ArchivedAWS Security Best Practices August 2016 This paper has been archived For the latest technical content on Security and Compliance see https://awsamazoncom/architecture/ securityidentitycompliance/ArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Know the AWS Shared Responsibility Model 2 Understanding the AWS Secure Global Infrastructure 3 Sharing Security Responsibility for AWS Services 4 Using the Trusted Advisor Tool 10 Define and Categorize Assets on AWS 10 Design Your ISMS to Protect Your Assets on AWS 11 Manage AWS Accounts IAM Users Groups and Roles 13 Strategies for Using Multiple AWS Ac counts 14 Managing IAM Users 15 Managing IAM Groups 15 Managing AWS Credentials 16 Understa nding Delegation Using IAM Roles and Temporary Security Credentials 17 Managing OS level Access to Amazon EC2 Instances 20 Secure Your Data 22 Resource Access Authorization 22 Storing and Managing Encryption Keys in the Cloud 23 Protecting Data at Rest 24 Decommission Data and Media Securely 31 Protect Data in Transit 32 Secure Your Operating Systems and Applications 38 Creating Custom AMIs 39 Bootstrapping 41 Managing Patches 42 Controlling Security for Public AMIs 42 Protecting Your System from Malware 42 ArchivedMitigating Compromise and Abuse 45 Using Additional Application Security Practices 48 Secure Your Infrastructure 49 Using Amazon Virtual Private Cloud (VPC) 49 Using Security Zoning and Network Segmentation 51 Strengthening Network Security 54 Securing Periphery Systems: User Repositories DNS NTP 55 Building Threat Protection Layers 57 Test Security 60 Managing Metrics and Improvement 61 Mitigating and Protecting Against DoS & DDoS Attacks 62 Manage Security Monitoring Alerting Audit Trail and Incident Response 65 Using Change Management Logs 68 Managing Logs for Critical Transactions 68 Protecting Log Information 69 Logging Faults 70 Conclusion 70 Contributors 70 Further Reading 70 Document Revisions 71 ArchivedAbstract This whitepaper is intended f or existing and potential customers who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS) It provides security best practices that will help you define your Information Security Management Sy stem (ISMS) and build a set of security policies and processes for your organization so you can protect your data and assets in the AWS Cloud The whitepaper also provides an overview of different security topics such as identifying categorizing and prote cting your assets on AWS managing access to AWS resources using accounts users and groups and suggesting ways you can secure your data your operating systems and applications and overall infrastructure in the cloud The paper is targeted at IT decision makers and security personnel and assumes that 
you are familiar with basic security concepts in the area of networking operating systems data encryption and operational controls ArchivedAmazon Web Services AWS Security Be st Practices Page 1 Introduction Information security is of paramount importance to Amazon Web Services (AWS) customers Security is a core functional requirement that protects mission critical information from accidental or deliberate theft leakage integrity compromise and deletion Under the AWS shared respon sibility model AWS provides a global secure infrastructure and foundation compute storage networking and database services as well as higher level services AWS provides a range of security services and features that AWS customers can use to secure the ir assets AWS customers are responsible for protecting the confidentiality integrity and availability of their data in the cloud and for meeting specific business requirements for information protection For more information on AWS’s security features please read Overview of Security Processes Whitepaper This whitepaper describes best practices that you can leverage to build and define an Information Security Management System (ISMS) that is a collection of information security policies and processes for your organization’s assets on AWS For more inform ation about ISMSs see ISO 27001 at https://wwwisoorg/standard/54534html Although it is not required to build an ISMS to use AWS we think that the structured approach for managing information sec urity that is built on basic building blocks of a widely adopted global security approach will help you improve your organization’s overall security posture We address the following topics: • How security responsibilities are shared between AWS and you the customer • How to define and categorize your assets • How to manage user access to your data using privileged accounts and groups • Best practices for securing your data operating systems and network • How monitoring and alerting can help you achieve your secur ity objectives This whitepaper discusses security best practices in these areas at a high level (It does not provide “how to” configuration guidance For service specific configuration guidance see the AWS Security Documentation ) ArchivedAmazon Web Services AWS Security Best Practices Page 2 Know the AWS Shared Responsibility Model Amazon Web Services provides a secure global infrastructure and services in the cloud You can build your systems using AWS as the foundation and architect an ISMS that takes advantag e of AWS features To design an ISMS in AWS you must first be familiar with the AWS shared responsibility model which requires AWS and customers to work together towards security objectives AWS provides secure infrastructure and services while you the customer are responsible for secure operating systems platforms and data To ensure a secure global infrastructure AWS configures infrastructure components and provides services and features you can use to enhance security such as the Identity and Ac cess Management (IAM) service which you can use to manage users and user permissions in a subset of AWS services To ensure secure services AWS offers shared responsibility models for each of the different type of service that we offer : • Infrastructure se rvices • Container services • Abstracted services The shared responsibility model for infrastructure services such as Amazon Elastic Compute Cloud (Amazon EC2) for example specifies that AWS manages the security of the following assets: • Facilities • Physical s 
ecurity of hardware
• Network infrastructure
• Virtualization infrastructure
Consider AWS the owner of these assets for the purposes of your ISMS asset definition. Leverage these AWS controls and include them in your ISMS.
In this Amazon EC2 example, you as the customer are responsible for the security of the following assets:
• Amazon Machine Images (AMIs)
• Operating systems
• Applications
• Data in transit
• Data at rest
• Data stores
• Credentials
• Policies and configuration
Specific services further delineate how responsibilities are shared between you and AWS. For more information, see https://aws.amazon.com/compliance/shared-responsibility-model/
Understanding the AWS Secure Global Infrastructure
The AWS secure global infrastructure and services are managed by AWS and provide a trustworthy foundation for enterprise systems and individual applications. AWS establishes high standards for information security within the cloud and has a comprehensive and holistic set of control objectives, ranging from physical security through software acquisition and development to employee lifecycle management and security organization. The AWS secure global infrastructure and services are subject to regular third-party compliance audits. See the Amazon Web Services Risk and Compliance whitepaper for more information.
Using the IAM Service
The IAM service is one component of the AWS secure global infrastructure that we discuss in this paper. With IAM, you can centrally manage users, security credentials such as passwords and access keys, and permissions policies that control which AWS services and resources users can access.
When you sign up for AWS, you create an AWS account for which you have a user name (your email address) and a password. The user name and password let you log into the AWS Management Console, where you can use a browser-based interface to manage AWS resources. You can also create access keys (which consist of an access key ID and secret access key) to use when you make programmatic calls to AWS using the command line interface (CLI), the AWS SDKs, or API calls.
IAM lets you create individual users within your AWS account and give them each their own user name, password, and access keys. Individual users can then log into the console using a URL that's specific to your account. You can also create access keys for individual users so that they can make programmatic calls to access AWS resources. All charges for activities performed by your IAM users are billed to your AWS account. As a best practice, we recommend that you create an IAM user even for yourself, and that you do not use your AWS account credentials for everyday access to AWS. See Security Best Practices in IAM for more information.
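To illustrate this recommendation, the following AWS CLI sketch creates an individual IAM user with a console password and access keys. The user name and the ReadOnlyAccess managed policy are placeholders for this example only; in practice, grant each user the least-privilege permissions they actually need, preferably through IAM groups.

# Create an IAM user for everyday work instead of using the AWS account credentials
aws iam create-user --user-name example-user

# Give the user a console password and require a change at first sign-in
aws iam create-login-profile --user-name example-user --password '<initial-password>' --password-reset-required

# Create access keys for programmatic (CLI, SDK, or API) access
aws iam create-access-key --user-name example-user

# Attach a managed policy (illustrative only; substitute least-privilege policies)
aws iam attach-user-policy --user-name example-user --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess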
Regions, Availability Zones, and Endpoints

You should also be familiar with regions, Availability Zones, and endpoints, which are components of the AWS secure global infrastructure.

Use AWS regions to manage network latency and regulatory compliance. When you store data in a specific region, it is not replicated outside that region. It is your responsibility to replicate data across regions, if your business needs require that. AWS provides information about the country, and, where applicable, the state where each region resides; you are responsible for selecting the region to store data with your compliance and network latency requirements in mind.

Regions are designed with availability in mind, and consist of at least two, often more, Availability Zones. Availability Zones are designed for fault isolation. They are connected to multiple Internet Service Providers (ISPs) and different power grids. They are interconnected using high-speed links, so applications can rely on Local Area Network (LAN) connectivity for communication between Availability Zones within the same region. You are responsible for carefully selecting the Availability Zones where your systems will reside. Systems can span multiple Availability Zones, and we recommend that you design your systems to survive temporary or prolonged failure of an Availability Zone in the case of a disaster.

AWS provides web access to services through the AWS Management Console and through individual consoles for each service. AWS provides programmatic access to services through Application Programming Interfaces (APIs) and command line interfaces (CLIs). Service endpoints, which are managed by AWS, provide management ("backplane") access.
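To make the region and Availability Zone selection concrete, the sketch below (assuming boto3 and valid credentials; the home region is illustrative) lists the regions and their endpoints available to an account, and the Availability Zones within one region:

    import boto3

    # List the regions available to this account, with their service endpoints.
    ec2 = boto3.client("ec2", region_name="us-east-1")  # illustrative home region
    for region in ec2.describe_regions()["Regions"]:
        print(region["RegionName"], region["Endpoint"])

    # List the Availability Zones within a single region; spreading systems
    # across several of these zones is what provides fault isolation.
    for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(zone["ZoneName"], zone["State"])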
Sharing Security Responsibility for AWS Services

AWS offers a variety of different infrastructure and platform services. For the purpose of understanding security and shared responsibility of these AWS services, let's categorize them in three main categories: infrastructure, container, and abstracted services. Each category comes with a slightly different security ownership model, based on how you interact with and access the functionality.

• Infrastructure Services: This category includes compute services, such as Amazon EC2, and related services, such as Amazon Elastic Block Store (Amazon EBS), Auto Scaling, and Amazon Virtual Private Cloud (Amazon VPC). With these services, you can architect and build a cloud infrastructure using technologies similar to, and largely compatible with, on-premises solutions. You control the operating system, and you configure and operate any identity management system that provides access to the user layer of the virtualization stack.

• Container Services: Services in this category typically run on separate Amazon EC2 or other infrastructure instances, but sometimes you don't manage the operating system or the platform layer. AWS provides a managed service for these application "containers". You are responsible for setting up and managing network controls, such as firewall rules, and for managing platform-level identity and access management separately from IAM. Examples of container services include Amazon Relational Database Services (Amazon RDS), Amazon Elastic MapReduce (Amazon EMR), and AWS Elastic Beanstalk.

• Abstracted Services: This category includes high-level storage, database, and messaging services, such as Amazon Simple Storage Service (Amazon S3), Amazon Glacier, Amazon DynamoDB, Amazon Simple Queuing Service (Amazon SQS), and Amazon Simple Email Service (Amazon SES). These services abstract the platform or management layer on which you can build and operate cloud applications. You access the endpoints of these abstracted services using AWS APIs, and AWS manages the underlying service components or the operating system on which they reside. You share the underlying infrastructure, and abstracted services provide a multi-tenant platform which isolates your data in a secure fashion and provides for powerful integration with IAM.

Let's dig a little deeper into the shared responsibility model for each service type.

Shared Responsibility Model for Infrastructure Services

Infrastructure services, such as Amazon EC2, Amazon EBS, and Amazon VPC, run on top of the AWS global infrastructure. They vary in terms of availability and durability objectives, but always operate within the specific region where they have been launched. You can build systems that meet availability objectives exceeding those of individual services from AWS by employing resilient components in multiple Availability Zones.

Figure 1 depicts the building blocks for the shared responsibility model for infrastructure services.

Figure 1: Shared Responsibility Model for Infrastructure Services

Building on the AWS secure global infrastructure, you install and configure your operating systems and platforms in the AWS cloud, just as you would do on premises in your own data centers. Then you install your applications on your platform. Ultimately, your data resides in and is managed by your own applications. Unless you have more stringent business or compliance requirements, you don't need to introduce additional layers of protection beyond those provided by the AWS secure global infrastructure.

For certain compliance requirements, you might require an additional layer of protection between the services from AWS and your operating systems and platforms, where your applications and data reside. You can impose additional controls, such as protection of data at rest and protection of data in transit, or introduce a layer of opacity between services from AWS and your platform. The opacity layer can include data encryption, data integrity authentication, software- and data-signing, secure time-stamping, and more. AWS provides technologies you can implement to protect data at rest and in transit. See the Managing OS-level Access to Amazon EC2 Instances and Secure Your Data sections in this whitepaper for more information. Alternatively, you might introduce your own data protection tools, or leverage AWS partner offerings.
Shared Responsibility Model for Container Services

The AWS shared responsibility model also applies to container services, such as Amazon RDS and Amazon EMR. For these services, AWS manages the underlying infrastructure and foundation services, the operating system, and the application platform. For example, Amazon RDS for Oracle is a managed database service in which AWS manages all the layers of the container, up to and including the Oracle database platform. For services such as Amazon RDS, the AWS platform provides data backup and recovery tools; but it is your responsibility to configure and use tools in relation to your business continuity and disaster recovery (BC/DR) policy.

For AWS container services, you are responsible for the data and for firewall rules for access to the container service. For example, Amazon RDS provides RDS security groups, and Amazon EMR allows you to manage firewall rules through Amazon EC2 security groups for Amazon EMR instances.

Figure 2 depicts the shared responsibility model for container services.

Figure 2: Shared Responsibility Model for Container Services
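As a minimal sketch of the firewall responsibility described above (assuming boto3, an existing VPC security group attached to an RDS MySQL instance, and placeholder IDs and CIDR ranges), the following opens the database port only to an application subnet:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow the application tier's subnet, and nothing broader, to reach
    # the MySQL port on the security group protecting the RDS instance.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group ID
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 3306,
                "ToPort": 3306,
                "IpRanges": [{"CidrIp": "10.0.1.0/24",  # placeholder app subnet
                              "Description": "app tier only"}],
            }
        ],
    )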
Shared Responsibility Model for Abstracted Services

For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and you access the endpoints to store and retrieve data. Amazon S3 and DynamoDB are tightly integrated with IAM. You are responsible for managing your data (including classifying your assets), and for using IAM tools to apply ACL-type permissions to individual resources at the platform level, or permissions based on user identity or user responsibility at the IAM user/group level. For some services, such as Amazon S3, you can also use platform-provided encryption of data at rest, or platform-provided HTTPS encapsulation for your payloads, for protecting your data in transit to and from the service.

Figure 3 outlines the shared responsibility model for AWS abstracted services:

Figure 3: Shared Responsibility Model for Abstracted Services

Using the Trusted Advisor Tool

Some AWS Premium Support plans include access to the Trusted Advisor tool, which offers a one-view snapshot of your service and helps identify common security misconfigurations, suggestions for improving system performance, and underutilized resources. In this whitepaper we cover the security aspects of Trusted Advisor that apply to Amazon EC2.

Trusted Advisor checks for compliance with the following security recommendations:

• Limited access to common administrative ports to only a small subset of addresses. This includes ports 22 (SSH), 23 (Telnet), 3389 (RDP), and 5500 (VNC).
• Limited access to common database ports. This includes ports 1433 (MSSQL Server), 1434 (MSSQL Monitor), 3306 (MySQL), 1521 (Oracle), and 5432 (PostgreSQL).
• IAM is configured to help ensure secure access control of AWS resources.
• Multi-factor authentication (MFA) token is enabled to provide two-factor authentication for the root AWS account.
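A rough equivalent of the first of these checks can be scripted directly. The sketch below is a simplified illustration assuming boto3, not a replacement for Trusted Advisor; it flags security groups that expose common administrative ports to the whole Internet:

    import boto3

    ADMIN_PORTS = {22, 23, 3389, 5500}  # SSH, Telnet, RDP, VNC

    ec2 = boto3.client("ec2")
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group["IpPermissions"]:
            from_port = rule.get("FromPort")
            to_port = rule.get("ToPort")
            if from_port is None:
                continue  # rule covers all traffic for its protocol
            open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                                for r in rule.get("IpRanges", []))
            exposed = {p for p in ADMIN_PORTS if from_port <= p <= to_port}
            if open_to_world and exposed:
                print(group["GroupId"], "exposes ports", sorted(exposed), "to 0.0.0.0/0")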
Define and Categorize Assets on AWS

Before you design your ISMS, identify all the information assets that you need to protect, and then devise a technically and financially viable solution for protecting them. It can be difficult to quantify every asset in financial terms, so you might find that using qualitative metrics (such as negligible/low/medium/high/very high) is a better option.

Assets fall into two categories:

• Essential elements, such as business information, process, and activities
• Components that support the essential elements, such as hardware, software, personnel, sites, and partner organizations

Table 1 shows a sample matrix of assets.

Table 1: Sample asset matrix

• Customer-facing website applications. Owner: E-Commerce team. Category: Essential. Dependencies: EC2, Elastic Load Balancing, Amazon RDS, development.
• Customer credit card data. Owner: E-Commerce team. Category: Essential. Dependencies: PCI cardholder environment, encryption, AWS PCI service.
• Personnel data. Owner: COO. Category: Essential. Dependencies: Amazon RDS, encryption provider, dev and ops IT, third party.
• Data archive. Owner: COO. Category: Essential. Dependencies: S3, S3 Glacier, dev and ops IT.
• HR management system. Owner: HR. Category: Essential. Dependencies: EC2, S3, RDS, dev and ops IT, third party.
• AWS Direct Connect infrastructure. Owner: CIO. Category: Network. Dependencies: network ops, TelCo provider, AWS Direct Connect.
• Business intelligence platform. Owner: BI team. Category: Software. Dependencies: EMR, Redshift, DynamoDB, S3, dev and ops.
• Business intelligence services. Owner: COO. Category: Essential. Dependencies: BI infrastructure, BI analysis teams.
• LDAP directory. Owner: IT Security team. Category: Security. Dependencies: EC2, IAM, custom software, dev and ops.
• Windows AMI. Owner: Server team. Category: Software. Dependencies: EC2, patch management software, dev and ops.
• Customer credentials. Owner: Compliance team. Category: Security. Dependencies: daily updates; archival infrastructure.

Design Your ISMS to Protect Your Assets on AWS

After you have determined assets, categories, and costs, establish a standard for implementing, operating, monitoring, reviewing, maintaining, and improving your information security management system (ISMS) on AWS. Security requirements differ in every organization, depending on the following factors:

• Business needs and objectives
• Processes employed
• Size and structure of the organization

All these factors can change over time, so it is a good practice to build a cyclical process for managing all of this information. Table 2 suggests a phased approach to designing and building an ISMS in AWS. You might also find standard frameworks, such as ISO 27001, helpful with ISMS design and implementation.

Table 2: Phases of building an ISMS

Phase 1: Define scope and boundaries. Define which regions, Availability Zones, instances, and AWS resources are "in scope". If you exclude any component (for example, AWS manages facilities, so you can leave it out of your own management system), state what you have excluded and why, explicitly.

Phase 2: Define an ISMS policy. Include the following:
• Objectives that set the direction and principles for action regarding information security
• Legal, contractual, and regulatory requirements
• Risk management objectives for your organization
• How you will measure risk
• How management approves the plan

Phase 3: Select a risk assessment methodology. Select a risk assessment methodology based on input from groups in your organization about the following factors:
• Business needs
• Information security requirements
• Information technology capabilities and use
• Legal requirements
• Regulatory responsibilities

Because public cloud infrastructure operates differently from legacy environments, it is critical to set criteria for accepting risks and identifying the acceptable levels of risk (risk tolerances). We recommend starting with a risk assessment and leveraging automation as much as possible. AWS risk automation can narrow down the scope of resources required for risk management. There are several risk assessment methodologies, including OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation), ISO 31000:2009 Risk Management, ENISA (European Network and Information Security Agency), IRAM (Information Risk Analysis Methodology), and NIST (National Institute of Standards & Technology) Special Publication (SP) 800-30 rev. 1, Risk Management Guide.

Phase 4: Identify risks. We recommend that you create a risk register by mapping all your assets to threats, and then, based on the vulnerability assessment and impact analysis results, creating a new risk matrix for each AWS environment. Here's an example risk register:
• Assets
• Threats to those assets
• Vulnerabilities that could be exploited by those threats
• Consequences if those vulnerabilities are exploited

Phase 5: Analyze and evaluate risks. Analyze and evaluate the risk by calculating business impact, likelihood and probability, and risk levels.

Phase 6: Address risks. Select options for addressing risks. Options include applying security controls, accepting risks, avoiding risk, or transferring risks.

Phase 7: Choose a security control framework. When you choose your security controls, use a framework, such as ISO 27002, NIST SP 800-53, COBIT (Control Objectives for Information and related Technology), and CSA-CCM (Cloud Security Alliance Cloud Control Matrix). These frameworks comprise a set of reusable best practices, and will help you to choose relevant controls.

Phase 8: Get management approval. Even after you have implemented all controls, there will be residual risk. We recommend that you get approval from your business management that acknowledges all residual risks, and approvals for implementing and operating the ISMS.
Phase 9: Statement of applicability. Create a statement of applicability that includes the following information:
• Which controls you chose and why
• Which controls are in place
• Which controls you plan to put in place
• Which controls you excluded and why

Manage AWS Accounts, IAM Users, Groups, and Roles

Ensuring that users have appropriate levels of permissions to access the resources they need, but no more than that, is an important part of every ISMS. You can use IAM to help perform this function. You create IAM users under your AWS account and then assign them permissions directly, or assign them to groups to which you assign permissions. Here's a little more detail about AWS accounts and IAM users:

• AWS account. This is the account that you create when you first sign up for AWS. Your AWS account represents a business relationship between you and AWS. You use your AWS account to manage your AWS resources and services. AWS accounts have root permissions to all AWS resources and services, so they are very powerful. Do not use root account credentials for day-to-day interactions with AWS. In some cases, your organization might choose to use several AWS accounts, one for each major department, for example, and then create IAM users within each of the AWS accounts for the appropriate people and resources.

• IAM users. With IAM, you can create multiple users, each with individual security credentials, all controlled under a single AWS account. IAM users can be a person, service, or application that needs access to your AWS resources through the management console, CLI, or directly via APIs. Best practice is to create individual IAM users for each individual that needs to access services and resources in your AWS account. You can create fine-grained permissions to resources under your AWS account, apply them to groups you create, and then assign users to those groups. This best practice helps ensure users have least privilege to accomplish tasks.

Strategies for Using Multiple AWS Accounts

Design your AWS account strategy to maximize security and follow your business and governance requirements. Table 3 discusses possible strategies.

Table 3: AWS account strategies

• Centralized security management: a single AWS account. Centralize information security management and minimize overhead.

• Separation of production, development, and testing environments: three AWS accounts. Create one AWS account for production services, one for development, and one for testing.

• Multiple autonomous departments: multiple AWS accounts. Create separate AWS accounts for each autonomous part of the organization. You can assign permissions and policies under each account.

• Centralized security management with multiple autonomous independent projects: multiple AWS accounts. Create a single AWS account for common project resources (such as DNS services, Active Directory, CMS, etc.). Then create separate AWS accounts per project. You can assign permissions and policies under each project account, and grant access to resources across accounts.

You can configure a consolidated billing relationship across multiple accounts to ease the complexity of managing a different bill for each account, and leverage economies of scale. When you use billing consolidation, the resources and credentials are not shared between accounts.
Managing IAM Users

IAM users with the appropriate level of permissions can create new IAM users, or manage and delete existing ones. This highly privileged IAM user can create a distinct IAM user for each individual, service, or application within your organization that manages AWS configuration or accesses AWS resources directly. We strongly discourage the use of shared user identities, where multiple entities share the same credentials.

Managing IAM Groups

IAM groups are collections of IAM users in one AWS account. You can create IAM groups on a functional, organizational, or geographic basis, or by project, or on any other basis where IAM users need to access similar AWS resources to do their jobs. You can provide each IAM group with permissions to access AWS resources by assigning one or more IAM policies. All policies assigned to an IAM group are inherited by the IAM users who are members of the group.

For example, let's assume that IAM user John is responsible for backups within an organization, and needs to access objects in the Amazon S3 bucket called Archives. You can give John permissions directly so he can access the Archives bucket. But then your organization places Sally and Betty on the same team as John. While you can assign user permissions individually to John, Sally, and Betty to give them access to the Archives bucket, assigning the permissions to a group and placing John, Sally, and Betty in that group will be easier to manage and maintain. If additional users require the same access, you can give it to them by adding them to the group. When a user no longer needs access to a resource, you can remove them from the groups that provide access to that resource.

IAM groups are a powerful tool for managing access to AWS resources. Even if you only have one user who requires access to a specific resource, as a best practice you should identify or create a new AWS group for that access, and provision user access via group membership, as well as permissions and policies assigned at the group level.
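A hedged sketch of that example in boto3 follows; it assumes the three IAM users already exist, and the group name, bucket name, and inline policy are illustrative (an inline group policy is one of several ways to grant this access):

    import json
    import boto3

    iam = boto3.client("iam")

    # One group for the backup team; permissions live on the group, not the users.
    iam.create_group(GroupName="BackupOperators")

    # Illustrative inline policy granting read/write access to the Archives bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::archives-example",      # placeholder bucket
                         "arn:aws:s3:::archives-example/*"],
        }],
    }
    iam.put_group_policy(
        GroupName="BackupOperators",
        PolicyName="ArchivesBucketAccess",
        PolicyDocument=json.dumps(policy),
    )

    # Group membership, not per-user grants, controls who has the access.
    for user in ("John", "Sally", "Betty"):
        iam.add_user_to_group(GroupName="BackupOperators", UserName=user)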
Managing AWS Credentials

Each AWS account or IAM user is a unique identity, and has unique long-term credentials. There are two primary types of credentials associated with these identities: (1) those used for sign-in to the AWS Management Console and AWS portal pages, and (2) those used for programmatic access to the AWS APIs. Table 4 describes two types of sign-in credentials.

Table 4: Sign-in credentials

• Username/Password: User names for AWS accounts are always email addresses. IAM user names allow for more flexibility. Your AWS account password can be anything you define. IAM user passwords can be forced to comply with a policy you define (that is, you can require minimum password length or the use of non-alphanumeric characters).

• Multi-factor authentication (MFA): AWS multi-factor authentication (MFA) provides an extra level of security for sign-in credentials. With MFA enabled, when users sign in to an AWS website, they will be prompted for their user name and password (the first factor: what they know), as well as for an authentication code from their MFA device (the second factor: what they have). You can also require MFA for users to delete S3 objects. We recommend you activate MFA for your AWS account and your IAM users to prevent unauthorized access to your AWS environment. Currently AWS supports Gemalto hardware MFA devices, as well as virtual MFA devices in the form of smartphone applications.

Table 5 describes types of credentials used for programmatic access to APIs.

Table 5: API access credentials

• Access keys: Access keys are used to digitally sign API calls made to AWS services. Each access key credential is comprised of an access key ID and a secret key. The secret key portion must be secured by the AWS account holder or the IAM user to whom they are assigned. Users can have two sets of active access keys at any one time. As a best practice, users should rotate their access keys on a regular basis.

• MFA for API calls: Multi-factor authentication (MFA)-protected API access requires IAM users to enter a valid MFA code before they can use certain functions, which are APIs. Policies you create in IAM will determine which APIs require MFA. Because the AWS Management Console calls AWS service APIs, you can enforce MFA on APIs whether access is through the console or via APIs.

Understanding Delegation Using IAM Roles and Temporary Security Credentials

There are scenarios in which you want to delegate access to users or services that don't normally have access to your AWS resources. Table 6 below outlines common use cases for delegating such access.

Table 6: Common delegation use cases

• Applications running on Amazon EC2 instances that need to access AWS resources: Applications that run on an Amazon EC2 instance and that need access to AWS resources, such as Amazon S3 buckets or an Amazon DynamoDB table, must have security credentials in order to make programmatic requests to AWS. Developers might distribute their credentials to each instance, and applications can then use those credentials to access resources, but distributing long-term credentials to each instance is challenging to manage and a potential security risk.

• Cross-account access: To manage access to resources, you might have multiple AWS accounts; for example, to isolate a development environment from a production environment. However, users from one account might need to access resources in the other account, such as promoting an update from the development environment to the production environment. Although users who work in both accounts could have a separate identity in each account, managing credentials for multiple accounts makes identity management difficult.

• Identity federation: Users might already have identities outside of AWS, such as in your corporate directory. However, those users might need to work with AWS resources (or work with applications that access those resources). If so, these users also need AWS security credentials in order to make requests to AWS.

IAM roles and temporary security credentials address these use cases. An IAM role lets you define a set of permissions to access the resources that a user or service needs, but the permissions are not attached to a specific IAM user or group. Instead, IAM users, mobile and EC2-based applications, or AWS services (like Amazon EC2) can programmatically assume a role. Assuming the role returns temporary security credentials that the user or application can use to make programmatic requests to AWS. These temporary security credentials have a configurable expiration and are automatically rotated. Using IAM roles and temporary security credentials means you don't always have to manage long-term credentials and IAM users for each entity that requires access to a resource.
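The following hedged sketch shows the mechanics with boto3 and AWS STS; the role ARN is a placeholder, and the caller is assumed to be permitted to assume the role:

    import boto3

    sts = boto3.client("sts")

    # Exchange the caller's identity for short-lived credentials scoped to the role.
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/CrossAccountReader",  # placeholder
        RoleSessionName="example-session",
        DurationSeconds=3600,  # configurable expiration
    )
    creds = response["Credentials"]

    # Use the temporary credentials; they expire automatically at creds["Expiration"].
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])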
IAM Roles for Amazon EC2

IAM Roles for Amazon EC2 is a specific implementation of IAM roles that addresses the first use case in Table 6. In the following figure, a developer is running an application on an Amazon EC2 instance that requires access to the Amazon S3 bucket named photos. An administrator creates the Get-pics role. The role includes policies that grant read permissions for the bucket, and that allow the developer to launch the role with an Amazon EC2 instance. When the application runs on the instance, it can access the photos bucket by using the role's temporary credentials. The administrator doesn't have to grant the developer permission to access the photos bucket, and the developer never has to share credentials.

Figure 4: How roles for EC2 work

1. An administrator uses IAM to create the Get-pics role. In the role, the administrator uses a policy that specifies that only Amazon EC2 instances can assume the role, and that specifies only read permissions for the photos bucket.
2. A developer launches an Amazon EC2 instance and associates the Get-pics role with that instance.
3. When the application runs, it retrieves credentials from the instance metadata on the Amazon EC2 instance.
4. Using the role credentials, the application accesses the photos bucket with read-only permissions.

Cross-Account Access

You can use IAM roles to address the second use case in Table 6 by enabling IAM users from another AWS account to access resources within your AWS account. This process is referred to as cross-account access. Cross-account access lets you share access to your resources with users in other AWS accounts. To establish cross-account access, in the trusting account (Account A), you create an IAM policy that grants the trusted account (Account B) access to specific resources. Account B can then delegate this access to its IAM users. Account B cannot delegate more access to its IAM users than the permissions that it has been granted by Account A.

Identity Federation

You can use IAM roles to address the third use case in Table 6 by creating an identity broker that sits between your corporate users and your AWS resources, to manage the authentication and authorization process without needing to re-create all your users as IAM users in AWS.

Figure 5: AWS identity federation with temporary security credentials

1. The enterprise user accesses the identity broker application.
2. The identity broker application authenticates the users against the corporate identity store.
3. The identity broker application has permissions to access the AWS Security Token Service (STS) to request temporary security credentials.
4. Enterprise users can get a temporary URL that gives them access to the AWS APIs or the Management Console.

A sample identity broker application for use with Microsoft Active Directory is provided by AWS.
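Step 3 of the federation flow can be sketched with boto3 and STS; this is a hedged illustration of the broker's call, with a placeholder federated user name and an illustrative scoping policy:

    import json
    import boto3

    sts = boto3.client("sts")

    # After authenticating the user against the corporate directory, the broker
    # requests temporary credentials, scoped down by an inline policy.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": "s3:ListBucket",
                       "Resource": "arn:aws:s3:::archives-example"}],  # placeholder
    }

    response = sts.get_federation_token(
        Name="jsmith",             # federated user's name, recorded for auditing
        Policy=json.dumps(policy),
        DurationSeconds=3600,
    )
    creds = response["Credentials"]
    print(creds["AccessKeyId"], creds["Expiration"])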
Managing OS-level Access to Amazon EC2 Instances

The previous section describes the ways in which you can manage access to resources that require authentication to AWS services. However, in order to access the operating system on your EC2 instances, you need a different set of credentials. In the shared responsibility model, you own the operating system credentials, but AWS helps you bootstrap the initial access to the operating system.

When you launch a new Amazon EC2 instance from a standard AMI, you can access that instance using secure remote system access protocols, such as Secure Shell (SSH) or Windows Remote Desktop Protocol (RDP). You must successfully authenticate at the operating system level before you can access and configure the Amazon EC2 instance to your requirements. After you have authenticated and have remote access into the Amazon EC2 instance, you can set up the operating system authentication mechanisms you want, which might include X.509 certificate authentication, Microsoft Active Directory, or local operating system accounts.

To enable authentication to the EC2 instance, AWS provides asymmetric key pairs, known as Amazon EC2 key pairs. These are industry-standard RSA key pairs. Each user can have multiple Amazon EC2 key pairs, and can launch new instances using different key pairs. EC2 key pairs are not related to the AWS account or IAM user credentials discussed previously. Those credentials control access to other AWS services; EC2 key pairs control access only to your specific instance.

You can choose to generate your own Amazon EC2 key pairs using industry-standard tools like OpenSSL. You generate the key pair in a secure and trusted environment, and only the public key of the key pair is imported in AWS; you store the private key securely. We advise using a high-quality random number generator if you take this path.

You can choose to have Amazon EC2 key pairs generated by AWS. In this case, both the private and public key of the RSA key pair are presented to you when you first create the instance. You must download and securely store the private key of the Amazon EC2 key pair. AWS does not store the private key; if it is lost, you must generate a new key pair.

For Amazon EC2 Linux instances using the cloud-init service, when a new instance from a standard AWS AMI is launched, the public key of the Amazon EC2 key pair is appended to the initial operating system user's ~/.ssh/authorized_keys file. That user can then use an SSH client to connect to the Amazon EC2 Linux instance by configuring the client to use the correct Amazon EC2 instance user's name as its identity (for example, ec2-user), and providing the private key file for user authentication.

For Amazon EC2 Windows instances using the ec2config service, when a new instance from a standard AWS AMI is launched, the ec2config service sets a new random Administrator password for the instance, and encrypts it using the corresponding Amazon EC2 key pair's public key. The user can get the Windows instance password by using the AWS Management Console or command line tools, and by providing the corresponding Amazon EC2 private key to decrypt the password. This password, along with the default Administrative account for the Amazon EC2 instance, can be used to authenticate to the Windows instance.

AWS provides a set of flexible and practical tools for managing Amazon EC2 keys and providing industry-standard authentication into newly launched Amazon EC2 instances. If you have higher security requirements, you can implement alternative authentication mechanisms, including LDAP or Active Directory authentication, and disable Amazon EC2 key pair authentication.
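A brief sketch of the bring-your-own-key path follows (assuming boto3, a key pair already generated locally with OpenSSL or ssh-keygen, and a placeholder key name and file path):

    import boto3

    ec2 = boto3.client("ec2")

    # Only the public half of a locally generated key pair is sent to AWS;
    # the private key never leaves your trusted environment.
    with open("my-key.pub", "rb") as f:          # placeholder path to the public key
        public_key = f.read()

    ec2.import_key_pair(
        KeyName="ops-team-key",                  # placeholder key pair name
        PublicKeyMaterial=public_key,
    )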
Secure Your Data

This section discusses protecting data at rest and in transit on the AWS platform. We assume that you have already identified and classified your assets, and established protection objectives for them based on their risk profiles.

Resource Access Authorization

After a user or IAM role has been authenticated, they can access resources to which they are authorized. You provide resource authorization using resource policies or capability policies, depending on whether you want the user to have control over the resources, or whether you want to override individual user control.

• Resource policies are appropriate in cases where the user creates resources, and then wants to allow other users to access those resources. In this model, the policy is attached directly to the resource, and describes who can do what with the resource. The user is in control of the resource. You can provide an IAM user with explicit access to a resource. The root AWS account always has access to manage resource policies, and is the owner of all resources created in that account. Alternatively, you can grant users explicit access to manage permissions on a resource.

• Capability policies (which in the IAM docs are referred to as "user-based permissions") are often used to enforce company-wide access policies. Capability policies are assigned to an IAM user, either directly or indirectly using an IAM group. They can also be assigned to a role that will be assumed at run time. Capability policies define what capabilities (actions) the user is allowed or denied to perform. They can override resource-based policy permissions by explicitly denying them.

• IAM policies can be used to restrict access to a specific source IP address range, or during specific days and times of the day, as well as based on other conditions.

• Resource policies and capability policies are cumulative in nature: an individual user's effective permissions are the union of a resource's policies and the capability permissions granted directly or through group membership.
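As a hedged example of a resource policy, the sketch below attaches a bucket policy that lets a specific IAM user read objects; the bucket name, account ID, and user name are placeholders:

    import json
    import boto3

    s3 = boto3.client("s3")

    # A resource policy: attached to the bucket itself, naming who may do what.
    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/John"},  # placeholder
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::archives-example/*",                # placeholder
        }],
    }

    s3.put_bucket_policy(
        Bucket="archives-example",
        Policy=json.dumps(bucket_policy),
    )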
Storing and Managing Encryption Keys in the Cloud

Security measures that rely on encryption require keys. In the cloud, as in an on-premises system, it is essential to keep your keys secure. You can use existing processes to manage encryption keys in the cloud, or you can leverage server-side encryption with AWS key management and storage capabilities.

If you decide to use your own key management processes, you can use different approaches to store and protect key material. We strongly recommend that you store keys in tamper-proof storage, such as Hardware Security Modules (HSMs). Amazon Web Services provides an HSM service in the cloud, known as AWS CloudHSM. Alternatively, you can use HSMs that store keys on premises, and access them over secure links, such as IPSec virtual private networks (VPNs) to Amazon VPC, or AWS Direct Connect with IPSec.

You can use on-premises HSMs or CloudHSM to support a variety of use cases and applications, such as database encryption, Digital Rights Management (DRM), and Public Key Infrastructure (PKI), including authentication and authorization, document signing, and transaction processing. CloudHSM currently uses Luna SA HSMs from SafeNet. The Luna SA is designed to meet Federal Information Processing Standard (FIPS) 140-2 and Common Criteria EAL4+ standards, and supports a variety of industry-standard cryptographic algorithms.

When you sign up for CloudHSM, you receive dedicated single-tenant access to CloudHSM appliances. Each appliance appears as a resource in your Amazon VPC. You, not AWS, initialize and manage the cryptographic domain of the CloudHSM. The cryptographic domain is a logical and physical security boundary that restricts access to your keys. Only you can control your keys and operations performed by the CloudHSM. AWS administrators manage, maintain, and monitor the health of the CloudHSM appliance, but do not have access to the cryptographic domain. After you initialize the cryptographic domain, you can configure clients on your EC2 instances that allow applications to use the APIs provided by CloudHSM.

Your applications can use the standard APIs supported by the CloudHSM, such as PKCS#11, MS CAPI, and Java JCA/JCE (Java Cryptography Architecture/Java Cryptography Extensions). The CloudHSM client provides the APIs to your applications, and implements each API call by connecting to the CloudHSM appliance using a mutually authenticated SSL connection.

You can implement CloudHSMs in multiple Availability Zones, with replication between them, to provide for high availability and storage resilience.

Protecting Data at Rest

For regulatory or business requirement reasons, you might want to further protect your data at rest stored in Amazon S3, on Amazon EBS, Amazon RDS, or other services from AWS. Table 7 lists concerns to consider when you are implementing protection of data at rest on AWS.

Table 7: Threats to data at rest

• Accidental information disclosure. Recommended protection approach: Designate data as confidential, and limit the number of users who can access it. Use AWS permissions to manage access to resources for services such as Amazon S3. Use encryption to protect confidential data on Amazon EBS or Amazon RDS. Strategies: permissions; file, partition, volume, or application-level encryption.

• Data integrity compromise. Recommended protection approach: To ensure that data integrity is not compromised through deliberate or accidental modification, use resource permissions to limit the scope of users who can modify the data. Even with resource permissions, accidental deletion by a privileged user is still a threat (including a potential attack by a Trojan using the privileged user's credentials), which illustrates the importance of the principle of least privilege. Perform data integrity checks, such as Message Authentication Codes (SHA-1/SHA-2), Hashed Message Authentication Codes (HMACs), digital signatures, or authenticated encryption (AES-GCM), to detect data integrity compromise. If you detect data compromise, restore the data from backup, or, in the case of Amazon S3, from a previous object version. Strategies: permissions; data integrity checks (MAC/HMAC/digital signatures/authenticated encryption); backup; versioning (Amazon S3).

• Accidental deletion. Recommended protection approach: Using the correct permissions and the rule of least privilege is the best protection against accidental or malicious deletion. For services such as Amazon S3, you can use MFA Delete to require multi-factor authentication to delete an object, limiting access to Amazon S3 objects to privileged users. If you detect data compromise, restore the data from backup, or, in the case of Amazon S3, from a previous object version. Strategies: permissions; backup; versioning (Amazon S3); MFA Delete (Amazon S3).

• System, infrastructure, hardware, or software availability. Recommended protection approach: In the case of a system failure or a natural disaster, restore your data from backup or from replicas. Some services, such as Amazon S3 and Amazon DynamoDB, provide automatic data replication between multiple Availability Zones within a region. Other services require you to configure replication or backups. Strategies: backup; replication.

Analyze the threat landscape that applies to you, and employ the relevant protection techniques as outlined in Table 7.
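One of the integrity checks recommended above can be performed with the Python standard library alone. This hedged sketch computes a keyed HMAC over a file so that later recomputation reveals any modification (key handling is simplified for illustration):

    import hmac
    import hashlib

    def file_hmac(path, key):
        # Compute a keyed HMAC-SHA256 over the file's contents.
        mac = hmac.new(key, digestmod=hashlib.sha256)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                mac.update(chunk)
        return mac.hexdigest()

    key = b"replace-with-a-key-from-secure-storage"  # simplified key handling
    expected = file_hmac("backup.tar", key)          # record this at write time

    # Later: recompute and compare in constant time to detect tampering.
    actual = file_hmac("backup.tar", key)
    print("intact" if hmac.compare_digest(expected, actual) else "MODIFIED")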
The following sections describe how you can configure different services from AWS to protect data at rest.

Protecting Data at Rest on Amazon S3

Amazon S3 provides a number of security features for protection of data at rest, which you can use or not, depending on your threat profile. Table 8 summarizes these features.

Table 8: Amazon S3 features for protecting data at rest

• Permissions: Use bucket-level or object-level permissions alongside IAM policies to protect resources from unauthorized access, and to prevent information disclosure, data integrity compromise, or deletion.

• Versioning: Amazon S3 supports object versions. Versioning is disabled by default. Enable versioning to store a new version for every modified or deleted object, from which you can restore compromised objects if necessary.

• Replication: Amazon S3 replicates each object across all Availability Zones within the respective region. Replication can provide data and service availability in the case of system failure, but provides no protection against accidental deletion or data integrity compromise; it replicates changes across all Availability Zones where it stores copies. Amazon S3 offers standard redundancy and reduced redundancy options, which have different durability objectives and price points.

• Backup: Amazon S3 supports data replication and versioning instead of automatic backups. You can, however, use application-level technologies to back up data stored in Amazon S3 to other AWS regions or to on-premises backup systems.

• Encryption, server side: Amazon S3 supports server-side encryption of user data. Server-side encryption is transparent to the end user. AWS generates a unique encryption key for each object, and then encrypts the object using AES-256. The encryption key is then encrypted itself, using AES-256 with a master key that is stored in a secure location. The master key is rotated on a regular basis.

• Encryption, client side: With client-side encryption, you create and manage your own encryption keys. Keys you create are not exported to AWS in clear text. Your applications encrypt data before submitting it to Amazon S3, and decrypt data after receiving it from Amazon S3. Data is stored in an encrypted form, with keys and algorithms only known to you. While you can use any encryption algorithm, and either symmetric or asymmetric keys to encrypt the data, the AWS-provided Java SDK offers Amazon S3 client-side encryption features. See Further Reading for more information.
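The sketch below (boto3, with placeholder bucket and object names) enables versioning on a bucket and stores an object with server-side encryption requested explicitly:

    import boto3

    s3 = boto3.client("s3")
    bucket = "archives-example"  # placeholder bucket name

    # Versioning keeps prior versions, so a compromised or deleted
    # object can be restored.
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Request AES-256 server-side encryption for this object.
    s3.put_object(
        Bucket=bucket,
        Key="reports/2020-07.csv",        # placeholder object key
        Body=b"example,data\n",
        ServerSideEncryption="AES256",
    )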
Protecting Data at Rest on Amazon EBS

Amazon EBS is the AWS abstract block storage service. You receive each Amazon EBS volume in raw, unformatted mode, as if it were a new hard disk. You can partition the Amazon EBS volume, create software RAID arrays, format the partitions with any file system you choose, and ultimately protect the data on the Amazon EBS volume. All of these decisions and operations on the Amazon EBS volume are opaque to AWS operations. You can attach Amazon EBS volumes to Amazon EC2 instances.

Table 9 summarizes features for protecting Amazon EBS data at rest with the operating system running on an Amazon EC2 instance.

Table 9: Amazon EBS features for protecting data at rest

• Replication: Each Amazon EBS volume is stored as a file, and AWS creates two copies of the EBS volume for redundancy. Both copies reside in the same Availability Zone, however, so while Amazon EBS replication can survive hardware failure, it is not suitable as an availability tool for prolonged outages or disaster recovery purposes. We recommend that you replicate data at the application level and/or create backups.

• Backup: Amazon EBS provides snapshots that capture the data stored on an Amazon EBS volume at a specific point in time. If the volume is corrupt (for example, due to system failure), or data from it is deleted, you can restore the volume from snapshots. Amazon EBS snapshots are AWS objects to which IAM users, groups, and roles can be assigned permissions, so that only authorized users can access Amazon EBS backups.

• Encryption: Microsoft Windows EFS: If you are running Microsoft Windows Server on AWS and you require an additional level of data confidentiality, you can implement Encrypted File System (EFS) to further protect sensitive data stored on system or data partitions. EFS is an extension to the NTFS file system that provides for transparent file and folder encryption, and integrates with Windows and Active Directory key management facilities and PKI. You can manage your own keys on EFS.

• Encryption: Microsoft Windows BitLocker: BitLocker is a volume (or partition, in the case of a single drive) encryption solution included in Windows Server 2008 and later operating systems. BitLocker uses AES 128- and 256-bit encryption. By default, BitLocker requires a Trusted Platform Module (TPM) to store keys; this is not supported on Amazon EC2. However, you can protect EBS volumes using BitLocker if you configure it to use a password.

• Encryption: Linux dm-crypt: On Linux instances running kernel versions 2.6 and later, you can use dm-crypt to configure transparent data encryption on Amazon EBS volumes and swap space. You can use various ciphers, as well as Linux Unified Key Setup (LUKS), for key management.

• Encryption: TrueCrypt: TrueCrypt is a third-party tool that offers transparent encryption of data at rest on Amazon EBS volumes. TrueCrypt supports both Microsoft Windows and Linux operating systems.

• Encryption and integrity authentication: SafeNet ProtectV: SafeNet ProtectV is a third-party offering that allows for full disk encryption of Amazon EBS volumes and pre-boot authentication of AMIs. SafeNet ProtectV provides data confidentiality and data integrity authentication for data and the underlying operating system.

Protecting Data at Rest on Amazon RDS

Amazon RDS leverages the same secure infrastructure as Amazon EC2. You can use the Amazon RDS service without additional protection, but if you require encryption or data integrity authentication of data at rest for compliance or other purposes, you can add protection at the application layer, or at the platform layer using SQL cryptographic functions.

You could add protection at the application layer, for example, by using a built-in encryption function that encrypts all sensitive database fields, using an application key, before storing them in the database. The application can manage keys by using symmetric encryption with PKI infrastructure, or other asymmetric key techniques, to provide for a master encryption key.

You could add protection at the platform using MySQL cryptographic functions, which can take the form of a statement like the following:

    INSERT INTO Customers (CustomerFirstName, CustomerLastName)
    VALUES (AES_ENCRYPT('John', @key), AES_ENCRYPT('Smith', @key));

Platform-level encryption keys would be managed at the application level, like application-level encryption keys. Table 10 summarizes Amazon RDS platform-level protection options.
Table 10: Amazon RDS platform-level data protection at rest

• MySQL: MySQL cryptographic functions include encryption, hashing, and compression. For more information, see https://dev.mysql.com/doc/refman/5.5/en/encryption-functions.html.

• Oracle: Oracle Transparent Data Encryption is supported on Amazon RDS for Oracle Enterprise Edition under the Bring Your Own License (BYOL) model.

• Microsoft SQL: Microsoft Transact-SQL data protection functions include encryption, signing, and hashing. For more information, see http://msdn.microsoft.com/en-us/library/ms173744.

Note that SQL range queries are no longer applicable to the encrypted portion of the data. This query, for example, would not return the expected results for names like "John", "Jonathan", and "Joan" if the contents of column CustomerFirstName are encrypted at the application or platform layer:

    SELECT CustomerFirstName, CustomerLastName FROM Customers
    WHERE CustomerFirstName LIKE 'Jo%';

Direct comparisons, such as the following, would work and return the expected result for all fields where CustomerFirstName matches 'John' exactly:

    SELECT CustomerFirstName, CustomerLastName FROM Customers
    WHERE CustomerFirstName = AES_ENCRYPT('John', @key);

Range queries would also work on fields that are not encrypted. For example, a Date field in a table could be left unencrypted so that you could use it in range queries.

One-way functions are a good way to obfuscate personal identifiers, such as social security numbers or equivalent personal IDs, where they are used as unique identifiers. While you can encrypt personal identifiers, and decrypt them at the application or platform layer before using them, it's more convenient to use a one-way function, such as keyed HMAC-SHA1, to convert the personal identifier to a fixed-length hash value. The personal identifier is still unique, because collisions in commercial HMACs are extremely rare. The HMAC is not reversible to the original personal identifier, however, so you cannot track back the data to the original individual, unless you know the original personal ID and process it via the same keyed HMAC function.
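A hedged sketch of this one-way technique follows (Python standard library; SHA-256 is shown in place of SHA-1, which is now considered weak, and the key and identifier are placeholders):

    import hmac
    import hashlib

    key = b"replace-with-a-key-from-secure-storage"  # placeholder key material

    def pseudonymize(personal_id):
        # One-way, keyed conversion of a personal identifier to a fixed-length value.
        return hmac.new(key, personal_id.encode(), hashlib.sha256).hexdigest()

    # The result is stable (usable as a unique key) but not reversible.
    print(pseudonymize("078-05-1120"))  # illustrative identifier

    # Equal inputs under the same key always produce equal outputs:
    assert pseudonymize("078-05-1120") == pseudonymize("078-05-1120")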
In all regions, Amazon RDS supports Transparent Data Encryption and Native Network Encryption, both of which are components of the Advanced Security option for Oracle Database 11g Enterprise Edition. Oracle Database 11g Enterprise Edition is available on Amazon RDS for Oracle under the Bring Your Own License (BYOL) model. There is no additional charge to use these features. Oracle Transparent Data Encryption encrypts data before it is written to storage, and decrypts data when it is read from storage. With Oracle Transparent Data Encryption, you can encrypt tablespaces or specific table columns using industry-standard encryption algorithms, such as Advanced Encryption Standard (AES) and Data Encryption Standard (Triple DES).

Protecting Data at Rest on Amazon S3 Glacier

Data at rest stored in Amazon S3 Glacier is automatically server-side encrypted using 256-bit Advanced Encryption Standard (AES-256), with keys maintained by AWS. The encryption key is then encrypted itself, using AES-256 with a master key that is stored in a secure location. The master key is rotated on a regular basis. For more information about the default encryption behavior for an Amazon S3 bucket, see Amazon S3 Default Encryption.

Protecting Data at Rest on Amazon DynamoDB

Amazon DynamoDB is a shared service from AWS. You can use DynamoDB without adding protection, but you can also implement a data encryption layer over the standard DynamoDB service. See the previous section for considerations for protecting data at the application layer, including the impact on range queries. DynamoDB supports number, string, and raw binary data type formats. When storing encrypted fields in DynamoDB, it is a best practice to use raw binary fields or Base64-encoded string fields.
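The following hedged sketch layers client-side encryption over DynamoDB, storing the ciphertext in a raw binary attribute; it assumes boto3, the third-party cryptography package, and a placeholder table that already exists:

    import boto3
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    key = Fernet.generate_key()  # in practice, load the key from secure storage
    cipher = Fernet(key)

    table = boto3.resource("dynamodb").Table("Customers")  # placeholder table

    # Encrypt before the data leaves the application; DynamoDB only sees ciphertext.
    ciphertext = cipher.encrypt(b"078-05-1120")            # illustrative sensitive field

    table.put_item(Item={
        "CustomerId": "c-1001",     # plaintext key attribute, still queryable
        "NationalId": ciphertext,   # stored as a raw binary (B) attribute
    })

    # Reading it back reverses the process in the application.
    item = table.get_item(Key={"CustomerId": "c-1001"})["Item"]
    print(cipher.decrypt(item["NationalId"].value))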
Protecting Data at Rest on Amazon EMR

Amazon EMR is a managed service in the cloud. AWS provides the AMIs required to run Amazon EMR, and you can't use custom AMIs or your own EBS volumes. By default, Amazon EMR instances do not encrypt data at rest.

Amazon EMR clusters often use either Amazon S3 or DynamoDB as the persistent data store. When an Amazon EMR cluster starts, it can copy the data required for it to operate from the persistent store into HDFS, or use data directly from Amazon S3 or DynamoDB.

To provide for a higher level of data-at-rest confidentiality or integrity, you can employ a number of techniques, summarized in Table 11.

Table 11: Protecting data at rest in Amazon EMR

• Amazon S3 server-side encryption, no HDFS copy: Data is permanently stored on Amazon S3 only, and not copied to HDFS at all. Hadoop fetches data from Amazon S3 and processes it locally, without making persistent local copies. See the Protecting Data at Rest on Amazon S3 section for more information on Amazon S3 server-side encryption.

• Amazon S3 client-side encryption: Data is permanently stored on Amazon S3 only, and not copied to HDFS at all. Hadoop fetches data from Amazon S3 and processes it locally, without making persistent local copies. To apply client-side decryption, you can use a custom Serializer/Deserializer (SerDe) with products such as Hive, or InputFormat for Java MapReduce jobs. Apply encryption at each individual row or record, so that you can split the file. See the Protecting Data at Rest on Amazon S3 section for more information on Amazon S3 client-side encryption.

• Application-level encryption, entire file encrypted: You can encrypt or protect the integrity of the data (for example, by using HMAC-SHA1) at the application level while you store data in Amazon S3 or DynamoDB. To decrypt the data, you would use a custom SerDe with Hive, or a script, or a bootstrap action to fetch the data from Amazon S3, decrypt it, and load it into HDFS before processing. Because the entire file is encrypted, you might need to execute this action on a single node, such as the master node. You can use tools such as S3DistCp with special codecs.

• Application-level encryption, individual fields encrypted/structure preserved: Hadoop can use a standard SerDe, such as JSON. Data decryption can take place during the Map stage of the Hadoop job, and you can use standard input/output redirection via custom decryption tools for streaming jobs.

• Hybrid: You might want to employ a combination of Amazon S3 server-side encryption and client-side encryption, as well as application-level encryption.

AWS Partner Network (APN) partners provide specialized solutions for protecting data at rest and in transit on Amazon EMR; for more information, visit the AWS Security Partner Solutions page.

Decommission Data and Media Securely

You decommission data differently in the cloud than you do in traditional on-premises environments. When you ask AWS to delete data in the cloud, AWS does not decommission the underlying physical media; instead, the storage blocks are marked as unallocated. AWS uses secure mechanisms to reassign the blocks elsewhere. When you provision block storage, the hypervisor or virtual machine manager (VMM) keeps track of which blocks your instance has written to. When an instance writes to a block of storage, the previous block is zeroed out, and then overwritten with your block of data. If your instance attempts to read from a block previously written to, your previously stored data is returned. If an instance attempts to read from a block it has not previously written to, the hypervisor zeros out the previous data on disk and returns a zero to the instance.

When AWS determines that media has reached the end of its useful life, or it experiences a hardware fault, AWS follows the techniques detailed in Department of Defense (DoD) 5220.22-M ("National Industrial Security Program Operating Manual") or NIST SP 800-88 ("Guidelines for Media Sanitization") to destroy data as part of the decommissioning process. For more information about deletion of data in the cloud, see the AWS Overview of Security Processes whitepaper.

When you have regulatory or business reasons to require further controls for securely decommissioning data, you can implement data encryption at rest using customer-managed keys, which are not stored in the cloud. Then, in addition to following the previous process, you would delete the key used to protect the decommissioned data, making it irrecoverable.

Protect Data in Transit

Cloud applications often communicate over public links, such as the Internet, so it is important to protect data in transit when you run applications in the cloud. This involves protecting network traffic between clients and servers, and network traffic between servers. Table 12 lists common concerns with communication over public links, such as the Internet.

Table 12: Threats to data in transit

• Accidental information disclosure: Access to your confidential data should be limited. When data is traversing the public network, it should be protected from disclosure through encryption. Recommended protection: encrypt data in transit using IPSec ESP and/or SSL/TLS.

• Data integrity compromise: Whether or not data is confidential, you want to know that data integrity is not compromised through deliberate or accidental modification. Recommended protection: authenticate data integrity using IPSec ESP/AH and/or SSL/TLS.

• Peer identity compromise/identity spoofing/man-in-the-middle: Encryption and data integrity authentication are important for protecting the communications channel. It is equally important to authenticate the identity of the remote end of the connection. An encrypted channel is worthless if the remote end happens to be an attacker, or an imposter relaying the connection to the intended recipient. Recommended protection: use IPSec with IKE, with pre-shared keys or X.509 certificates, to authenticate the remote end. Alternatively, use SSL/TLS with server certificate authentication, based on the server common name (CN) or Alternative Name (AN/SAN).
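The server-authentication point in the last row can be seen with the Python standard library. This hedged sketch opens a TLS connection that verifies the certificate chain and host name before any data is sent (the endpoint is illustrative):

    import socket
    import ssl

    hostname = "s3.amazonaws.com"  # illustrative endpoint

    # The default context verifies the server certificate against trusted CAs
    # and checks that the certificate matches the host name we asked for.
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("negotiated:", tls.version())
            print("peer subject:", tls.getpeercert()["subject"])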
Managing Application and Administrative Access to AWS Public Cloud Services

When accessing applications running in the AWS public cloud, your connections traverse the Internet. In most cases your security policies consider the Internet an insecure communications medium and require application data protection in transit. Table 13 outlines approaches for protecting data in transit when accessing public cloud services.

Table 13: Protecting application data in transit when accessing public cloud

• HTTP/HTTPS traffic (web applications) – By default, HTTP traffic is unprotected. SSL/TLS protection for HTTP traffic, also known as HTTPS, is an industry standard widely supported by web servers and browsers. HTTP traffic can include not just client access to web pages but also web services (REST-based access). Recommended approach: use HTTPS (HTTP over SSL/TLS) with server certificate authentication.
• HTTPS offload (web applications) – While HTTPS is often recommended, especially for sensitive data, SSL/TLS processing requires additional CPU and memory resources from both the web server and the client. This can put a considerable load on web servers handling thousands of SSL/TLS sessions; there is less impact on the client, where only a limited number of SSL/TLS connections are terminated. Recommended approach: offload HTTPS processing to Elastic Load Balancing to minimize the impact on web servers while still protecting data in transit, and further protect the backend connection to instances using an application protocol such as HTTP over SSL.
• Remote Desktop Protocol (RDP) traffic – Users who access Windows Terminal Services in the public cloud usually use the Microsoft Remote Desktop Protocol (RDP). By default, RDP connections establish an underlying SSL/TLS connection. Recommended approach: for optimal protection, issue the Windows server being accessed a trusted X.509 certificate to protect against identity spoofing or man-in-the-middle attacks. By default, Windows RDP servers use self-signed certificates, which are not trusted and should be avoided.
• Secure Shell (SSH) traffic – SSH is the preferred approach for establishing administrative connections to Linux servers. SSH is a protocol that, like SSL, provides a secure communications channel between the client and the server. SSH also supports tunneling, which you should use for running applications such as X Windows on top of SSH and protecting the application session in transit. Recommended approach: use SSH version 2 with non-privileged user accounts.
• Database server traffic – If clients or servers need to access databases in the cloud, they might need to traverse the Internet as well. Most modern databases support SSL/TLS wrappers for native database protocols; for database servers running on Amazon EC2, we recommend this approach to protecting data in transit. Amazon RDS provides support for SSL/TLS in some cases; see the Protecting Data in Transit to Amazon RDS section for more details.

Protecting Data in Transit when Managing AWS Services

You can manage your services from AWS, such as Amazon EC2 and Amazon S3, using the AWS Management Console or AWS APIs. Examples of service management traffic include launching a new Amazon EC2 instance, saving an object to an Amazon S3 bucket, or amending a security group on Amazon VPC.
The AWS Management Console uses SSL/TLS between the client browser and console service endpoints to protect AWS service management traffic. Traffic is encrypted, data integrity is authenticated, and the client browser authenticates the identity of the console service endpoint by using an X.509 certificate. After an SSL/TLS session is established between the client browser and the console service endpoint, all subsequent HTTP traffic is protected within the SSL/TLS session.

You can alternatively use AWS APIs to manage services from AWS, either directly from applications or third-party tools, via SDKs, or via the AWS command line tools. AWS APIs are web services (REST) over HTTPS. SSL/TLS sessions are established between the client and the specific AWS service endpoint, depending on the APIs used, and all subsequent traffic, including the REST envelope and user payload, is protected within the SSL/TLS session.

Protecting Data in Transit to Amazon S3

Like AWS service management traffic, Amazon S3 is accessed over HTTPS. This includes all Amazon S3 service management requests as well as user payload, such as the contents of objects being stored in or retrieved from Amazon S3 and the associated metadata. When the AWS service console is used to manage Amazon S3, an SSL/TLS secure connection is established between the client browser and the service console endpoint, and all subsequent traffic is protected within this connection. When Amazon S3 APIs are used directly or indirectly, an SSL/TLS connection is established between the client and the Amazon S3 endpoint, and all subsequent HTTP and user payload traffic is encapsulated within the protected session.

Protecting Data in Transit to Amazon RDS

If you're connecting to Amazon RDS from Amazon EC2 instances in the same region, you can rely on the security of the AWS network, but if you're connecting from the Internet, you might want to use SSL/TLS for additional protection. SSL/TLS provides peer authentication via server X.509 certificates, data integrity authentication, and data encryption for the client-server connection.

SSL/TLS is currently supported for connections to Amazon RDS MySQL and Microsoft SQL instances. For both products, Amazon Web Services provides a single self-signed certificate associated with the MySQL or Microsoft SQL listener. You can download the self-signed certificate and designate it as trusted. This provides for peer identity authentication and prevents man-in-the-middle or identity spoofing attacks on the server side. SSL/TLS provides native encryption and data integrity authentication of the communications channel between the client and the server. Because the same self-signed certificate is used on all Amazon RDS MySQL instances on AWS, and another single self-signed certificate is used across all Amazon RDS Microsoft SQL instances on AWS, peer identity authentication does not provide for individual instance authentication. If you require individual server authentication via SSL/TLS, you might need to leverage Amazon EC2 and self-managed relational database services.

Amazon RDS for Oracle Native Network Encryption encrypts data as it moves into and out of the database. With Oracle Native Network Encryption, you can encrypt network traffic travelling over Oracle Net Services using industry-standard encryption algorithms such as AES and Triple DES.
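As a hedged illustration of the MySQL approach above, the following sketch connects to an Amazon RDS MySQL instance over SSL/TLS and trusts only the downloaded RDS certificate. It assumes the PyMySQL package is installed and that the certificate has already been saved locally; the endpoint, credentials, and file path are placeholders.

import pymysql

# Placeholder values; substitute your own endpoint, credentials, and CA path.
connection = pymysql.connect(
    host="mydb.example.us-east-1.rds.amazonaws.com",
    user="dbuser",
    password="dbpassword",
    database="mydb",
    ssl={"ca": "/path/to/rds-ca-certificate.pem"},  # trust only the downloaded RDS certificate
)

with connection.cursor() as cursor:
    # Confirm the session is actually encrypted before doing real work.
    cursor.execute("SHOW STATUS LIKE 'Ssl_cipher'")
    print(cursor.fetchone())

connection.close()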
Protecting Data in Transit to Amazon DynamoDB

If you're connecting to DynamoDB from other services from AWS in the same region, you can rely on the security of the AWS network, but if you're connecting to DynamoDB across the Internet, you should use HTTP over SSL/TLS (HTTPS) to connect to DynamoDB service endpoints. Avoid using HTTP for access to DynamoDB and for all connections across the Internet.

Protecting Data in Transit to Amazon EMR

Amazon EMR includes a number of application communication paths, each of which requires separate protection mechanisms for data in transit. Table 14 outlines the communication paths and the protection approach we recommend.

Table 14: Protecting data in transit on Amazon EMR

• Between Hadoop nodes – Hadoop master, worker, and core nodes communicate with one another using proprietary plain TCP connections. However, all Hadoop nodes on Amazon EMR reside in the same Availability Zone and are protected by security standards at the physical and infrastructure layer. Recommended approach: no additional protection is typically required, because all nodes reside in the same facility.
• Between Hadoop cluster and Amazon S3 – Amazon EMR uses HTTPS to send data between Amazon S3 and Amazon EC2. For more information, see the Protecting Data in Transit to Amazon S3 section. Recommended approach: HTTPS, used by default.
• Between Hadoop cluster and Amazon DynamoDB – Amazon EMR uses HTTPS to send data between DynamoDB and Amazon EC2. For more information, see the Protecting Data in Transit to Amazon DynamoDB section. Recommended approach: HTTPS, used by default.
• User or application access to Hadoop cluster – Clients or applications on premises can access Amazon EMR clusters across the Internet using scripts (SSH-based access), REST, or protocols such as Thrift or Avro. Recommended approach: use SSH for interactive access to applications or for tunneling other protocols within SSH; use SSL/TLS if Thrift, REST, or Avro are used.
• Administrative access to Hadoop cluster – Amazon EMR cluster administrators typically use SSH to manage the cluster. Recommended approach: use SSH to the Amazon EMR master node.

Secure Your Operating Systems and Applications

With the AWS shared responsibility model, you manage your operating system and application security. Amazon EC2 presents a true virtual computing environment in which you can use web service interfaces to launch instances with a variety of operating systems and custom preloaded applications. You can standardize operating system and application builds and centrally manage the security of your operating systems and applications in a single secure build repository. You can build and test a pre-configured AMI to meet your security requirements. Recommendations include:

• Disable root API access keys and secret key.
• Restrict access to instances from limited IP ranges using security groups (a sketch of applying such a restriction follows this list).
• Password-protect the .pem file on user machines.
• Delete keys from the authorized_keys file on your instances when someone leaves your organization or no longer requires access.
• Rotate credentials (for example, database access keys).
• Regularly run least-privilege checks using the IAM user Access Advisor and IAM user last-used access keys.
• Use bastion hosts to enforce control and visibility.
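The following hedged sketch shows one way to apply the security group recommendation above with boto3, allowing SSH only from a limited administrative IP range. The group ID and CIDR range are placeholders, and the sketch assumes AWS credentials are already configured for boto3.

import boto3

ec2 = boto3.client("ec2")

# Placeholder group ID and address range; substitute your own values.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            # Only this range may open SSH connections to instances in the group.
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Corporate admin range"}],
        }
    ],
)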
This section is not intended to provide a comprehensive list of hardening standards for AMIs. Sources of industry-accepted system hardening standards include, but are not limited to:

• Center for Internet Security (CIS)
• International Organization for Standardization (ISO)
• SysAdmin, Audit, Network, Security (SANS) Institute
• National Institute of Standards and Technology (NIST)

We recommend that you develop configuration standards for all system components. Ensure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.

If a published AMI is found to be in violation of best practices, or poses a significant risk to customers running the AMI, AWS reserves the right to take measures to remove the AMI from the public catalog and notify the publisher and those running the AMI of the findings.

Creating Custom AMIs

You can create your own AMIs that meet the specific requirements of your organization and publish them for internal (private) or external (public) use. As a publisher of an AMI, you are responsible for the initial security posture of the machine images that you use in production. The security controls you apply on the AMI are effective at a specific point in time; they are not dynamic. You can configure private AMIs in any way that meets your business needs and does not violate the AWS Acceptable Use Policy. For more information, see the Amazon Web Services Acceptable Use Policy. Users who launch from AMIs, however, might not be security experts, so we recommend that you meet certain minimum security standards.

Before you publish an AMI, make sure that the published software is up to date with relevant security patches, and perform the clean-up and hardening tasks listed in Table 15.

Table 15: Clean-up tasks before publishing an AMI

• Disable insecure applications – Disable services and protocols that authenticate users in clear text over the network or otherwise insecurely.
• Minimize exposure – Disable non-essential network services on startup. Only administrative services (SSH/RDP) and the services required for essential applications should be started.
• Protect credentials – Securely delete all AWS credentials from disk and configuration files.
• Protect credentials – Securely delete any third-party credentials from disk and configuration files.
• Protect credentials – Securely delete all additional certificates or key material from the system.
• Protect credentials – Ensure that installed software does not use default internal accounts and passwords.
• Use good governance – Ensure that the system does not violate the Amazon Web Services Acceptable Use Policy; examples of violations include open SMTP relays or proxy servers. For more information, see the Amazon Web Services Acceptable Use Policy.

Tables 16 and 17 list additional operating system-specific clean-up tasks. Table 16 lists the steps for securing Linux AMIs; a sketch of automating some of these steps follows the table.

Table 16: Securing Linux/UNIX AMIs

• Secure services – Configure sshd to allow only public key authentication: set PubkeyAuthentication to Yes and PasswordAuthentication to No in sshd_config.
• Secure services – Generate a unique SSH host key on instance creation. If the AMI uses cloud-init, it handles this automatically.
• Protect credentials – Remove and disable passwords for all user accounts so that they cannot be used to log in and do not have a default password. Run passwd -l <USERNAME> for each account.
• Protect credentials – Securely delete all user SSH public and private key pairs.
• Protect data – Securely delete all shell history and system log files containing sensitive data.
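The sketch below is one hedged way to script two of the Table 16 tasks during image preparation: enforcing key-only SSH authentication and locking account passwords. It assumes a Linux build host, root privileges, and the standard sshd_config location; the account names are placeholders to adapt to your environment.

import re
import subprocess
from pathlib import Path

SSHD_CONFIG = Path("/etc/ssh/sshd_config")

def enforce_key_only_ssh() -> None:
    """Set PubkeyAuthentication yes and PasswordAuthentication no in sshd_config."""
    text = SSHD_CONFIG.read_text()
    for option, value in (("PubkeyAuthentication", "yes"), ("PasswordAuthentication", "no")):
        pattern = re.compile(rf"^#?\s*{option}\s+\S+", re.MULTILINE)
        replacement = f"{option} {value}"
        if pattern.search(text):
            text = pattern.sub(replacement, text)
        else:
            text += f"\n{replacement}\n"
    SSHD_CONFIG.write_text(text)

def lock_account_passwords(accounts):
    """Run passwd -l for each account so password logins are disabled."""
    for account in accounts:
        subprocess.run(["passwd", "-l", account], check=True)

if __name__ == "__main__":
    enforce_key_only_ssh()
    lock_account_passwords(["ec2-user", "admin"])  # placeholder account names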
Table 17: Securing Windows AMIs

• Protect credentials – Ensure that all enabled user accounts have new, randomly generated passwords upon instance creation. You can configure the EC2 Config Service to do this for the Administrator account upon boot, but you must explicitly do so before bundling the image.
• Protect credentials – Ensure that the Guest account is disabled.
• Protect data – Clear the Windows event logs.
• Protect credentials – Make sure the AMI is not part of a Windows domain.
• Minimize exposure – Do not enable file sharing, print spooler, RPC, or other Windows services that are not essential but are enabled by default.

Bootstrapping

After the hardened AMI is instantiated, you can still amend and update security controls by using bootstrapping applications. Common bootstrapping applications include Puppet, Chef, Capistrano, cloud-init, and cfn-init. You can also run custom bootstrapping Bash or Microsoft Windows PowerShell scripts without using third-party tools. Here are a few bootstrap actions to consider:

• Security software updates – install the latest patches, service packs, and critical updates beyond the patch level of the AMI.
• Initial application patches – install application-level updates beyond the current application-level build captured in the AMI.
• Contextual data and configuration – enable instances to apply configurations specific to the environment in which they are launched, for example production, test, or DMZ/internal.
• Register instances with remote security monitoring and management systems.

Managing Patches

You are responsible for patch management for your AMIs and live instances. We recommend that you institutionalize patch management and maintain a written procedure. While you can use third-party patch management systems for operating systems and major applications, it is a good practice to keep an inventory of all software and system components and to compare the list of security patches installed on each system against the most recent vendor security patch list, to verify that current vendor patches are installed. Implement processes to identify new security vulnerabilities and assign risk rankings to such vulnerabilities; at a minimum, rank the most critical, highest-risk vulnerabilities as "High".
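As a hedged illustration of combining bootstrapping with patching at launch, the sketch below passes a user data script that applies security updates when the instance first boots. The AMI ID, instance type, and yum-based update command are placeholder assumptions to adapt to your own distribution and build pipeline.

import boto3

ec2 = boto3.client("ec2")

# Bootstrapping script executed by cloud-init on first boot (placeholder commands).
user_data = """#!/bin/bash
yum update -y --security        # apply security patches beyond the AMI patch level
# Additional bootstrap actions (agent registration, app patches) would follow here.
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder hardened AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)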
Controlling Security for Public AMIs

Take care that you don't leave important credentials on AMIs when you share them publicly. For more information, see How To Share and Use Public AMIs in A Secure Manner.

Protecting Your System from Malware

Protect your systems in the cloud as you would protect a conventional infrastructure from threats such as viruses, worms, Trojans, rootkits, botnets, and spam. It's important to understand the implications of a malware infection for an individual instance as well as for the entire cloud system. When a user, wittingly or unwittingly, executes a program on a Linux or Windows system, the executable assumes the privileges of that user (or, in some cases, impersonates another user). The code can carry out any action that the user who launched it has permissions for. Users must ensure that they only execute trusted code.

If you execute a piece of untrusted code on your system, it's no longer your system; it belongs to someone else. If a superuser or a user with administrative privileges executes an untrusted program, the system on which the program was executed can no longer be trusted: malicious code might change parts of the operating system, install a rootkit, or establish back doors for accessing the system. It might delete data, compromise data integrity, compromise the availability of services, or disclose information in a covert or overt fashion to third parties.

Consider the instance on which the code was executed to be infected. If the infected instance is part of a single sign-on environment, or if there is an implicit trust model for access between instances, the infection can quickly spread beyond the individual instance into the entire system and beyond. An infection of this scale can quickly lead to data leakage and data and service compromise, and it can erode the company's reputation. It might also have direct financial consequences if, for example, it compromises services to third parties or over-consumes cloud resources. You must manage the threat of malware. Table 18 outlines some common approaches to malware protection.

Table 18: Approaches for protection from malware

• Untrusted AMIs – Launch instances from trusted AMIs only. Trusted AMIs include the standard Windows and Linux AMIs provided by AWS and AMIs from trusted third parties. If you derive your own custom AMIs from the standard and trusted AMIs, all the additional software and settings you apply must be trusted as well. Launching an untrusted third-party AMI can compromise and infect your entire cloud environment.
• Untrusted software – Only install and run trusted software from a trusted software provider. A trusted software provider is one who is well regarded in the industry and develops software in a secure and responsible fashion, not allowing malicious code into its software packages. Open-source software can also be trusted software, and you should be able to compile your own executables; we strongly recommend careful code reviews to ensure that source code is non-malicious. Trusted software providers often sign their software using code-signing certificates or provide MD5 or SHA-1 signatures of their products so that you can verify the integrity of the software you download (a sketch of such a verification follows this table).
• Untrusted software depots – Download trusted software from trusted sources. Random sources of software on the Internet or elsewhere on the network might actually be distributing malware inside an otherwise legitimate and reputable software package. Such untrusted parties might provide MD5 or SHA-1 signatures of the derivative package with malware in it, so such signatures should not be trusted. We advise that you set up your own internal depots of trusted software for your users to install and use, and strongly discourage users from the dangerous practice of downloading and installing software from random sources on the Internet.
• Principle of least privilege – Give users the minimum privileges they need to carry out their tasks. That way, even if a user accidentally launches an infected executable, the impact on the instance and the wider cloud system is minimized.
• Patching – Patch external-facing and internal systems to the latest security level. Worms often spread through unpatched systems on the network.
• Botnets – If an infection, whether from a conventional virus, a Trojan, or a worm, spreads beyond the individual instance and infects a wider fleet, it might carry malicious code that creates a botnet: a network of infected hosts that can be controlled by a remote adversary. Follow all the previous recommendations to avoid a botnet infection.
• Spam – Infected systems can be used by attackers to send large amounts of unsolicited mail (spam). AWS provides special controls to limit how much email an Amazon EC2 instance can send, but you are still responsible for preventing infection in the first place. Avoid SMTP open relays, which can be used to spread spam and which might also represent a breach of the AWS Acceptable Use Policy. For more information, see the Amazon Web Services Acceptable Use Policy.
• Antivirus/antispam software – Be sure to use a reputable and up-to-date antivirus and antispam solution on your system.
• Host-based IDS software – Many AWS customers install host-based IDS software, such as the open-source product OSSEC, that includes file integrity checking and rootkit detection. Use these products to analyze important system files and folders, calculate checksums that reflect their trusted state, regularly check whether these files have been modified, and alert the system administrator if so.
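The sketch below is a minimal, hedged example of the integrity check mentioned in the untrusted-software row: it computes the SHA-256 digest of a downloaded package and compares it against the value published by the software provider. The file name and expected digest are placeholders.

import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values; use the digest published by the trusted software provider.
expected = "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8"
actual = sha256_of("downloaded-package.tar.gz")

if actual != expected:
    raise SystemExit("Integrity check failed; do not install this package.")
print("Digest matches the published value.")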
If an instance is infected, antivirus software might be able to detect the infection and remove the virus. We recommend the most secure and widely recommended approach: save all the system data, reinstall all the system, platform, and application executables from a trusted source, and then restore the data only from backup.

Mitigating Compromise and Abuse

AWS provides a global infrastructure for customers to build solutions on, many of which face the Internet. Our customer solutions must operate in a manner that does no harm to the rest of the Internet community; that is, they must avoid abuse activities. Abuse activities are externally observed behaviors of AWS customers' instances or other resources that are malicious, offensive, illegal, or could harm other Internet sites.

AWS works with you to detect and address suspicious and malicious activities from your AWS resources. Unexpected or suspicious behaviors from your resources can indicate that your AWS resources have been compromised, which signals potential risks to your business. AWS uses the following mechanisms to detect abuse activities from customer resources:

• AWS internal event monitoring
• External security intelligence against AWS network space
• Internet abuse complaints against AWS resources

While the AWS abuse response team aggressively monitors and shuts down malicious abusers or fraudsters running on AWS, the majority of abuse complaints refer to customers who have legitimate business on AWS. Common causes of unintentional abuse activities include:

• Compromised resource – for example, an unpatched Amazon EC2 instance could be infected and become a botnet agent.
• Unintentional abuse – for example, an overly aggressive web crawler might be classified as a DoS attacker by some Internet sites.
• Secondary abuse – for example, an end user of the service provided by an AWS customer might post malware files on a public Amazon S3 bucket.
• False complaints – Internet users might mistake legitimate activities for abuse.

AWS is committed to working with AWS customers to prevent, detect, and mitigate abuse, and to defend against future recurrences. When you receive an AWS abuse warning, your security and operational staff must immediately investigate the matter. Delay can prolong the damage to other Internet sites and lead to reputation and legal liability for you. More importantly, the implicated abuse resource might be compromised by malicious users, and ignoring the compromise could magnify damages to your business.
Malicious, illegal, or harmful activities that use your AWS resources violate the AWS Acceptable Use Policy and can lead to account suspension. For more information, see the Amazon Web Services Acceptable Use Policy. It is your responsibility to maintain a well-behaved service as evaluated by the Internet community; if an AWS customer fails to address reported abuse activities, AWS will suspend the AWS account to protect the integrity of the AWS platform and the Internet community.

Table 19 lists best practices that can help you respond to abuse incidents.

Table 19: Best practices for mitigating abuse

• Never ignore AWS abuse communication – When an abuse case is filed, AWS immediately sends an email notification to the customer's registered email address. You can simply reply to the abuse warning email to exchange information with the AWS abuse response team; all communications are saved in the AWS abuse tracking system for future reference. The AWS abuse response team is committed to helping customers understand the nature of the complaints, and AWS helps customers mitigate and prevent abuse activities. Account suspension is the last action the AWS abuse response team takes to stop abuse activities; we work with our customers to mitigate problems and avoid having to take any punitive action. But you must respond to abuse warnings, take action to stop the malicious activities, and prevent future recurrence. Lack of customer response is the leading reason for instance and account blocks.
• Follow security best practices – The best protection against resource compromise is to follow the security best practices outlined in this document. While AWS provides certain security tools to help you establish strong defenses for your cloud environment, you must follow security best practices as you would for servers within your own data center. Consistently adopt simple defense practices such as applying the latest software patches, restricting network traffic via a firewall and/or Amazon EC2 security groups, and providing least-privilege access to users.
• Mitigation of compromises – If your computing environment has been compromised or infected, we recommend the following steps to recover to a safe state (a scripted sketch of the isolation and snapshot steps follows this table). Consider any known compromised Amazon EC2 instance or AWS resource unsafe: if your Amazon EC2 instance is generating traffic that cannot be explained by your application usage, it has probably been compromised or infected with malicious software. Shut down and rebuild that instance completely to get back to a safe state; while a fresh re-launch can be challenging in the physical world, in the cloud environment it is the first mitigation approach. You might need to carry out forensic analysis on a compromised instance to detect the root cause. Only well-trained security experts should perform such an investigation, and you should isolate the infected instance to prevent further damage and infection during the investigation. To isolate an Amazon EC2 instance for investigation, you can set up a very restrictive security group, for example closing all ports except to accept inbound SSH or RDP traffic from the single IP address from which the forensic investigator can safely examine the instance. You can also take an offline Amazon EBS snapshot of the infected instance and deliver the offline snapshot to forensic investigators for deep analysis. AWS does not have access to the private information inside your instances or other resources, so we cannot detect guest operating system or application-level compromises such as application account takeover. AWS cannot retroactively provide information (such as access logs, IP traffic logs, or other attributes) if you are not recording that information via your own tools; most deep incident investigation and mitigation activities are your responsibility. The final step to recover from compromised Amazon EC2 instances is to back up key business data, completely terminate the infected instances, and re-launch them as fresh resources. To avoid future compromises, review the security control environment on the newly launched instances; simple steps like applying the latest software patches and restricting firewalls go a long way.
• Set up a security communication email address – The AWS abuse response team uses email for abuse warning notifications. By default, this email goes to your registered email address, but if you are in a large enterprise you might want to create a dedicated response email address. You can set up additional email addresses on your Personal Information page under Configure Additional Contacts.
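The hedged sketch below automates the two isolation steps described in the mitigation row: attaching a restrictive, investigator-only security group to the suspect instance and snapshotting its EBS volumes for offline analysis. The instance ID, VPC ID, and investigator address are placeholders, and the sketch assumes boto3 with credentials that permit these actions.

import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"          # placeholder suspect instance
VPC_ID = "vpc-0123456789abcdef0"             # placeholder VPC
INVESTIGATOR_CIDR = "198.51.100.10/32"       # single address used by the investigator

# 1. Create a quarantine security group that only allows SSH from the investigator.
group = ec2.create_security_group(
    GroupName="forensic-quarantine",
    Description="Isolate a compromised instance for investigation",
    VpcId=VPC_ID,
)
ec2.authorize_security_group_ingress(
    GroupId=group["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": INVESTIGATOR_CIDR}],
    }],
)

# 2. Replace the instance's security groups with the quarantine group.
ec2.modify_instance_attribute(InstanceId=INSTANCE_ID, Groups=[group["GroupId"]])

# 3. Snapshot each attached EBS volume for offline forensic analysis.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [INSTANCE_ID]}]
)
for volume in volumes["Volumes"]:
    ec2.create_snapshot(
        VolumeId=volume["VolumeId"],
        Description="Forensic snapshot of " + INSTANCE_ID,
    )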
Using Additional Application Security Practices

Here are some additional general security best practices for your operating systems and applications:

• Always change vendor-supplied defaults before creating new AMIs or deploying new applications, including but not limited to passwords, Simple Network Management Protocol (SNMP) community strings, and security configuration.
• Remove or disable unnecessary user accounts.
• Implement a single primary function per Amazon EC2 instance to keep functions that require different security levels from co-existing on the same server. For example, implement web servers, database servers, and DNS on separate servers.
• Enable only the necessary and secure services, protocols, and daemons required for the functioning of the system. Disable all non-essential services, because they increase the security risk exposure for the instance as well as for the entire system.
• Disable or remove all unnecessary functionality, such as scripts, drivers, features, subsystems, and EBS volumes.

Configure all services with security best practices in mind, and enable security features for any required services, protocols, or daemons. Choose services such as SSH, which have built-in security mechanisms for user/peer authentication, encryption, and data integrity authentication, over less secure equivalents such as Telnet. Use SSH for file transfers rather than insecure protocols like FTP. Where you can't avoid using less secure protocols and services, introduce additional security layers around them, such as IPSec or other virtual private network (VPN) technologies to protect the communications channel at the network layer, or GSS-API, Kerberos, SSL, or TLS to protect network traffic at the application layer.

While security governance is important for all organizations, it is a best practice to enforce security policies. Wherever possible, configure your system security parameters to comply with your security policies and guidelines to prevent misuse. For administrative access to systems and applications, encrypt all non-console administrative access using strong cryptographic mechanisms. Use technologies such as SSH, user and site-to-site IPSec VPNs, or SSL/TLS to further secure remote system management.
Secure Your Infrastructure

This section provides recommendations for securing infrastructure services on the AWS platform.

Using Amazon Virtual Private Cloud (VPC)

With Amazon Virtual Private Cloud (VPC), you can create private clouds within the AWS public cloud. Each customer Amazon VPC uses IP address space allocated by the customer. You can use private IP addresses (as recommended by RFC 1918) for your Amazon VPCs, building private clouds and associated networks in the cloud that are not directly routable to the Internet. Amazon VPC provides not only isolation from other customers in the private cloud; it provides layer 3 (network layer, IP routing) isolation from the Internet as well. Table 20 lists options for protecting your applications in Amazon VPC.

Table 20: Accessing resources in Amazon VPC

• Internet only – The Amazon VPC is not connected to any of your infrastructure on premises or elsewhere; you might or might not have additional infrastructure residing on premises or elsewhere. If you need to accept connections from Internet users, you can provide inbound access by allocating Elastic IP addresses (EIPs) to only those Amazon VPC instances that need them, and you can further limit inbound connections by using security groups or NACLs for only specific ports and source IP address ranges. If you can balance the load of traffic inbound from the Internet, you don't need EIPs; you can place instances behind Elastic Load Balancing. For outbound (to the Internet) access, for example to fetch software updates or to access data on AWS public services such as Amazon S3, you can use a NAT instance to provide masquerading for outgoing connections, so no EIPs are required. Recommended protection: encrypt application and administrative traffic using SSL/TLS or build custom user VPN solutions; carefully plan routing and server placement in public and private subnets; use security groups and NACLs.
• IPSec over the Internet – AWS provides industry-standard, resilient IPSec termination infrastructure for VPC. You can establish IPSec tunnels from your on-premises or other VPN infrastructure to Amazon VPC. IPSec tunnels are established between AWS and your infrastructure endpoints; applications running in the cloud or on premises don't require any modification and can benefit from IPSec data protection in transit immediately. Recommended protection: establish a private IPSec connection using IKEv1 and IPSec via standard AWS VPN facilities (Amazon VPC VPN gateways, customer gateways, and VPN connections), or alternatively establish customer-specific VPN software infrastructure in the cloud and on premises.
• AWS Direct Connect without IPSec – With AWS Direct Connect, you can establish a connection to your Amazon VPC using private peering with AWS over dedicated links, without using the Internet. You can opt not to use IPSec in this case, subject to your data protection requirements. Recommended protection: depending on your data protection requirements, you might not need additional protection over private peering.
• AWS Direct Connect with IPSec – You can use IPSec over AWS Direct Connect links for additional end-to-end protection; see IPSec over the Internet above.
• Hybrid – Consider using a combination of these approaches. Recommended protection: employ adequate protection mechanisms for each connectivity approach you use.
You can leverage Amazon VPC IPSec or AWS Direct Connect to seamlessly integrate on-premises or other hosted infrastructure with your Amazon VPC resources in a secure fashion. With either approach, IPSec connections protect data in transit, while BGP on IPSec or AWS Direct Connect links integrates your Amazon VPC and on-premises routing domains for transparent integration of any application, even applications that don't support native network security mechanisms. Although VPC IPSec provides industry-standard, transparent protection for your applications, you might want to use additional levels of protection, such as SSL/TLS over VPC IPSec links. For more information, refer to the Amazon VPC Connectivity Options whitepaper.

Using Security Zoning and Network Segmentation

Different security requirements mandate different security controls. It is a security best practice to segment infrastructure into zones that impose similar security controls. While most of the AWS underlying infrastructure is managed by AWS operations and security teams, you can build your own overlay infrastructure components: Amazon VPCs, subnets, routing tables, segmented/zoned applications, and custom service instances such as user repositories, DNS, and time servers supplement the AWS-managed cloud infrastructure.

Usually, network engineering teams interpret segmentation as another infrastructure design component and apply network-centric access control and firewall rules to manage access. Security zoning and network segmentation are two different concepts, however: a network segment simply isolates one network from another, whereas a security zone creates a group of system components with similar security levels and common controls.

On AWS, you can build network segments using the following access control methods:

• Using Amazon VPC to define an isolated network for each workload or organizational entity.
• Using security groups to manage access to instances that have similar functions and security requirements; security groups are stateful firewalls that enable firewall rules in both directions for every allowed and established TCP session or UDP communications channel.
• Using network access control lists (NACLs), which allow stateless management of IP traffic. NACLs are agnostic of TCP and UDP sessions, but they allow granular control over IP protocols (for example GRE, IPSec ESP, ICMP) as well as control per source/destination IP address and port for TCP and UDP. NACLs work in conjunction with security groups and can allow or deny traffic even before it reaches the security group.
• Using host-based firewalls to control access to each instance.
• Creating a threat protection layer in the traffic flow and enforcing all traffic to traverse the zone.
• Applying access control at other layers (for example, applications and services).

Traditional environments require separate network segments, representing separate broadcast entities, to route traffic via a central security enforcement system such as a firewall. The concept of security groups in the AWS cloud makes this requirement obsolete: security groups are a logical grouping of instances, and they allow the enforcement of inbound and outbound traffic rules on these instances regardless of the subnet where the instances reside.

Creating a security zone requires additional controls per network segment, often including:

• Shared access control – a central identity and access management (IDAM) system. Note that although federation is possible, this will often be separate from IAM.
• Shared audit logging – shared logging is required for event analysis, correlation, and tracking of security events.
• Shared data classification – see Table 1: Sample Asset Matrix in the Design Your ISMS to Protect Your Assets section for more information.
• Shared management infrastructure – various components such as antivirus/antispam systems, patching systems, and performance monitoring systems.
• Shared security (confidentiality/integrity) requirements – often considered in conjunction with data classification.

To assess your network segmentation and security zoning requirements, answer the following questions:

• Do I control inter-zone communication? Can I use network segmentation tools to manage communications between security zones A and B? Usually, access control elements such as security groups, ACLs, and network firewalls should build the walls between security zones. Amazon VPCs by default build inter-zone isolation walls.
• Can I monitor inter-zone communication using an IDS/IPS/DLP/SIEM/NBAD system, depending on business requirements? Blocking access and managing access are different things: porous communication between security zones mandates sophisticated security monitoring tools between zones. The horizontal scalability of AWS instances makes it possible to zone each instance at the operating system level and leverage host-based security monitoring agents.
• Can I apply per-zone access control rights? One of the benefits of zoning is controlling egress access. It is technically possible to control access to resources such as Amazon S3 and Amazon SQS via resource policies.
• Can I manage each zone using dedicated management channels and roles? Role-based access control for privileged access is a common requirement. You can use IAM to create groups and roles on AWS to create different privilege levels, and you can mimic the same approach with application and system users. One of the key features of Amazon VPC-based networks is support for multiple elastic network interfaces; security engineers can create a management overlay network using dual-homed instances.
• Can I apply per-zone confidentiality and integrity rules? Per-zone encryption, data classification, and DRM simply increase the overall security posture. If the security requirements differ per security zone, then the data security requirements must differ as well, and it is always a good policy to use different encryption options with rotating keys in each security zone (a sketch of one way to manage per-zone keys follows this list).
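The sketch below is one hedged way to keep a distinct encryption key per security zone and rotate it, using the third-party cryptography package rather than any specific AWS service; key storage and distribution are out of scope, and the zone names are placeholders.

from cryptography.fernet import Fernet, MultiFernet

# Maintain a separate key (and key history) per security zone; storing and
# distributing these keys securely is out of scope for this sketch.
zone_keys = {
    "zone-a": [Fernet.generate_key()],
    "zone-b": [Fernet.generate_key()],
}

def encrypt_for_zone(zone, data):
    # Always encrypt with the newest key for the zone.
    return Fernet(zone_keys[zone][0]).encrypt(data)

def rotate_zone_key(zone, token):
    """Introduce a new key for the zone and re-encrypt an existing token under it."""
    zone_keys[zone].insert(0, Fernet.generate_key())
    fernets = [Fernet(k) for k in zone_keys[zone]]
    return MultiFernet(fernets).rotate(token)

token = encrypt_for_zone("zone-a", b"record belonging to zone A")
token = rotate_zone_key("zone-a", token)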
AWS provides flexible security zoning options. Security engineers and architects can leverage the following AWS features to build isolated security zones/segments on AWS, per Amazon VPC access control:

• Per-subnet access control
• Per-security group access control
• Per-instance access control (host-based)
• Per-Amazon VPC routing block
• Per-resource policies (S3/SNS/SQS)
• Per-zone IAM policies
• Per-zone log management
• Per-zone IAM users and administrative users
• Per-zone log feed
• Per-zone administrative channels (roles, interfaces, management consoles)
• Per-zone AMIs
• Per-zone data storage resources (Amazon S3 buckets or Glacier archives)
• Per-zone user directories
• Per-zone applications/application controls

With elastic cloud infrastructure and automated deployment, you can apply the same security controls across all AWS regions. Repeatable and uniform deployments improve your overall security posture.

Strengthening Network Security

Following the shared responsibility model, AWS configures infrastructure components such as data center networks, routers, switches, and firewalls in a secure fashion. You are responsible for controlling access to your systems in the cloud and for configuring network security within your Amazon VPC, as well as secure inbound and outbound network traffic.

While applying authentication and authorization for resource access is essential, it doesn't prevent adversaries from acquiring network-level access and trying to impersonate authorized users. Controlling access to applications and services based on the network locations of users provides an additional layer of security. For example, a web-based application with strong user authentication could also benefit from an IP address-based firewall that limits source traffic to a specific range of IP addresses, and from an intrusion prevention system, to limit security exposure and minimize the potential attack surface for the application.

Best practices for network security in the AWS cloud include the following:

• Always use security groups. They provide stateful firewalls for Amazon EC2 instances at the hypervisor level. You can apply multiple security groups to a single instance and to a single ENI.
• Augment security groups with network ACLs. They are stateless, but they provide fast and efficient controls. Network ACLs are not instance-specific, so they can provide another layer of control in addition to security groups, and you can apply separation of duties to ACL management and security group management.
• Use IPSec or AWS Direct Connect for trusted connections to other sites. Use a virtual private gateway (VGW) where Amazon VPC-based resources require remote network connectivity.
• Protect data in transit to ensure the confidentiality and integrity of data, as well as the identities of the communicating parties.
• For large-scale deployments, design network security in layers. Instead of creating a single layer of network security protection, apply network security at the external, DMZ, and internal layers.
• Use VPC Flow Logs for further visibility, as they enable you to capture information about the IP traffic going to and from network interfaces in your VPC (a sketch of enabling flow logs follows this list).
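As a hedged illustration of the last recommendation, the sketch below enables VPC Flow Logs for a VPC and delivers them to a CloudWatch Logs group. The VPC ID, log group name, and IAM role ARN are placeholders that must already exist with appropriate permissions.

import boto3

ec2 = boto3.client("ec2")

# Placeholder identifiers; the log group and delivery role must be created beforehand.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",                     # capture accepted and rejected traffic
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-delivery",
)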
Many of the AWS service endpoints that you interact with do not provide native firewall functionality or access control lists. AWS monitors and protects these endpoints with state-of-the-art network and application-level control systems. You can use IAM policies to restrict access to your resources based on the source IP address of the request.

Securing Periphery Systems: User Repositories, DNS, NTP

Overlay security controls are effective only on top of a secure infrastructure. DNS query traffic is a good example of this type of control: when DNS systems are not properly secured, DNS client traffic can be intercepted, and DNS names in queries and responses can be spoofed. Spoofing is a simple but efficient attack against an infrastructure that lacks basic controls; SSL/TLS can provide additional protection.

Some AWS customers use Amazon Route 53, which is a secure DNS service. If you require internal DNS, you can implement a custom DNS solution on Amazon EC2 instances. DNS is an essential part of the solution infrastructure and as such becomes a critical part of your security management plan. All DNS systems, as well as other important custom infrastructure components, should apply the following controls.

Table 21: Controls for periphery systems

• Separate administrative-level access – Implement role separation and access controls to limit access to such services, often separate from the access control required for application access or access to other parts of the infrastructure.
• Monitoring, alerting, audit trail – Log and monitor authorized and unauthorized activity.
• Network-layer access control – Restrict network access to only the systems that require it. If possible, apply protocol enforcement for all network-level access attempts (that is, enforce the relevant RFC standards for NTP and DNS).
• Latest stable software with security patches – Ensure that the software is patched and not subject to any known vulnerabilities or other risks.
• Continuous security testing (assessments) – Ensure that the infrastructure is tested regularly.
• All other security controls and processes in place – Make sure the periphery systems follow your information security management system (ISMS) best practices, in addition to service-specific custom security controls.

In addition to DNS, other infrastructure services might require specific controls. Centralized access control is essential for managing risk. The IAM service provides role-based identity and access management for AWS, but AWS does not provide end-user repositories like Active Directory, LDAP, or RADIUS for your operating systems and applications. Instead, you establish user identification and authentication systems alongside Authentication, Authorization, Accounting (AAA) servers, or sometimes proprietary database tables. All identity and access management servers for user platforms and applications are critical to security and require special attention.

Time servers are also critical custom services. They are essential in many security-related transactions, including log time-stamping and certificate validation. It is important to use a centralized time server and synchronize all systems with the same time server. The Payment Card Industry (PCI) Data Security Standard (DSS) proposes a good approach to time synchronization:

• Verify that time synchronization technology is implemented and kept current.
• Obtain and review the process for acquiring, distributing, and storing the correct time within the organization, and review the time-related system parameter settings for a sample of system components.
• Verify that only designated central time servers receive time signals from external sources, and that time signals from external sources are based on International Atomic Time or Coordinated Universal Time (UTC).
• Verify that the designated central time servers peer with each other to keep accurate time, and that other internal servers receive time only from the central time servers.
• Review system configurations and time synchronization settings to verify that access to time data is restricted to only personnel who have a business need to access time data.
• Review system configurations, time synchronization settings, and processes to verify that any changes to time settings on critical systems are logged, monitored, and reviewed.
• Verify that the time servers accept time updates from specific, industry-accepted external sources. (This helps prevent a malicious individual from changing the clock.) You have the option of receiving those updates encrypted with a symmetric key, and you can create access control lists that specify the IP addresses of client machines that will be updated. (This prevents unauthorized use of internal time servers.)

Validating the security of custom infrastructure is an integral part of managing security in the cloud.

Building Threat Protection Layers

Many organizations consider layered security to be a best practice for protecting network infrastructure. In the cloud, you can use a combination of Amazon VPC, implicit firewall rules at the hypervisor layer, network access control lists, security groups, host-based firewalls, and IDS/IPS systems to create a layered solution for network security. While security groups, NACLs, and host-based firewalls meet the needs of many customers, if you're looking for defense in depth, you should deploy a network-level security control appliance, and you should do so inline, where traffic is intercepted and analyzed prior to being forwarded to its final destination, such as an application server.

Figure 6: Layered Network Defense in the Cloud

Examples of inline threat protection technologies include the following:

• Third-party firewall devices installed on Amazon EC2 instances (also known as soft blades)
• Unified threat management (UTM) gateways
• Intrusion prevention systems
• Data loss management gateways
• Anomaly detection gateways
• Advanced persistent threat detection gateways

The following key features in the Amazon VPC infrastructure support deploying threat protection layer technologies:

• Support for multiple layers of load balancers: When you use threat protection gateways to secure clusters of web servers, application servers, or other critical servers, scalability is a key issue. AWS reference architectures underline deployment of external and internal load balancers for threat management and internal server load distribution and high availability. You can leverage Elastic Load Balancing or your custom load balancer instances for your multi-tiered designs. You must manage session persistence at the load balancer level for stateful gateway deployments.
• Support for multiple IP addresses: When threat protection gateways protect a presentation layer that consists of several instances (for example, web servers, email servers, or application servers), these multiple instances must use one security gateway in a many-to-one relationship. AWS provides support for multiple IP addresses on a single network interface.
• Support for multiple elastic network interfaces (ENIs): Threat protection gateways must be dual-homed and, in many cases depending on the complexity of the network, must have multiple interfaces. Using ENIs, AWS supports multiple network interfaces on several different instance types, which makes it possible to deploy multi-zone security features.

Latency, complexity, and other architectural constraints sometimes rule out implementing an inline threat management layer, in which case you can choose one of the following alternatives:

• A distributed threat protection solution: This approach installs threat protection agents on individual instances in the cloud. A central threat management server communicates with all host-based threat management agents for log collection, analysis, correlation, and active threat response purposes.
• An overlay network threat protection solution: Build an overlay network on top of your Amazon VPC using technologies such as GRE tunnels or vtun interfaces, or by forwarding traffic on another ENI to a centralized network traffic analysis and intrusion detection system, which can provide active or passive threat response.

Test Security

Every ISMS must ensure regular reviews of the effectiveness of security controls and policies. To guarantee the efficiency of controls against new threats and vulnerabilities, customers need to ensure that the infrastructure is protected against attacks. Verifying existing controls requires testing. AWS customers should undertake a number of test approaches:

• External vulnerability assessment: A third party evaluates system vulnerabilities with little or no knowledge of the infrastructure and its components.
• External penetration tests: A third party with little or no knowledge of the system actively tries to break into it, in a controlled fashion.
• Internal gray/white box review of applications and platforms: A tester who has some or full knowledge of the system validates the efficiency of the controls in place, or evaluates applications and platforms for known vulnerabilities.

The AWS Acceptable Use Policy outlines permitted and prohibited behavior in the AWS cloud and defines security violations and network abuse. AWS supports both inbound and outbound penetration testing in the cloud, although you must request permission to conduct penetration tests. For more information, see the Amazon Web Services Acceptable Use Policy.

To request penetration testing for your resources, complete and submit the AWS Vulnerability Penetration Testing Request Form. You must be logged into the AWS Management Console using the credentials associated with the instances you want to test, or the form will not pre-populate correctly. For third-party penetration testing, you must fill out the form yourself and then notify the third parties when AWS grants approval. The form includes information about the instances to be tested and the expected start and end dates and times of the tests, and it requires you to read and agree to the terms and conditions specific to penetration testing and to the use of appropriate tools for testing. AWS policy does not permit testing of m1.small or t1.micro instance types. After you submit the form, you will receive a response confirming receipt of the request within one business day.
If you need more time for additional testing, you can reply to the authorization email asking to extend the test period; each request is subject to a separate approval process.

Managing Metrics and Improvement

Measuring control effectiveness is an integral process of each ISMS. Metrics provide visibility into how well controls are protecting the environment, and risk management often depends on qualitative and quantitative metrics. Table 22 outlines measurement and improvement best practices.

Table 22: Measuring and improving metrics

• Monitoring and reviewing procedures and other controls – Promptly detect errors in the results of processing; promptly identify attempted and successful security breaches and incidents; enable management to determine whether the security activities delegated to people or implemented by information technology are performing as expected; help detect security events and thereby prevent security incidents through the use of indicators; and determine whether the actions taken to resolve a breach of security were effective.
• Regular reviews of the effectiveness of the ISMS – Consider results from security audits, incidents, and effectiveness measurements, along with suggestions and feedback from all interested parties; ensure that the ISMS meets the policy and objectives; and review security controls.
• Measure controls effectiveness – Verify that security requirements have been met.
• Risk assessment reviews at planned intervals – Review the residual risks and the identified acceptable levels of risk, taking into account changes to the organization, technology, business objectives and processes, and identified threats; the effectiveness of the implemented controls; and external events, such as changes to the legal or regulatory environment, changed contractual obligations, and changes in the social climate.
• Internal ISMS audits – First-party audits (internal audits) are conducted by, or on behalf of, the organization itself for internal purposes.
• Regular management reviews – Ensure that the scope remains adequate and identify improvements in the ISMS process.
• Update security plans – Take into account the findings of monitoring and reviewing activities, and record actions and events that could affect the ISMS's effectiveness or performance.

Mitigating and Protecting Against DoS & DDoS Attacks

Organizations running Internet applications recognize the risk of being the subject of Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks by competitors, activists, or individuals. Risk profiles vary depending on the nature of the business, recent events, the political situation, and technology exposure. Mitigation and protection techniques are similar to those used on premises.

If you're concerned about DoS/DDoS attack protection and mitigation, we strongly advise you to enroll in AWS Premium Support services so that you can proactively and reactively involve AWS support services in the process of mitigating attacks or containing ongoing incidents in your environment on AWS.

Some services, such as Amazon S3, use a shared infrastructure, which means that multiple AWS accounts access and store data on the same set of Amazon S3 infrastructure components. In this case, a DoS/DDoS attack on abstracted services is likely to affect multiple customers. AWS provides both mitigation and protection controls for DoS/DDoS on abstracted services from AWS to minimize the impact to you in the event of such an attack.
You are not required to provide additional DoS/DDoS protection for such services, but we do advise that you follow the best practices outlined in this whitepaper. Other services, such as Amazon EC2, use shared physical infrastructure, but you are expected to manage the operating system, the platform, and customer data. For such services, we need to work together to provide effective DDoS mitigation and protection.
AWS uses proprietary techniques to mitigate and contain DoS/DDoS attacks directed at the AWS platform. To avoid interference with actual user traffic, though, and following the shared responsibility model, AWS does not provide mitigation or actively block network traffic affecting individual Amazon EC2 instances: only you can determine whether excessive traffic is expected and benign, or part of a DoS/DDoS attack.
While a number of techniques can be used to mitigate DoS/DDoS attacks in the cloud, we strongly recommend that you establish a security and performance baseline that captures system parameters under normal circumstances, potentially also considering daily, weekly, annual, or other patterns applicable to your business. Some DoS/DDoS protection techniques, such as statistical and behavioral models, can detect anomalies compared to a given baseline of normal operation. For example, a customer who typically expects 2,000 concurrent sessions to their website at a specific time of day might trigger an alarm using Amazon CloudWatch and Amazon SNS if the current number of concurrent sessions exceeds twice that amount (4,000). (A minimal sketch of such a baseline alarm follows Table 23.)
Consider the same components that apply to on-premises deployments when you establish your secure presence in the cloud. Table 23 outlines common approaches for DoS/DDoS mitigation and protection in the cloud.
Table 23: Techniques for mitigation and protection from DoS/DDoS attacks
• Firewalls (security groups, network access control lists, and host-based firewalls): Traditional firewall techniques limit the attack surface for potential attackers and deny traffic to and from the source or destination of the attack. They let you manage the list of allowed destination servers and services (IP addresses and TCP/UDP ports), manage the list of allowed traffic sources and protocols, and explicitly deny access, temporarily or permanently, from specific IP addresses.
• Web application firewalls (WAF): Web application firewalls provide deep packet inspection for web traffic. They protect against platform- and application-specific attacks, protocol sanity attacks, and unauthorized user access.
• Host-based or inline IDS/IPS systems: IDS/IPS systems can use statistical/behavioral or signature-based algorithms to detect and contain network attacks and Trojans, protecting against all types of attacks.
• Traffic shaping / rate limiting: DoS/DDoS attacks often deplete network and system resources; rate limiting is a good technique for protecting scarce resources from overconsumption. It protects against ICMP flooding and application request flooding.
• Embryonic session limits: TCP SYN flooding attacks can take place in both simple and distributed form. In either case, if you have a baseline of the system, you can detect considerable deviations from the norm in the number of half-open (embryonic) TCP sessions and drop any further TCP SYN packets from the specific sources. This protects against TCP SYN flooding.
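To make the baseline-deviation example above concrete, the following is a minimal boto3 sketch. It assumes a hypothetical custom metric named ConcurrentSessions that your application publishes under a MyWebApp namespace, and an existing SNS topic for operator notification; none of these names come from the whitepaper itself.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when concurrent sessions exceed twice the expected baseline of ~2,000,
# sustained for five consecutive one-minute periods, and notify operators via SNS.
cloudwatch.put_metric_alarm(
    AlarmName="concurrent-sessions-baseline-breach",
    AlarmDescription="Concurrent sessions exceed 2x the normal baseline",
    Namespace="MyWebApp",                 # hypothetical custom namespace
    MetricName="ConcurrentSessions",      # hypothetical application-published metric
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=5,
    Threshold=4000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
)
```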
Along with conventional approaches for DoS/DDoS attack mitigation and protection, the AWS cloud provides capabilities based on its elasticity. DoS/DDoS attacks are attempts to deplete limited compute, memory, disk, or network resources, which often works against on-premises infrastructure. By definition, however, the AWS cloud is elastic, in the sense that new resources can be employed on demand if and when required. For example, you might be under a DDoS attack from a botnet that generates hundreds of thousands of requests per second that are indistinguishable from legitimate user requests to your web servers. Using conventional containment techniques, you would start denying traffic from specific sources, often entire geographies, on the assumption that there are only attackers and no valid customers there. But these assumptions and actions result in a denial of service to your customers themselves.
In the cloud, you have the option of absorbing such an attack. Using AWS technologies like Elastic Load Balancing and Auto Scaling, you can configure the web servers to scale out when under attack (based on load) and shrink back when the attack stops. Even under heavy attack, the web servers could scale to perform and provide an optimal user experience by leveraging cloud elasticity. By absorbing the attack you might incur additional AWS service costs, but sustaining such an attack is so financially challenging for the attacker that absorbed attacks are unlikely to persist. (A minimal scaling-policy sketch appears at the end of this section.)
You could also use Amazon CloudFront to absorb DoS/DDoS flooding attacks. AWS WAF integrates with Amazon CloudFront to help protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. Potential attackers trying to attack content behind CloudFront are likely to send most or all requests to CloudFront edge locations, where the AWS infrastructure would absorb the extra requests with minimal to no impact on the back-end customer web servers. Again, there would be additional AWS service charges for absorbing the attack, but you should weigh this against the costs the attacker would incur in order to continue the attack as well.
In order to effectively mitigate, contain, and generally manage your exposure to DoS/DDoS attacks, you should build a layered defense model, as outlined elsewhere in this document.
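As a sketch of the absorb-the-attack option described above, the following boto3 call attaches a target-tracking scaling policy to a hypothetical Auto Scaling group sitting behind a load balancer, so the web tier scales out while load is high and back in when it subsides. The group name and target value are illustrative assumptions, not values from the whitepaper.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep average CPU near 50% by adding instances under load (legitimate traffic or
# attack traffic the business chooses to absorb) and removing them when load drops.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",       # hypothetical Auto Scaling group
    PolicyName="absorb-load-cpu-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```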
Manage Security Monitoring, Alerting, Audit Trail, and Incident Response
The shared responsibility model requires you to monitor and manage your environment at the operating system and higher layers. You probably already do this on premises or in other environments, so you can adapt your existing processes, tools, and methodologies for use in the cloud. For extensive guidance on security monitoring, see the ENISA Procure Secure whitepaper, which outlines the concepts of continuous security monitoring in the cloud (see Further Reading).
Security monitoring starts with answering the following questions:
• What parameters should we measure?
• How should we measure them?
• What are the thresholds for these parameters?
• How will escalation processes work?
• Where will data be kept?
Perhaps the most important question you must answer is "What do I need to log?" We recommend configuring the following areas for logging and analysis:
• Actions taken by any individual with root or administrative privileges
• Access to all audit trails
• Invalid logical access attempts
• Use of identification and authentication mechanisms
• Initialization of audit logs
• Creation and deletion of system-level objects
When you design your log file, keep the considerations in Table 24 in mind.
Table 24: Log file considerations
• Log collection: Note how log files are collected. Often operating system, application, or third-party/middleware agents collect log file information.
• Log transport: When log files are centralized, transfer them to the central location in a secure, reliable, and timely fashion.
• Log storage: Centralize log files from multiple instances to facilitate retention policies as well as analysis and correlation.
• Log taxonomy: Present different categories of log files in a format suitable for analysis.
• Log analysis/correlation: Log files provide security intelligence after you analyze them and correlate events in them. You can analyze logs in real time or at scheduled intervals.
• Log protection/security: Log files are sensitive. Protect them through network control, identity and access management, encryption, data integrity authentication, and tamper-proof time stamping.
You might have multiple sources of security logs. Various network components, such as firewalls, IDP, DLP, and AV systems, as well as the operating system, platforms, and applications, will generate log files. Many will be related to security, and those need to be part of the log file strategy; others, which are not related to security, are better excluded from the strategy. Logs should include all user activities, exceptions, and security events, and you should keep them for a predetermined time for future investigations. To determine which log files to include, answer the following questions:
• Who are the users of the cloud systems? How do they register, how do they authenticate, and how are they authorized to access resources?
• Which applications access cloud systems? How do they get credentials, how do they authenticate, and how are they authorized for such access?
• Which users have privileged access (administrative-level access) to AWS infrastructure, operating systems, and applications? How do they authenticate, and how are they authorized for such access?
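Much of the account-level answer to these questions can be captured with AWS CloudTrail, which records API actions (who, what, when, and from where) and delivers them to a central bucket. The following is a minimal boto3 sketch, assuming a pre-created S3 bucket whose policy already allows CloudTrail delivery; the trail and bucket names are placeholders.

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Create a multi-region trail that also records global-service events (IAM, STS),
# then turn on delivery of log files to the central bucket.
trail = cloudtrail.create_trail(
    Name="org-security-audit-trail",                # placeholder trail name
    S3BucketName="central-security-logs-example",   # placeholder, pre-created bucket
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)
cloudtrail.start_logging(Name=trail["Name"])
print("CloudTrail logging started for", trail["Name"])
```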
Many services provide built-in access control audit trails (for example, Amazon S3 and Amazon EMR provide such logs), but in some cases your business requirements for logging might be higher than what's available from the native service log. In such cases, consider using a privilege escalation gateway to manage access control logs and authorization. When you use a privilege escalation gateway, you centralize all access to the system via a single (clustered) gateway. Instead of making direct calls to the AWS infrastructure, your operating systems, or applications, all requests are performed by proxy systems that act as trusted intermediaries to the infrastructure. Often such systems are required to provide or do the following:
• Automated password management for privileged access: Privileged access control systems can rotate passwords and credentials automatically based on given policies, using built-in connectors for Microsoft Active Directory, UNIX, LDAP, MySQL, etc.
• Regularly run least-privilege checks using the AWS IAM user Access Advisor and AWS IAM user last-used access keys.
• User authentication on the front end and delegated access to services from AWS on the back end: Typically a website that provides single sign-on for all users. Users are assigned access privileges based on their authorization profiles. A common approach is using token-based authentication for the website and acquiring click-through access to other systems allowed in the user's profile.
• Tamper-proof audit trail storage of all critical activities.
• Different sign-on credentials for shared accounts: Sometimes multiple users need to share the same password. A privilege escalation gateway can allow remote access without disclosing the shared account.
• Restrict leapfrogging or remote desktop hopping by allowing access only to target systems.
• Manage commands that can be used during sessions: For interactive sessions such as SSH, appliance management, or the AWS CLI, such solutions can enforce policies by limiting the range of available commands and actions.
• Provide an audit trail for terminal and GUI-based sessions for compliance and security purposes.
• Log everything and alert based on given thresholds for the policies.
Using Change Management Logs
By managing security logs, you can also track changes. These might include planned changes, which are part of the organization's change control process (sometimes referred to as MACD: Move/Add/Change/Delete), ad hoc changes, or unexpected changes such as incidents. Changes might occur on the infrastructure side of the system, but they might also be related to other categories, such as changes in code repositories, gold image/application inventory changes, process and policy changes, or documentation changes. As a best practice, we recommend employing a tamper-proof log repository for all the above categories of changes. Correlate and interconnect change management and log management systems. You need a dedicated user with privileges for deleting or modifying change logs; for most systems, devices, and applications, change logs should be tamper-proof, and regular users should not have privileges to manage the logs. Regular users should be unable to erase evidence from change logs. AWS customers sometimes use file integrity monitoring or change detection software on logs to ensure that existing log data cannot be changed without generating alerts, while adding new entries does not generate alerts.
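A very small illustration of that file-integrity idea, using nothing but the Python standard library, is to keep a hash manifest of closed (rotated) log files and alert when a previously recorded file changes. The paths and file patterns below are assumptions made for the sketch, not values from the whitepaper.

```python
import hashlib
import json
import pathlib

LOG_DIR = pathlib.Path("/var/log/audit")                  # assumed central log directory
MANIFEST = pathlib.Path("/var/log/audit-manifest.json")   # assumed manifest location

def sha256_of(path: pathlib.Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hash every rotated log file currently present.
current = {p.name: sha256_of(p) for p in sorted(LOG_DIR.glob("*.log.*"))}

# Compare against the previous manifest: a changed hash on an existing file is an
# alert; files seen for the first time are simply recorded without alerting.
previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
for name, digest in current.items():
    if name in previous and previous[name] != digest:
        print(f"ALERT: previously recorded log file was modified: {name}")

MANIFEST.write_text(json.dumps(current, indent=2))
```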
All logs for system components must be reviewed, at a minimum, on a daily basis. Log reviews must include those servers that perform security functions, such as intrusion detection system (IDS) servers and authentication, authorization, and accounting protocol (AAA) servers (for example, RADIUS). To facilitate this process, you can use log harvesting, parsing, and alerting tools.
Managing Logs for Critical Transactions
For critical applications, all Add, Change/Modify, and Delete activities or transactions must generate a log entry. Each log entry should contain the following information:
• User identification information
• Type of event
• Date and time stamp
• Success or failure indication
• Origination of event
• Identity or name of affected data, system component, or resource
Protecting Log Information
Logging facilities and log information must be protected against tampering and unauthorized access. Administrator and operator logs are often targets for erasing trails of activities. Common controls for protecting log information include the following:
• Verifying that audit trails are enabled and active for system components
• Ensuring that only individuals who have a job-related need can view audit trail files
• Confirming that current audit trail files are protected from unauthorized modifications via access control mechanisms, physical segregation, and/or network segregation
• Ensuring that current audit trail files are promptly backed up to a centralized log server or media that is difficult to alter
• Verifying that logs for external-facing technologies (for example, wireless, firewalls, DNS, mail) are offloaded or copied onto a secure, centralized internal log server or media
• Using file integrity monitoring or change detection software for logs, by examining system settings, monitored files, and results from monitoring activities
• Obtaining and examining security policies and procedures to verify that they include procedures to review security logs at least daily and that follow-up to exceptions is required
• Verifying that regular log reviews are performed for all system components
• Ensuring that security policies and procedures include audit log retention policies and require audit log retention for a period of time defined by the business and compliance requirements
Logging Faults
In addition to monitoring MACD events, monitor software or component failures. Faults might be the result of hardware or software failure, and while they might have service and data availability implications, they might not be related to a security incident. Or a service failure might be the result of deliberate malicious activity, such as a denial of service attack. In any case, faults should generate alerts, and then you should use event analysis and correlation techniques to determine the cause of the fault and whether it should trigger a security response.
Conclusion
The AWS Cloud Platform provides a number of important benefits to modern businesses, including flexibility, elasticity, utility billing, and reduced time to market. It provides a range of security services and features that you can use to manage the security of your assets and data in AWS. While AWS provides an excellent service management layer around infrastructure or platform services, businesses are still responsible for protecting the confidentiality, integrity, and availability of their data in the cloud, and for meeting specific business requirements for information protection.
Conventional security and compliance concepts still apply in the cloud. Using the various best practices highlighted in this whitepaper, we encourage you to build a set of security policies and processes for your organization so you can deploy applications and data quickly and securely.
Contributors
Contributors to this document include:
• Dob Todorov
• Yinal Ozkan
Further Reading
For additional information, see:
• Amazon Web Services: Overview of Security Processes
• Amazon Web Services Risk and Compliance Whitepaper
• Amazon VPC Network Connectivity Options
• AWS SDK support for Amazon S3 client-side encryption
• Amazon S3 Default Encryption for S3 Buckets
• AWS Security Partner Solutions
• Identity Federation Sample Application for an Active Directory Use Case
• Single Sign-on to Amazon EC2 .NET Applications from an On-Premises Windows Domain
• Authenticating Users of AWS Mobile Applications with a Token Vending Machine
• Client-Side Data Encryption with the AWS SDK for Java and Amazon S3
• Amazon Web Services Acceptable Use Policy
• ENISA Procure Secure: A Guide to Monitoring of Security Service Levels in Cloud Contracts
• The PCI Data Security Standard
• ISO/IEC 27001:2013
Document Revisions
August 2016: First publication
ArchivedCrossDomain Solutions on AWS December 2016 This paper has been archived For the latest technical content see https://docsawsamazoncom/whitepapers/latest/cross domainsolutions/welcomehtml Archived© 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 What is a CrossDomain Solution? 1 OneWay Transfer Device 1 Multidomain Data Guard 2 Traditional Deployment 2 How Is a CrossDomain Solution Different from Other Security Appliances? 3 When is a CrossDomain Solution Required? 4 Connecting OnPremises Infrastructure 4 Amazon VPC 4 AWS Direct Connect 5 Amazon EC2 5 Amazon S3 5 AWS Advantages for Secure Workloads 6 Cost 6 Elasticity 6 PurposeBuilt Infrastructure 6 Auditability 6 Security and Governance 7 Sample Architectures 7 Deploying a CDS via the Internet 7 Deploying a CDS via AWS Direct Connect 8 Deploying a CDS across Multiple Regions 9 Deploying a CDS in a Colocation Environment 11 Conclusion 11 Contributors 12 Further Reading 12 ArchivedNotes 12 ArchivedAbstract Many corporations government entities and institutions maintain multiple security domains as part of their information technology (IT) infrastructure For the purposes of this document a security domain is an environment with a set of resources accessible only by users or entities who have permitted access to those resources The resources are likely to include the resource’s network fabric as defined by the security domain’s policy Some organization’s users need to interact with multiple domains simultaneously or a system or user within one security domain needs to communicate directly or obtain data from a system or user in a separate security domain For security domains with highly sensitive data a crossdomain solution (CDS) can be deployed to allow data transfer between security domains while ensuring integrity of the domain’s security perimeter ArchivedAmazon Web Services – CrossDomain Solutions: OnPremises to AWS Page 1 Introduction To control access across security domains it’s common to employ a specialized hardware solution such as a crossdomain solution (CDS ) to manage and control the interactions between two security boundaries When security domains extend across data centers or expand into the cloud you can encounter additional challenges when including the hardware solution you want in your architecture You are not limited to any particular vendor solution to deploy a CDS on the AWS Cloud However one challenge is that you cannot place your own hardware within an AWS data center This requirement is part of the AWS commitment to maintain security within AWS data centers This whitepaper provides best practices for designing hybrid architectures where AWS services are incorporated as one or 
more security domains within a multidomain environment.
What is a Cross-Domain Solution?
The Committee on National Security Systems (CNSS) defines a CDS as a form of controlled interface that enables manual or automatic access or transfer of information between different security domains. Two types of CDS are discussed in this whitepaper: a one-way transfer (OWT) device and a multidomain data guard.
One-Way Transfer Device
An OWT device allows data to flow in a single direction from one security domain to another. A common implementation of an OWT device uses a fiber optic cable. To ensure data flows only in one direction, the OWT uses a single optical transmitter. The optical transmitter is placed on only one end of the fiber optic cable (e.g., the data producer) and the optical receiver is placed on the opposite end (e.g., the data consumer). OWT devices are often referred to as diodes due to their ability to transfer data only in one direction, similar to the semiconductor of the same name.
Multidomain Data Guard
A multidomain data guard enables bidirectional data flow between security domains. A common implementation of a multidomain data guard is a single server running a trusted, hardened operating system with multiple network interface cards (NICs). Each NIC provides a physical demarcation for a single security domain. The multidomain data guard inspects all data transmitted between domains to ensure the data remains in compliance with a unique rule set that is specific to the guard's deployment.
Traditional Deployment
Figure 1 shows a traditional cross-domain solution deployment between two security domains. Security Domain "A" is connected to Security Domain "B" using a CDS. If the CDS is an OWT device, resources deployed in Network "A" can communicate to resources deployed in Network "B" by sending data via the CDS. If instead the CDS is a multidomain data guard, resources in either security domain can communicate with the other security domain by sending data via the CDS. In the following example, the CDS is administered and also physically located within the protections of Security Domain "B".
Figure 1: Traditional CDS deployment
How Is a Cross-Domain Solution Different from Other Security Appliances?
A CDS differs from other security appliances such as firewalls, web application firewalls (WAFs), and intrusion detection or prevention systems. In addition to providing physical network and logical isolation between domains, cross-domain solutions offer additional security mechanisms such as virus scanning, auditing and logging, and deep content inspection in a single solution. In combination, when the CDS is included in a larger security program, these capabilities help prevent both exploitation and data leakage.
When is a Cross-Domain Solution Required?
A business decision to employ a CDS should evaluate the high cost of ownership involved with integration, procurement, and maintenance. Be aware that a high degree of customization is often required for each individual CDS deployment. You would often deploy a CDS due to regulatory or policy requirements, or in situations where a data breach would be catastrophic to your organization. Because of these reasons, the CDS is an integral component of the architecture and may even be required to achieve an Authority to Operate (ATO) from your organization's security and compliance program. Once an ATO is achieved, it can be cumbersome to make changes to a CDS configuration (e.g., alter the message rule set) without affecting the ATO's approval. If these drawbacks outweigh the additional security provided by a CDS, you should consider other options such as WAFs.
Connecting On-Premises Infrastructure
AWS provides service offerings to help you connect your existing on-premises infrastructures. The following sections describe some of the key services that AWS offers, including Amazon Virtual Private Cloud (Amazon VPC), AWS Direct Connect, Amazon Elastic Compute Cloud (Amazon EC2), and Amazon Simple Storage Service (Amazon S3).
Amazon VPC
Amazon VPC lets you provision a logically isolated section of your AWS environment so that you can launch resources in a virtual network you define. You have complete control over your virtual networking environment, including the selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. The network configuration for a VPC is easily customized using multiple layers of security, including security groups and network access control lists. The security layers control access to Amazon EC2 instances in each subnet. Additionally, you can create a hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC and leverage AWS as an extension of your corporate data center.
AWS Direct Connect
Using Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment. Direct Connect enables you to establish a dedicated network connection between your network and one of the Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. This enables you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within Amazon VPC using private IP address space, while maintaining network separation between the public and private environments. You can reconfigure virtual interfaces at any time to meet your changing needs.
Amazon EC2
Amazon EC2 is a web service that provides resizable compute capacity in the cloud. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment.
Amazon S3
Amazon S3 provides cost-effective object storage for a wide variety of use cases, including cloud applications, content distribution, backup and archiving, disaster recovery, and big data analytics. Objects stored in Amazon S3 can be protected in transit by using SSL or client-side encryption. Data at rest in Amazon S3 can be protected by using server-side encryption (you request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects) and/or client-side encryption (you encrypt data client-side and then upload the data to Amazon S3). Using client-side encryption, you manage the encryption process, the encryption keys, and related tools.
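As a small illustration of the server-side encryption option, the following boto3 sketch uploads an object with SSE-S3 requested on the PUT. The bucket name, object key, and local file are hypothetical and are not taken from the whitepaper.

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Ask Amazon S3 to encrypt the object at rest before it is written to disk.
with open("report-2016-12-01.csv", "rb") as body:            # hypothetical local file
    s3.put_object(
        Bucket="domain-a-transfer-staging",                   # hypothetical bucket
        Key="outbound/report-2016-12-01.csv",
        Body=body,
        ServerSideEncryption="AES256",   # SSE-S3; use "aws:kms" for a KMS-managed key
    )
```

Client-side encryption works the other way around: you encrypt the payload with keys you manage before calling put_object, so Amazon S3 only ever stores ciphertext.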
AWS Advantages for Secure Workloads
The AWS Cloud provides several advantages if you want to deploy secure workloads using a CDS.
Cost
Pay only for the storage and compute consumed by your workloads. Amazon S3 offers multiple storage classes you can use to control the cost of stored objects based on the frequency of access and availability required at the object level. Eliminate the costs associated with data duplication, data fragmentation, system maintenance, and upgrades. Provision compute resources for specific jobs, and stop paying for the compute resources when the jobs are complete.
Elasticity
Scale as workload volumes increase and decrease, paying only for what you use. Eliminate large capital expenditures by no longer guessing what levels of storage and compute are required for your workloads. Scaling resources is not limited to just meeting demand; workload owners can also leverage the scalability of AWS by scaling up compute resources for time-sensitive jobs.
Purpose-Built Infrastructure
You tailor AWS purpose-built tools to your requirements and scaling and audit objectives, in addition to supporting real-time verification and reporting through the use of tools such as AWS CloudTrail [1], AWS Config [2], and Amazon CloudWatch [3]. These tools are built to help you maximize the protection of your services, data, and applications. This means that, as an AWS customer, you can spend less time on routine security and audit tasks and focus on proactive measures that can continue to enhance the security and audit capabilities of your AWS environment.
Auditability
AWS manages the underlying infrastructure, and you manage the security of anything you deploy in AWS. As a modern platform, AWS enables you to formalize the design of security as well as audit controls through reliable, automated, and verifiable technical and operational processes that are built into every AWS customer account. The cloud simplifies system use for administrators and those running IT, and makes your AWS environment much simpler to audit, as AWS can shift audits toward 100 percent verification versus traditional sample testing.
Security and Governance
AWS Compliance enables you to understand the robust controls in place at AWS to maintain security and data protection in the cloud. As systems are built on top of AWS Cloud infrastructure, compliance responsibilities are shared. By tying together governance-focused, audit-friendly service features with applicable compliance or audit standards, AWS Compliance enablers build on traditional programs. This helps you establish and operate in an AWS security control environment. The IT infrastructure that AWS provides is designed and managed in alignment with security best practices and numerous security accreditations.
Sample Architectures
You can set up your CDS in many ways. The following examples describe some of the more common architectures in use.
Deploying a CDS via the Internet
Figure 2 shows two on-premises customer networks that are connected by a CDS using the traditional deployment shown earlier in Figure 1. In this configuration, Security Domain "A" is extended to provide connectivity to an Amazon VPC in the AWS Cloud, while Security Domain "B" exists solely within the customer's data center.
Figure 2: Deploying a CDS via the Internet
The customer is using the Internet as a WAN to connect to the Amazon VPC. A secure IPSEC tunnel encapsulates data crossing the Internet between on-premises infrastructure and the customer's VPC. Additional security mechanisms, such as a WAF or an intrusion detection system (IDS), can be deployed within Security Domain "A" for added protection from Internet-facing systems. Because Amazon VPC is an extension of Security Domain "A", Amazon EC2 instances launched within Amazon VPC can communicate with resources in Security Domain "B" via the CDS.
Deploying a CDS via AWS Direct Connect
Figure 3 shows a similar deployment to Figure 2, but Direct Connect is used instead of the Internet to provide the WAN connectivity for extending Security Domain "A" to Amazon VPC.
Figure 3: Deploying a CDS via Direct Connect
Direct Connect gives you greater control and visibility of the WAN network path required to connect to Amazon VPC. Using Direct Connect also reduces the threat vector posed by the Internet: all data flowing between your data center and AWS Regions does so across your procured communication links.
Deploying a CDS across Multiple Regions
Figure 4 shows two individual security domains connected to two separate AWS Regions. As shown earlier in Figure 3, the security domains are extended by using a combination of Direct Connect and a secure IPSEC VPN tunnel. All data flowing between the security domains flows from AWS to the customer's data center first, where it is inspected by the CDS before flowing back to AWS.
Figure 4: Deploying a CDS across multiple regions
You should implement a multi-region deployment when the unique capabilities of an individual AWS Region apply to only a single security domain. For example, an entity might choose to provision an Amazon Redshift data warehouse in one of the AWS Regions in the European Union (EU) to comply with data locality requirements, while also maintaining a production data processing cluster in a US-based region to comply with FedRAMP requirements. Even though these two systems are deployed in separate geographic locations to comply with separate compliance programs and regulations, they still might have a requirement to communicate and share an approved subset of data. Deploying a CDS between these two security domains might be an acceptable way to share data while maintaining the integrity of the security domains' boundaries.
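The IPSEC extension used in Figures 2 through 4 can be provisioned with a handful of API calls. The following is a minimal boto3 sketch under assumed values (the on-premises public IP, BGP ASN, and VPC ID are placeholders); it creates the customer gateway, the virtual private gateway, and the VPN connection that carries the tunnel.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The customer gateway represents the on-premises side of the tunnel.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",    # placeholder on-premises public IP
    BgpAsn=65000,               # placeholder BGP ASN
)

# The virtual private gateway terminates the tunnel on the VPC side.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
vgw_id = vgw["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw_id, VpcId="vpc-0123456789abcdef0")

# The VPN connection ties the two gateways together over IPsec (dynamic routing here).
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw_id,
    Options={"StaticRoutesOnly": False},
)
print("Created VPN connection", vpn["VpnConnection"]["VpnConnectionId"])
```

The response also includes the tunnel configuration details needed to set up the customer gateway device on premises.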
Deploying a CDS in a Colocation Environment
Figure 5 depicts an additional potential configuration using space at colocation environments. In Figure 5, the CDS is still deployed in a customer-controlled area that is leased from the colocation facility provider. Figure 5 shows a fully off-premises implementation that includes a CDS.
Figure 5: Deploying a CDS in a colocation environment
Conclusion
Organizations with workloads across multiple security domains can leverage all the benefits that AWS services offer by using Direct Connect, VPN, cross-domain hardware, and colocation facilities. Organizations can select the hardware needed to meet their security domain transfer requirements and extend resources that live in other AWS Regions or on-premises locations. In addition to the ability to connect resources across security domains, AWS offers a wide variety of tools that you and your organization can leverage to meet the security and compliance requirements of workloads hosted within AWS.
Contributors
The following individuals and organizations contributed to this document:
• Andrew Lieberthal, Solutions Architect, AWS Public Sector Sales
Further Reading
For additional help, please consult the following sources:
• Amazon VPC Network Connectivity Options [4]
• AWS Security Best Practices [5]
• Intro to AWS Security [6]
• Overview of AWS [7]
Notes
1. https://aws.amazon.com/cloudtrail/
2. http://aws.amazon.com/config
3. http://aws.amazon.com/cloudwatch
4. http://media.amazonwebservices.com/AWS_Amazon_VPC_Connectivity_Options.pdf
5. http://d0.awsstatic.com/whitepapers/aws-security-best-practices.pdf
6. https://d0.awsstatic.com/whitepapers/Security/Intro_to_AWS_Security.pdf
7. http://d0.awsstatic.com/whitepapers/aws-overview.pdf
ArchivedIntroduction to AWS Security Processes June 2016 THIS PAPER HAS BEEN ARCHIVED For the latest technical content see https://awsamazoncom/architecture/securityidentitycomplianceArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 2 of 45 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’ current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’ products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates su ppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 3 of 45 Table of Contents Introduction 5 Shared Security Responsibility Model 5 AWS Security Responsibilities 6 Customer Security Responsibilities 7 AWS Global Security Infrastructure 7 AWS Compliance Programs 8 Physical and Environmental Security 9 Fire Detection and Suppression 9 Power 9 Climate and Temperature 9 Management 10 Storage Device Dec ommissioning 10 Business Continuity Management 10 Availability 10 Incident Response 10 Company Wide Executive Review 11 Communication 11 AWS Access 11 Account Review and Audit 11 Back ground Checks 12 Credentials Policy 12 Secure Design Principles 12 Change Management 12 Software 12 Infrastructure 13 AWS Account Security Features 13 AWS Credentials 14 Passwords 15 AWS Multi Factor Authentication (AWS MFA) 15 Access Keys 16 Key Pairs 17 X509 Certificates 18 Individual User Accounts 18 ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 4 of 45 Secu re HTTPS Access Points 19 Security Logs 19 AWS Trusted Advisor Security Checks 20 Networking Services 20 Amazon Elastic Load Balancing Security 20 Amazon Virtual Private Cloud (Amazon VPC) Security 22 Amazon Route 53 Security 28 Amazon CloudFront Security 29 AWS Direct Connect Security 32 Appendix – Glos sary of T erms 33 Document Revisions 44 Jun 2016 44 Nov 2014 44 Nov 2013 44 May 2013 45 ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 5 of 45 Introduction Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability providing the tools that enable customers to run a wide range of applications Helping to protect the confidentiality integrity and availability of our customers’ systems and data is of the utmost importance to AWS as is maintaining customer trust and confidence This document is intended to answer questions such as “How does AWS help me protect my data?” Specifically AWS physical and operational security processes are described for the network and server infrastructure under AWS’ management as well as service specific security implementations Shared Security Responsibility Model When using AWS services customers maintain complete control over their content and are responsible for managing critical content security requirements including: • What content they choose to store on AWS • Which AWS services are 
used with the content • In what country that content is stored • The format and structure of that content and whether it is masked anonymised or encrypted • Who has access to that content and how those access rights are granted managed and revoked Because AWS customers retain control over their data they also retain responsibilities relating to that content as part of the AWS “shared responsibility” model This shared responsibility model is fundamental to understanding the respective roles of the customer and AWS in the context of the Cloud Security Principles Under the shared responsibility model AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate In turn customers assume responsibility for and management of their operating system (including updates and security patches) other associated application software as well as the configuration of the AWS provided security group firewall Customers should carefully consid er the services they choose as their responsibilities vary depending on the services they use the integration of those services into their IT environments and applicable laws and regulations It is possible to enhance security and/or meet more stringent compliance requirements by leveraging technology such as host based firewalls host based intrusion detection/ prevention and encryption AWS provides tools and information to assist customers in their efforts to account for and validate that controls ar e operating effectively in their extended IT environment More information can be found on the AWS Compliance center at http://awsamazoncom/compliance ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 6 of 45 Figure 1: AWS Shar ed Security Responsib ility Model The amount of security configuration work you have to do varies depending on which services you select and how sensitive your data is However there are certain security features such as individual user accounts and credentials SSL/TLS for data transmissions and user a ctivity logging that you should configure no matter which AWS service you use For more information about these security features see the “AWS Account Security Features” section below AWS Security Responsi bilities AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS cloud This infrastructure is comprised of the hardware software networking and facilities that run AWS services Protecting this infrastructure is AWS ’ number one priority and while you can’t visit our data centers or offices to see this protection firsthand we provide several reports from third party auditors who have verified our compliance with a variety of computer security standards and regulatio ns (for more information visit ( awsamazoncom/compliance ) Note that in addition to protecting this global infrastructure AWS is responsible for the security configuration of its products that are considered managed services Examples of these types of services include Amazon DynamoDB Amazon RDS Amazon Redshift Amazon Elastic MapReduce Amazon WorkSpaces and several other services These services provide the scalability and flexibility of cloud based resources with the additional benefit of being managed For these services AWS will handle basic security tasks like guest operating system (OS) and database patching firewall configuration ArchivedAmazon Web Services – Overview of Security Processes 
June 2016 Page 7 of 45 and disaster recovery For most of these managed services all you have to do is configure logical access controls for the resources and protect your account credentials A few of them may require additional tasks such as setting up database user accounts but overall the security configuration work is performed by the service Customer Security Responsibilities With the AWS cloud you can provision virtual servers storage databases and desktops in minutes instead of weeks You can also use cloudbased analytic s and workflow tools to process y our data as you need it and then store it in the cloud or in your own data centers Whi ch AWS services you use will determ ine how much configuration wor k you have to perform as part of your security responsib ilities AWS products that fall into the well understood category of Infrastructure as a Serv ice (IaaS) such as Amazon EC2 and Amazon VPC are completely under your control and require you to perform all of the necessary security configuration and management tasks For example for EC2 instances you’re responsible for management of the guest OS (including updates and security patches) any application software or utilities you install on the instances and the configuration of the AWS provided firewall (called a security group) on each instance These are basically the same security tasks that you’re used to performing no matter where your servers are located AWS managed services like Amaz on RDS or Amaz on Redshift provide all of the resources you need in order to perform a specific task but without the configuration work that can c ome with them With managed services you don’t have to worr y about laun ching and maintaining instan ces patching the guest OS or database or replicating databases AWS handles that for you However as with all services you shou ld prote ct your AWS Account credentia ls and set up individu al user accounts with Amazon Identity and Access Management (IAM) so that each of your users has their own credentials and you can implement segregation of duties We also recommend usin g mult ifactor authent ication (MFA) with each account requ iring the use of SSL/TLS to commun icate with your AWS resources and setting up API/user activity logging with AWS CloudTrail For more information about additional measures you can take refer to the AWS Sec urity Resources webpage AWS Global Security Infrastructure AWS operates the global cloud infrastructure that you use to provision a variety of basic computing resources such as processing and storage The AWS global infrastructure includes the facilities network hardware and operational software (eg host OS virtualization software etc) that support the provisioning and use of these resources The AWS global infra structure is designed and managed according to security best practices as well as a variety of security compliance standards As an AWS customer you can be assured that you’re building web architectures on top of some of the most secure computing infrastr ucture in the world ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 8 of 45 AWS Compliance Program s Amazon Web Services Comp liance enables customers to understand the robust contro ls in place at AWS to maintain security and data protect ion in the cloud As systems are built on top of the AWS cloud infrastructure comp liance responsib ilities will be shared By tying together governance focused audit friend ly service features with applicable comp liance or audit standards AWS Comp liance 
enab lers build on traditional programs; help ing customers to establish and operate in an AWS security contro l environment The IT infrastructure that AWS provides to its customers is designed and managed in alignment with security best practices and a variety of IT securit y standards including: • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70) • SOC 2 • SOC 3 • FISMA • FedRAMP • DOD SRG Levels 2 and 4 • PCI DSS Level 1 • EU Model Clauses • ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018 • ITAR • IRAP • FIPS 1402 • MLPS Level 3 • MTCS In addition the flexibility and control that the AWS platform provides allows customers to deploy solutions that meet several industry specific standards including: • Criminal Justice Information Services ( CJIS ) • Cloud Security Alliance ( CSA ) • Family Educational Rights and Privacy Act ( FERPA ) • Health Insurance Portability and Accountability Act ( HIPAA ) • Motion Picture Association of America ( MPAA ) ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 9 of 45 AWS provides a wide range of information regarding its IT control environment to customers through white papers reports certifications accreditations and other thirdparty attestations More information is available in the Risk and Compliance whitepaper available at http://awsamazoncom/compliance/ Physical and Environmental Security AWS’ data centers are state of the art utilizing innovative architectural and engineering approaches AWS has many years of experience in designing constructing and operating large scale data centers This experience has been applied to the AWS platform and infrastructure AWS dat a centers are housed in nondescript facilities Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance intrusion detection systems and other electronic means Authorized staff must pass twofactor authentication a minimum of two times to access data center floors All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges When an employee no longer has a business need for these privileges his or her access is immediately revoked even if they continue to be an employee of Amazon or Amazon Web Services All physical access to data centers by AWS employees is logged and audited routinely Fire Detection and Suppression Automatic fire detection and suppression equipment has been installed to reduce risk The fire detection system utilizes smoke detection sensors in all data center environments mechanical and electrical infrastructure spaces chiller rooms and generator equipment rooms These areas are protected by either wet pipe double interlocked pre action or gaseous sprinkler systems Power The data center electrical power systems are designed to be fully redundant and maintainable without impact to operations 24 hours a day and seven days a week Uninterruptible Power Supply (UPS) units provide back up power in the event of an electrical failure for critical and essential loads in the facility Data centers use generators to provide back up power for the entire facility Climate and Temperature Climate control is required to maintain a constant operating temperature for servers and other hardware which prevents overheating and reduces the possibility of service outages Data centers are 
conditioned to maintain atmospheric conditions at optimal levels Personnel and systems monitor and control temperature and humidity at appropriate levels ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 10 of 45 Management AWS monitors electrical mechanical and life support systems and equipment so that any issues are immediately identified Preventative maintenance is performed to maintain the continued operability of equipment Storage Device Decommissioning When a storage device has reached the end of its useful life AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals AWS uses techniques detailed NIST 800 88 (“Guidelines for Media Sanitization as part of the decommissioning process“) Business Continuity Management AWS’ infrastructure has a high level of availability and provides customers the features to deploy a resilient IT architecture AWS has designed its systems to tolerate system or hardware failures with minimal customer impact Data center Business Continuity Management at AWS is under the direction of the Amazon Infrastructure Group Availability Data centers are built in clusters in various global regions All data centers are online and serving customers; no data center is “cold” In case of failure automated processes move customer data traffic away from the affected area Core applications are deployed in an N+1 configuration so that in the event of a data center failure there is sufficient capacity to enable traffic to be load balanced to the rem aining sites AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple availability zones within each region Each availability zone is designed as an independent failure zone T his means that availability zones are physically separated within a typical metropolitan region and are located in lower risk flood plains (specific flood zone categorization varies by Region) In addition to discrete uninterruptable power supply (UPS) and onsite backup generation facilities they are each fed via different grids from independent utilities to further reduce single points of failure Availability zones are all redundantly connected to multiple tier 1 transit providers You should architect your AWS usage to take advantage of multiple regions and availability zones Distributing applications across multiple availability zones provides the ability to remain resilient in the face of most failure modes including natural disasters or system failures Incident Response The Amazon Incident Management team employs industry standard diagnostic procedures to drive resolution during business impacting events Staff operators ArchivedAmazon Web Services – Overview of Security Processes June 2016 Page 11 of 45 provide 24x7x365 coverage to detect incidents and to manage the impact and resolution Company Wide Executive Review Amazon’s Internal Audit group regularly reviews AWS resiliency plans which are also periodically reviewed by members of the Senior Executive management team and the Audit Committee of the Board of Directors Commu nication AWS has implemented various methods of internal communication at a global level to help employees understand their individual roles and responsibilities and to communicate significant events in a timely manner These methods include orientation and training programs for newly hired employees; regular management meetings for updates on business 
performance and other matters; and electronic means such as video conferencing, electronic mail messages, and the posting of information via the Amazon intranet. AWS has also implemented various methods of external communication to support its customer base and the community. Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the customer experience. A "Service Health Dashboard" is available and maintained by the customer support team to alert customers to any issues that may be of broad impact. The "AWS Security Center" is available to provide you with security and compliance details about AWS. You can also subscribe to AWS Support offerings that include direct communication with the customer support team and proactive alerts to any customer-impacting issues.
AWS Access
The AWS Production network is segregated from the Amazon Corporate network and requires a separate set of credentials for logical access. The Amazon Corporate network relies on user IDs, passwords, and Kerberos, while the AWS Production network requires SSH public-key authentication through a bastion host. AWS developers and administrators on the Amazon Corporate network who need to access AWS cloud components must explicitly request access through the AWS access management system. All requests are reviewed and approved by the appropriate owner or manager.
Account Review and Audit
Accounts are reviewed every 90 days; explicit re-approval is required, or access to the resource is automatically revoked. Access is also automatically revoked when an employee's record is terminated in Amazon's Human Resources system: Windows and UNIX accounts are disabled, and Amazon's permission management system removes the user from all systems.
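Customers can apply a comparable 90-day review to their own IAM principals. The following is a minimal boto3 sketch, not part of the original text, that flags access keys with no recorded use in the last 90 days so they can be explicitly re-approved or deactivated.

```python
import datetime

import boto3

iam = boto3.client("iam")
cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=90)

# Walk every IAM user and report access keys that appear unused for 90 days or more.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        metadata = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in metadata:
            last_used = iam.get_access_key_last_used(
                AccessKeyId=key["AccessKeyId"]
            )["AccessKeyLastUsed"].get("LastUsedDate")
            if last_used is None or last_used < cutoff:
                print(
                    f"Review: user {user['UserName']} key {key['AccessKeyId']} "
                    f"last used {last_used}"
                )
```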
the customer and their use of the services. AWS will communicate with customers, either via email or through the AWS Service Health Dashboard, when service use is likely to be adversely affected.

Software
AWS applies a systematic approach to managing change so that changes to customer-impacting services are thoroughly reviewed, tested, approved, and well communicated. The AWS change management process is designed to avoid unintended service disruptions and to maintain the integrity of service to the customer. Changes deployed into production environments are:
• Reviewed: Peer reviews of the technical aspects of a change are required.
• Tested: Changes being applied are tested to help ensure they will behave as expected and not adversely impact performance.
• Approved: All changes must be authorized in order to provide appropriate oversight and understanding of business impact.

Changes are typically pushed into production in a phased deployment, starting with lowest-impact areas. Deployments are tested on a single system and closely monitored so impacts can be evaluated. Service owners have a number of configurable metrics that measure the health of the service's upstream dependencies. These metrics are closely monitored, with thresholds and alarming in place. Rollback procedures are documented in the Change Management (CM) ticket. When possible, changes are scheduled during regular change windows. Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident and are logged and approved as appropriate.

Periodically, AWS performs self-audits of changes to key services to monitor quality, maintain high standards, and facilitate continuous improvement of the change management process. Any exceptions are analyzed to determine the root cause, and appropriate actions are taken to bring the change into compliance or roll back the change if necessary. Actions are then taken to address and remediate the process or people issue.

Infrastructure
Amazon's Corporate Applications team develops and manages software to automate IT processes for UNIX/Linux hosts in the areas of third-party software delivery, internally developed software, and configuration management. The Infrastructure team maintains and operates a UNIX/Linux configuration management framework to address hardware scalability, availability, auditing, and security management. By centrally managing hosts through the use of automated processes that manage change, AWS is able to achieve its goals of high availability, repeatability, scalability, security, and disaster recovery. Systems and network engineers monitor the status of these automated tools on a continuous basis, reviewing reports to respond to hosts that fail to obtain or update their configuration and software.

Internally developed configuration management software is installed when new hardware is provisioned. These tools are run on all UNIX hosts to validate that they are configured and that software is installed in compliance with standards determined by the role assigned to the host. This configuration management software also helps to regularly update packages that are already installed on the host. Only approved personnel, enabled through the permissions service, may log in to the central configuration management servers.

AWS Account Security Features
AWS provides a variety of tools and features that you can use to keep your AWS Account and resources safe from
unauthorized use. This includes credentials for access control, HTTPS endpoints for encrypted data transmission, the creation of separate IAM user accounts, user activity logging for security monitoring, and Trusted Advisor security checks. You can take advantage of all of these security tools no matter which AWS services you select.

AWS Credentials
To help ensure that only authorized users and processes access your AWS Account and resources, AWS uses several types of credentials for authentication. These include passwords, cryptographic keys, digital signatures, and certificates. We also provide the option of requiring multi-factor authentication (MFA) to log into your AWS Account or IAM user accounts. The following table highlights the various AWS credentials and their uses:

• Passwords. Use: AWS root account or IAM user account login to the AWS Management Console. Description: A string of characters used to log into your AWS account or IAM account. AWS passwords must be a minimum of 6 characters and may be up to 128 characters.
• Multi-Factor Authentication (MFA). Use: AWS root account or IAM user account login to the AWS Management Console. Description: A six-digit, single-use code that is required in addition to your password to log in to your AWS Account or IAM user account.
• Access Keys. Use: Digitally signed requests to AWS APIs (using the AWS SDK, CLI, or REST/Query APIs). Description: Includes an access key ID and a secret access key. You use access keys to digitally sign programmatic requests that you make to AWS.
• Key Pairs. Use: SSH login to EC2 instances, CloudFront signed URLs, Windows instances. Description: To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance. Linux instances have no password, and you use a key pair to log in using SSH. With Windows instances, you use a key pair to obtain the administrator password and then log in using RDP.
• X.509 Certificates. Use: Digitally signed SOAP requests to AWS APIs, and SSL server certificates for HTTPS. Description: X.509 certificates are only used to sign SOAP-based requests (currently used only with Amazon S3). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.

Credential Report
You can download a Credential Report for your account at any time from the Security Credentials page. This report lists all of your account's users and the status of their credentials: whether they use a password, whether their password expires and must be changed regularly, the last time they changed their password, the last time they rotated their access keys, and whether they have MFA enabled.

For security reasons, if your credentials have been lost or forgotten, you cannot recover them or re-download them. However, you can create new credentials and then disable or delete the old set of credentials. In fact, AWS recommends that you change (rotate) your access keys and certificates on a regular basis. To help you do this without potential impact to your application's availability, AWS supports multiple concurrent access keys and certificates. With this feature, you can rotate keys and certificates into and out of operation on a regular basis without any downtime to your application. This can help to mitigate risk from lost or compromised access keys or certificates. The AWS IAM API enables you to rotate the access keys of your AWS Account as well as for IAM user accounts.
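The rotation workflow described above can be scripted against the IAM API. The following is a minimal sketch using the AWS SDK for Python (boto3); the user name and old access key ID are hypothetical placeholders.

```python
import boto3

iam = boto3.client("iam")

# 1. Create a new access key (IAM allows two concurrent keys per user).
new_key = iam.create_access_key(UserName="example-user")["AccessKey"]
print("New access key ID:", new_key["AccessKeyId"])

# 2. After your application has switched to the new key, deactivate the old one.
iam.update_access_key(UserName="example-user",
                      AccessKeyId="AKIAOLDKEYEXAMPLE",
                      Status="Inactive")

# 3. Once you have confirmed nothing still uses the old key, delete it.
iam.delete_access_key(UserName="example-user", AccessKeyId="AKIAOLDKEYEXAMPLE")

# The credential report described above can also be generated programmatically
# (in practice you may need to poll until the report state is COMPLETE).
iam.generate_credential_report()
print(iam.get_credential_report()["Content"].decode("utf-8"))
```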
Passwords
Passwords are required to access your AWS Account, individual IAM user accounts, AWS Discussion Forums, and the AWS Support Center. You specify the password when you first create the account, and you can change it at any time by going to the Security Credentials page. AWS passwords can be up to 128 characters long and contain special characters, so we encourage you to create a strong password that cannot be easily guessed.

You can set a password policy for your IAM user accounts to ensure that strong passwords are used and that they are changed often. A password policy is a set of rules that define the type of password an IAM user can set. For more information about password policies, go to Managing Passwords in Using IAM.

AWS Multi-Factor Authentication (AWS MFA)
AWS Multi-Factor Authentication (AWS MFA) is an additional layer of security for accessing AWS services. When you enable this optional feature, you will need to provide a six-digit, single-use code in addition to your standard user name and password credentials before access is granted to your AWS Account settings or AWS services and resources. You get this single-use code from an authentication device that you keep in your physical possession. This is called multi-factor authentication because more than one authentication factor is checked before access is granted: a password (something you know) and the precise code from your authentication device (something you have).

You can enable MFA devices for your AWS Account as well as for the users you have created under your AWS Account with AWS IAM. In addition, you can add MFA protection for access across AWS Accounts, for when you want to allow a user you've created under one AWS Account to use an IAM role to access resources under another AWS Account. You can require the user to use MFA before assuming the role as an additional layer of security.

AWS MFA supports the use of both hardware tokens and virtual MFA devices. Virtual MFA devices use the same protocols as the physical MFA devices, but can run on any mobile hardware device, including a smartphone. A virtual MFA device uses a software application that generates six-digit authentication codes that are compatible with the Time-Based One-Time Password (TOTP) standard, as described in RFC 6238. Most virtual MFA applications allow you to host more than one virtual MFA device, which makes them more convenient than hardware MFA devices. However, you should be aware that because a virtual MFA might be run on a less secure device such as a smartphone, a virtual MFA might not provide the same level of security as a hardware MFA device.

You can also enforce MFA authentication for AWS service APIs in order to provide an extra layer of protection over powerful or privileged actions such as terminating Amazon EC2 instances or reading sensitive data stored in Amazon S3. You do this by adding an MFA authentication requirement to an IAM access policy. You can attach these access policies to IAM users, IAM groups, or resources that support Access Control Lists (ACLs) like Amazon S3 buckets, SQS queues, and SNS topics. It is easy to obtain hardware tokens from a participating third-party provider, or virtual MFA applications from an AppStore, and to set them up for use via the AWS website. More information about AWS MFA is available on the AWS website.
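As a concrete illustration of adding an MFA requirement to an IAM access policy, the following is a minimal sketch using boto3. The user name, policy name, and choice of actions are hypothetical; the policy denies the listed privileged actions whenever the request was not authenticated with MFA.

```python
import boto3
import json

iam = boto3.client("iam")

mfa_required_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:TerminateInstances", "s3:GetObject"],
        "Resource": "*",
        # Deny these privileged actions unless the caller presented an MFA code.
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

iam.put_user_policy(UserName="example-user",
                    PolicyName="RequireMFAForPrivilegedActions",
                    PolicyDocument=json.dumps(mfa_required_policy))
```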
Access Keys
AWS requires that all API requests be signed; that is, they must include a digital signature that AWS can use to verify the identity of the requestor. You calculate the digital signature using a cryptographic hash function. The input to the hash function in this case includes the text of your request and your secret access key. If you use any of the AWS SDKs to generate requests, the digital signature calculation is done for you; otherwise, you can have your application calculate it and include it in your REST or Query requests by following the directions in our documentation.

Not only does the signing process help protect message integrity by preventing tampering with the request while it is in transit, it also helps protect against potential replay attacks. A request must reach AWS within 15 minutes of the time stamp in the request; otherwise, AWS denies the request.

The most recent version of the digital signature calculation process is Signature Version 4, which calculates the signature using the HMAC-SHA256 protocol. Version 4 provides an additional measure of protection over previous versions by requiring that you sign the message using a key that is derived from your secret access key, rather than using the secret access key itself. In addition, you derive the signing key based on credential scope, which facilitates cryptographic isolation of the signing key.

Because access keys can be misused if they fall into the wrong hands, we encourage you to save them in a safe place and not embed them in your code. For customers with large fleets of elastically scaling EC2 instances, the use of IAM roles can be a more secure and convenient way to manage the distribution of access keys. IAM roles provide temporary credentials, which not only get automatically loaded to the target instance, but are also automatically rotated multiple times a day.
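The Signature Version 4 key derivation described above can be expressed in a few lines of standard-library Python. This is a minimal sketch for illustration; the secret key, region, and service are placeholders, and in practice the AWS SDKs perform this calculation for you.

```python
import datetime
import hashlib
import hmac

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_sigv4_signing_key(secret_access_key: str, date: str,
                             region: str, service: str) -> bytes:
    # The signing key is derived from the secret access key plus the credential
    # scope (date/region/service), so the long-term secret never signs requests
    # directly.
    k_date = _hmac_sha256(("AWS4" + secret_access_key).encode("utf-8"), date)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")

signing_key = derive_sigv4_signing_key(
    "wJalrXUtnFEMI/EXAMPLESECRETKEY",
    datetime.datetime.utcnow().strftime("%Y%m%d"),
    "us-east-1",
    "s3",
)
# The "string to sign", built from the canonical request, is then signed with
# HMAC-SHA256 using the derived key to produce the request signature.
signature = hmac.new(signing_key, b"<string-to-sign>", hashlib.sha256).hexdigest()
```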
Key Pairs
Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password, then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair. To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance. Linux instances have no password, and you use a key pair to log in using SSH. With Windows instances, you use a key pair to obtain the administrator password and then log in using RDP.

Creating a Key Pair
You can use Amazon EC2 to create your key pair. For more information, see Creating Your Key Pair Using Amazon EC2. Alternatively, you could use a third-party tool and then import the public key to Amazon EC2. For more information, see Importing Your Own Key Pair to Amazon EC2. Each key pair requires a name. Be sure to choose a name that is easy to remember. Amazon EC2 associates the public key with the name that you specify as the key name. Amazon EC2 stores the public key only, and you store the private key. Anyone who possesses your private key can decrypt your login information, so it's important that you store your private keys in a secure place. The keys that Amazon EC2 uses are 2048-bit SSH-2 RSA keys. You can have up to five thousand key pairs per region.

X.509 Certificates
X.509 certificates are used to sign SOAP-based requests. X.509 certificates contain a public key and additional metadata (like an expiration date that AWS verifies when you upload the certificate) and are associated with a private key. When you create a request, you create a digital signature with your private key and then include that signature in the request, along with your certificate. AWS verifies that you're the sender by decrypting the signature with the public key that is in your certificate. AWS also verifies that the certificate you sent matches the certificate that you uploaded to AWS.

For your AWS Account, you can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page. For IAM users, you must create the X.509 certificate (signing certificate) by using third-party software. In contrast with root account credentials, AWS cannot create an X.509 certificate for IAM users. After you create the certificate, you attach it to an IAM user by using IAM.

In addition to SOAP requests, X.509 certificates are used as SSL/TLS server certificates for customers who want to use HTTPS to encrypt their transmissions. To use them for HTTPS, you can use an open-source tool like OpenSSL to create a unique private key. You'll need the private key to create the Certificate Signing Request (CSR) that you submit to a certificate authority (CA) to obtain the server certificate. You'll then use the AWS CLI to upload the certificate, private key, and certificate chain to IAM.

You'll also need an X.509 certificate to create a customized Linux AMI for EC2 instances. The certificate is only required to create an instance-backed AMI (as opposed to an EBS-backed AMI). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.

Individual User Accounts
AWS provides a centralized mechanism called AWS Identity and Access Management (IAM) for creating and managing individual users within your AWS Account. A user can be any individual, system, or application that interacts with AWS resources, either programmatically or through the AWS Management Console or AWS Command Line Interface (CLI). Each user has a unique name within the AWS Account and a unique set of security credentials not shared with other users. AWS IAM eliminates the need to share passwords or keys, and enables you to minimize the use of your AWS Account credentials. With IAM, you define policies that control which AWS services your users can access and what they can do with them. You can grant users only the minimum permissions they need to perform their jobs. See the AWS Identity and Access Management (AWS IAM) section below for more information.
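The following is a minimal sketch of creating an individual IAM user and granting only the permissions needed for one job, in this case read-only access to a single S3 bucket. The user name, policy name, bucket ARN, and temporary password are hypothetical.

```python
import boto3
import json

iam = boto3.client("iam")

iam.create_user(UserName="report-reader")

least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-reports",
                     "arn:aws:s3:::example-reports/*"],
    }],
}

# Attach the scoped policy directly to the user (an IAM group could also be used).
iam.put_user_policy(UserName="report-reader",
                    PolicyName="ReadExampleReportsBucket",
                    PolicyDocument=json.dumps(least_privilege_policy))

# Console access is optional; create a login profile only if this user needs it.
iam.create_login_profile(UserName="report-reader",
                         Password="Temp-Password-Example-1",
                         PasswordResetRequired=True)
```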
Secure HTTPS Access Points
For greater communication security when accessing AWS resources, you should use HTTPS instead of HTTP for data transmissions. HTTPS uses the SSL/TLS protocol, which uses public-key cryptography to prevent eavesdropping, tampering, and forgery. All AWS services provide secure customer access points (also called API endpoints) that allow you to establish secure HTTPS communication sessions.

Several services also now offer more advanced cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This helps prevent the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised.

Security Logs
As important as credentials and encrypted endpoints are for preventing security problems, logs are just as crucial for understanding events after a problem has occurred. And to be effective as a security tool, a log must include not just a list of what happened and when, but also identify the source. To help you with your after-the-fact investigations and near-real-time intrusion detection, AWS CloudTrail provides a log of requests for AWS resources within your account for supported services. For each event, you can see what service was accessed, what action was performed, and who made the request. CloudTrail captures information about every API call to every supported AWS resource, including sign-in events.

Once you have enabled CloudTrail, event logs are delivered every 5 minutes. You can configure CloudTrail so that it aggregates log files from multiple regions into a single Amazon S3 bucket. From there, you can then upload them to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns. By default, log files are stored securely in Amazon S3, but you can also archive them to Amazon Glacier to help meet audit and compliance requirements.

In addition to CloudTrail's user activity logs, you can use the Amazon CloudWatch Logs feature to collect and monitor system, application, and custom log files from your EC2 instances and other sources in near-real time. For example, you can monitor your web server's log files for invalid user messages to detect unauthorized login attempts to your guest OS.
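Aggregating CloudTrail logs from multiple regions into one bucket, as described above, can be set up with a single multi-region trail. The sketch below uses boto3; the trail and bucket names are hypothetical, and the bucket is assumed to already have a bucket policy that allows CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(Name="account-activity-trail",
                        S3BucketName="example-cloudtrail-logs",
                        IsMultiRegionTrail=True,
                        IncludeGlobalServiceEvents=True)

# A newly created trail does not record events until logging is started.
cloudtrail.start_logging(Name="account-activity-trail")
```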
AWS Trusted Advisor Security Checks
The AWS Trusted Advisor customer support service not only monitors for cloud performance and resiliency, but also cloud security. Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance, or close security gaps. It provides alerts on several of the most common security misconfigurations that can occur, including leaving certain ports open that make you vulnerable to hacking and unauthorized access, neglecting to create IAM accounts for your internal users, allowing public access to Amazon S3 buckets, not turning on user activity logging (AWS CloudTrail), or not using MFA on your root AWS Account. You also have the option for a Security contact at your organization to automatically receive a weekly email with an updated status of your Trusted Advisor security checks.

The AWS Trusted Advisor service provides four checks at no additional charge to all users, including three important security checks: specific ports unrestricted, IAM use, and MFA on root account. And when you sign up for Business- or Enterprise-level AWS Support, you receive full access to all Trusted Advisor checks.

Networking Services
Amazon Web Services provides a range of networking services that enable you to create a logically isolated network that you define, establish a private network connection to the AWS cloud, use a highly available and scalable DNS service, and deliver content to your end users with low latency at high data transfer speeds with a content delivery web service.

Amazon Elastic Load Balancing Security
Amazon Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all availability zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits:
• Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer
• Offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network
• When used in an Amazon VPC, supports creation and management of security groups associated with your Elastic Load Balancing to provide additional networking and security options
• Supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections. When TLS is used, the TLS server certificate used to terminate client connections can be managed centrally on the load balancer, rather than on every individual instance.

HTTPS/TLS uses a long-term secret key to generate a short-term session key to be used between the server and the browser to create the ciphered (encrypted) message. Amazon Elastic Load Balancing configures your load balancer with a predefined cipher set that is used for TLS negotiation when a connection is established between a client and your load balancer. The predefined cipher set provides compatibility with a broad range of clients and uses strong cryptographic algorithms. However, some customers may have requirements for allowing only specific ciphers and protocols (such as PCI, SOX, etc.) from clients to ensure that standards are met. In these cases, Amazon Elastic Load Balancing provides options for selecting different configurations for TLS protocols and ciphers. You can choose to enable or disable the ciphers depending on your specific requirements.

To help ensure the use of newer and stronger cipher suites when establishing a secure connection, you can configure the load balancer to have the final say in the cipher suite selection during the client-server negotiation. When the Server Order Preference option is selected, the load balancer will select a cipher suite based on the server's prioritization of cipher suites rather than the client's. This gives you more control over the level of security that clients use to connect to your load balancer.

For even greater communication privacy, Amazon Elastic Load Balancer allows the use of Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This prevents the decoding of captured data even if the secret long-term key itself is compromised.

Amazon Elastic Load Balancing allows you to identify the originating IP address of a client connecting to your servers, whether you're using HTTPS or TCP load balancing. Typically, client connection information, such as IP address and port, is lost when requests are proxied through a load balancer. This is because the load balancer sends requests to the server on behalf of the client, making your load balancer appear as though it is the requesting client. Having the originating client IP address is useful if you need more information about visitors to your applications in order to gather connection statistics, analyze traffic logs, or manage whitelists of IP addresses.
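When you need to restrict the TLS protocols and ciphers that clients can negotiate, you can select a predefined security policy for the HTTPS listener. The sketch below illustrates this with the Application Load Balancer API (elbv2) in boto3; the policy name is a predefined policy, but the load balancer, certificate, and target group ARNs are placeholders, and Classic Load Balancers expose an equivalent setting through their own listener policy APIs.

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example/1234567890abcdef",
    Protocol="HTTPS",
    Port=443,
    # A predefined security policy that only permits TLS 1.2 cipher suites.
    SslPolicy="ELBSecurityPolicy-TLS-1-2-2017-01",
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example/0123456789abcdef",
    }],
)
```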
Amazon Elastic Load Balancing access logs contain information about each HTTP and TCP request processed by your load balancer. This includes the IP address and port of the requesting client, the back-end IP address of the instance that processed the request, the size of the request and response, and the actual request line from the client (for example, GET http://www.example.com:80/HTTP/1.1). All requests sent to the load balancer are logged, including requests that never made it to back-end instances.

Amazon Virtual Private Cloud (Amazon VPC) Security
Normally, each Amazon EC2 instance you launch is randomly assigned a public IP address in the Amazon EC2 address space. Amazon VPC enables you to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses in the range of your choice (e.g., 10.0.0.0/16). You can define subnets within your VPC, grouping similar kinds of instances based on IP address range, and then set up routing and security to control the flow of traffic in and out of the instances and subnets.

AWS offers a variety of VPC architecture templates with configurations that provide varying levels of public access:
• VPC with a single public subnet only. Your instances run in a private, isolated section of the AWS cloud with direct access to the Internet. Network ACLs and security groups can be used to provide strict control over inbound and outbound network traffic to your instances.
• VPC with public and private subnets. In addition to containing a public subnet, this configuration adds a private subnet whose instances are not addressable from the Internet. Instances in the private subnet can establish outbound connections to the Internet via the public subnet using Network Address Translation (NAT).
• VPC with public and private subnets and hardware VPN access. This configuration adds an IPsec VPN connection between your Amazon VPC and your data center, effectively extending your data center to the cloud while also providing direct access to the Internet for public subnet instances in your Amazon VPC. In this configuration, customers add a VPN appliance on their corporate data center side.
• VPC with private subnet only and hardware VPN access. Your instances run in a private, isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet. You can connect this private subnet to your corporate data center via an IPsec VPN tunnel.

You can also connect two VPCs using a private IP address, which allows instances in the two VPCs to communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account, within a single region.

Security features within Amazon VPC include security groups, network ACLs, routing tables, and external gateways. Each of these items is complementary to providing a secure, isolated network that can be extended through selective enabling of direct Internet access or private connectivity to another network. Amazon EC2 instances running within an Amazon VPC inherit all of the benefits described below related to the guest OS and protection against packet sniffing. Note, however, that you must create VPC security groups specifically for your Amazon VPC; any Amazon EC2 security groups you have created will not work inside your Amazon VPC. Also, Amazon VPC security groups have additional capabilities that Amazon EC2 security groups do not have, such as being able to change the security group after the instance is launched and being able to specify any protocol with a standard protocol number (as opposed to just TCP, UDP, or ICMP).
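The "VPC with public and private subnets" template described above can be assembled from a handful of EC2/VPC API calls. The following is a minimal boto3 sketch; the CIDR ranges are examples, and a NAT device for the private subnet's outbound traffic is omitted for brevity.

```python
import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
public_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]
private_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Attach an Internet gateway and route the public subnet's traffic through it.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

public_route_table = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=public_route_table,
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=public_route_table, SubnetId=public_subnet)

# The private subnet keeps the VPC's main route table, so its instances are not
# addressable from the Internet; outbound access would go through a NAT device.
print("VPC:", vpc_id, "public:", public_subnet, "private:", private_subnet)
```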
Each Amazon VPC is a distinct, isolated network within the cloud; network traffic within each Amazon VPC is isolated from all other Amazon VPCs. At creation time, you select an IP address range for each Amazon VPC. You may create and attach an Internet gateway, virtual private gateway, or both to establish external connectivity, subject to the controls below.

API Access: Calls to create and delete Amazon VPCs, change routing, security group, and network ACL parameters, and perform other functions are all signed by your Amazon Secret Access Key, which could be either the AWS Account's Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to your Secret Access Key, Amazon VPC API calls cannot be made on your behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. Amazon recommends always using SSL-protected API endpoints. AWS IAM also enables a customer to further control what APIs a newly created user has permissions to call.

Subnets and Route Tables: You create one or more subnets within each Amazon VPC; each instance launched in the Amazon VPC is connected to one subnet. Traditional Layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked. Each subnet in an Amazon VPC is associated with a routing table, and all network traffic leaving the subnet is processed by the routing table to determine the destination.

Firewall (Security Groups): Like Amazon EC2, Amazon VPC supports a complete firewall solution, enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination. Traffic can be restricted by any IP protocol, by service port, as well as by source/destination IP address (individual IP or Classless Inter-Domain Routing (CIDR) block). The firewall isn't controlled through the guest OS; rather, it can be modified only through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, therefore enabling you to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports you open, and for what duration and purpose. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls such as IPtables or the Windows Firewall.

Figure 5: Amazon VPC Network Architecture

Network Access Control Lists: To add a further layer of security within Amazon VPC, you can configure network ACLs. These are stateless traffic filters that apply to all traffic inbound or outbound from a subnet within Amazon VPC. These ACLs can contain ordered rules to allow or deny traffic based upon IP protocol, by service port, as well as source/destination IP address. Like security groups, network ACLs are managed through Amazon VPC APIs, adding an additional layer of protection and enabling additional security through separation of duties. The diagram below depicts how the security controls above interrelate to enable flexible network topologies while providing complete control over network traffic flows.

Figure 6: Flexible Network Topologies
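Both of the controls described above are managed through the EC2/VPC APIs rather than from inside the guest OS. The following minimal sketch adds a stateful security group rule and a stateless network ACL entry with boto3; the security group ID, network ACL ID, and CIDR blocks are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Security group (stateful): allow inbound HTTPS from a specific CIDR block only.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
    }],
)

# Network ACL (stateless): an ordered rule that denies inbound SSH to the subnet.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100,
    Protocol="6",          # TCP
    RuleAction="deny",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 22, "To": 22},
)
```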
Virtual Private Gateway: A virtual private gateway enables private connectivity between the Amazon VPC and another network. Network traffic within each virtual private gateway is isolated from network traffic within all other virtual private gateways. You can establish VPN connections to the virtual private gateway from gateway devices at your premises. Each connection is secured by a pre-shared key in conjunction with the IP address of the customer gateway device.

Internet Gateway: An Internet gateway may be attached to an Amazon VPC to enable direct connectivity to Amazon S3, other AWS services, and the Internet. Each instance desiring this access must either have an Elastic IP associated with it or route traffic through a NAT instance. Additionally, network routes are configured (see above) to direct traffic to the Internet gateway. AWS provides reference NAT AMIs that you can extend to perform network logging, deep packet inspection, application-layer filtering, or other security controls. This access can only be modified through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the Internet gateway, therefore enabling you to implement additional security through separation of duties.

Dedicated Instances: Within a VPC, you can launch Amazon EC2 instances that are physically isolated at the host hardware level (i.e., they will run on single-tenant hardware). An Amazon VPC can be created with 'dedicated' tenancy, so that all instances launched into the Amazon VPC will utilize this feature. Alternatively, an Amazon VPC may be created with 'default' tenancy, but you can specify dedicated tenancy for particular instances launched into it.

Elastic Network Interfaces: Each Amazon EC2 instance has a default network interface that is assigned a private IP address on your Amazon VPC network. You can create and attach an additional network interface, known as an elastic network interface (ENI), to any Amazon EC2 instance in your Amazon VPC, for a total of two network interfaces per instance. Attaching more than one network interface to an instance is useful when you want to create a management network, use network and security appliances in your Amazon VPC, or create dual-homed instances with workloads/roles on distinct subnets. An ENI's attributes, including the private IP address, elastic IP addresses, and MAC address, will follow the ENI as it is attached or detached from an instance and reattached to another instance. More information about Amazon VPC is available on the AWS website: http://aws.amazon.com/vpc/

Additional Network Access Control with EC2-VPC
If you launch instances in a region where you did not have instances before AWS launched the new EC2-VPC feature (also called Default VPC), all instances are automatically provisioned in a ready-to-use default VPC. You can choose to create additional VPCs, or you can create VPCs for instances in regions where you already had instances before we launched EC2-VPC. If you create a VPC later, using regular VPC, you specify a CIDR block, create subnets, enter the routing and security for those subnets, and provision an Internet gateway or NAT instance if you want one of your subnets to be able to reach the Internet. When you launch EC2 instances into an EC2-VPC, most of this work is automatically performed for you. When you launch an instance into a default VPC using EC2-VPC, we do the following to set it up for you:
• Create a default subnet in each Availability Zone
• Create an Internet gateway and connect it to your default VPC
• Create a main route table for your default VPC with a rule that sends all traffic destined for the Internet to the Internet gateway
• Create a default security group and associate it with your default VPC
• Create a default network access control list (ACL) and associate it with your default VPC
• Associate the default DHCP options set for your AWS account with your default VPC

In addition to the default VPC having its own private IP range, EC2 instances launched in a default VPC can also receive a public IP.

The following table summarizes the differences between instances launched into EC2-Classic, instances launched into a default VPC, and instances launched into a non-default VPC:

• Public IP address. EC2-Classic: your instance receives a public IP address. EC2-VPC (default VPC): your instance launched in a default subnet receives a public IP address by default, unless you specify otherwise during launch. Regular VPC: your instance doesn't receive a public IP address by default, unless you specify otherwise during launch.
• Private IP address. EC2-Classic: your instance receives a private IP address from the EC2-Classic range each time it's started. EC2-VPC (default VPC): your instance receives a static private IP address from the address range of your default VPC. Regular VPC: your instance receives a static private IP address from the address range of your VPC.
• Multiple private IP addresses. EC2-Classic: we select a single IP address for your instance; multiple IP addresses are not supported. EC2-VPC (default VPC) and regular VPC: you can assign multiple private IP addresses to your instance.
• Elastic IP address. EC2-Classic: an EIP is disassociated from your instance when you stop it. EC2-VPC (default VPC) and regular VPC: an EIP remains associated with your instance when you stop it.
• DNS hostnames. EC2-Classic and EC2-VPC (default VPC): DNS hostnames are enabled by default. Regular VPC: DNS hostnames are disabled by default.
• Security group. EC2-Classic: a security group can reference security groups that belong to other AWS accounts. EC2-VPC (default VPC) and regular VPC: a security group can reference security groups for your VPC only.
• Security group association. EC2-Classic: you must terminate your instance to change its security group. EC2-VPC (default VPC) and regular VPC: you can change the security group of your running instance.
• Security group rules. EC2-Classic: you can add rules for inbound traffic only. EC2-VPC (default VPC) and regular VPC: you can add rules for inbound and outbound traffic.
• Tenancy. EC2-Classic: your instance runs on shared hardware; you cannot run an instance on single-tenant hardware. EC2-VPC (default VPC) and regular VPC: you can run your instance on shared hardware or single-tenant hardware.

Note that security groups for instances in EC2-Classic are slightly different than security groups for instances in EC2-VPC. For example, you can add rules for inbound traffic only for EC2-Classic, but you can add rules for both inbound and outbound traffic to EC2-VPC. In EC2-Classic, you can't change the security groups assigned to an instance after it's launched, but in EC2-VPC you can change security groups assigned to an instance after it's launched.
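The last difference noted above, changing the security groups of a running instance, maps to a single API call for instances in a VPC. A minimal boto3 sketch follows; the instance and security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Replace the set of security groups on a running VPC instance
# (not possible for EC2-Classic instances).
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    Groups=["sg-0aaaaaaaaaaaaaaaa", "sg-0bbbbbbbbbbbbbbbb"],
)
```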
In addition, you can't use the security groups that you've created for use with EC2-Classic with instances in your VPC. You must create security groups specifically for use with instances in your VPC. The rules you create for use with a security group for a VPC can't reference a security group for EC2-Classic, and vice versa.

Amazon Route 53 Security
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service that answers DNS queries, translating domain names into IP addresses so computers can communicate with each other. Route 53 can be used to connect user requests to infrastructure running in AWS, such as an Amazon EC2 instance or an Amazon S3 bucket, or to infrastructure outside of AWS.

Amazon Route 53 lets you manage the IP addresses (records) listed for your domain names, and it answers requests (queries) to translate specific domain names into their corresponding IP addresses. Queries for your domain are automatically routed to a nearby DNS server using anycast in order to provide the lowest latency possible. Route 53 makes it possible for you to manage traffic globally through a variety of routing types, including Latency Based Routing (LBR), Geo DNS, and Weighted Round Robin (WRR), all of which can be combined with DNS Failover in order to help create a variety of low-latency, fault-tolerant architectures. The failover algorithms implemented by Amazon Route 53 are designed not only to route traffic to endpoints that are healthy, but also to help avoid making disaster scenarios worse due to misconfigured health checks and applications, endpoint overloads, and partition failures.

Route 53 also offers Domain Name Registration: you can purchase and manage domain names such as example.com, and Route 53 will automatically configure default DNS settings for your domains. You can buy, manage, and transfer (both in and out) domains from a wide selection of generic and country-specific top-level domains (TLDs). During the registration process, you have the option to enable privacy protection for your domain. This option will hide most of your personal information from the public Whois database in order to help thwart scraping and spamming.

Amazon Route 53 is built using AWS' highly available and reliable infrastructure. The distributed nature of the AWS DNS servers helps ensure a consistent ability to route your end users to your application. Route 53 also helps ensure the availability of your website by providing health checks and DNS failover capabilities. You can easily configure Route 53 to check the health of your website on a regular basis (even secure web sites that are available only over SSL), and to switch to a backup site if the primary one is unresponsive.

Like all AWS Services, Amazon Route 53 requires that every request made to its control API be authenticated, so only authenticated users can access and manage Route 53. API requests are signed with an HMAC-SHA1 or HMAC-SHA256 signature calculated from the request and the user's AWS Secret Access Key. Additionally, the Amazon Route 53 control API is only accessible via SSL-encrypted endpoints. It supports both IPv4 and IPv6 routing. You can control access to Amazon Route 53 DNS management functions by creating users under your AWS Account using AWS IAM, and controlling which Route 53 operations these users have permission to perform.
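The health check and DNS failover capability described above can be configured through the Route 53 API. The following minimal boto3 sketch creates a health check for a primary endpoint and a PRIMARY failover record that uses it; the hosted zone ID, domain name, and IP address are placeholders, and a matching SECONDARY record pointing at the backup site would be created the same way.

```python
import uuid

import boto3

route53 = boto3.client("route53")

health_check_id = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={"Type": "HTTPS", "IPAddress": "203.0.113.10",
                       "Port": 443, "ResourcePath": "/health"},
)["HealthCheck"]["Id"]

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "TTL": 60,
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "HealthCheckId": health_check_id,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
        },
    }]},
)
```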
Amazon CloudFront Security
Amazon CloudFront gives customers an easy way to distribute content to end users with low latency and high data transfer speeds. It delivers dynamic, static, and streaming content using a global network of edge locations. Requests for customers' objects are automatically routed to the nearest edge location, so content is delivered with the best possible performance. Amazon CloudFront is optimized to work with other AWS services like Amazon S3, Amazon EC2, Elastic Load Balancing, and Amazon Route 53. It also works seamlessly with any non-AWS origin server that stores the original, definitive versions of your files.

Amazon CloudFront requires every request made to its control API be authenticated, so only authorized users can create, modify, or delete their own Amazon CloudFront distributions. Requests are signed with an HMAC-SHA1 signature calculated from the request and the user's private key. Additionally, the Amazon CloudFront control API is only accessible via SSL-enabled endpoints.

There is no guarantee of durability of data held in Amazon CloudFront edge locations. The service may from time to time remove objects from edge locations if those objects are not requested frequently. Durability is provided by Amazon S3, which works as the origin server for Amazon CloudFront, holding the original, definitive copies of objects delivered by Amazon CloudFront.

If you want control over who is able to download content from Amazon CloudFront, you can enable the service's private content feature. This feature has two components: the first controls how content is delivered from the Amazon CloudFront edge location to viewers on the Internet. The second controls how the Amazon CloudFront edge locations access objects in Amazon S3. CloudFront also supports Geo Restriction, which restricts access to your content based on the geographic location of your viewers.

To control access to the original copies of your objects in Amazon S3, Amazon CloudFront allows you to create one or more "Origin Access Identities" and associate these with your distributions. When an Origin Access Identity is associated with an Amazon CloudFront distribution, the distribution will use that identity to retrieve objects from Amazon S3. You can then use Amazon S3's ACL feature, which limits access to that Origin Access Identity so the original copy of the object is not publicly readable.

To control who is able to download objects from Amazon CloudFront edge locations, the service uses a signed URL verification system. To use this system, you first create a public-private key pair and upload the public key to your account via the AWS Management Console. Second, you configure your Amazon CloudFront distribution to indicate which accounts you would authorize to sign requests; you can indicate up to five AWS Accounts you trust to sign requests. Third, as you receive requests, you will create policy documents indicating the conditions under which you want Amazon CloudFront to serve your content. These policy documents can specify the name of the object that is requested, the date and time of the request, and the source IP (or CIDR range) of the client making the request. You then calculate the SHA1 hash of your policy document and sign this using your private key. Finally, you include both the encoded policy document and the signature as query string parameters when you reference your objects. When Amazon CloudFront receives a request, it will decode the signature using your public key. Amazon CloudFront will only serve requests that have a valid policy document and matching signature.
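The SDKs can perform the policy signing described above for you. The following minimal sketch generates a CloudFront signed URL with botocore and the third-party rsa package; the key-pair ID, private key file, and URL are placeholders, and it assumes the corresponding public key has already been uploaded to your account.

```python
import datetime

import rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message: bytes) -> bytes:
    # CloudFront signed-URL policies are signed with RSA-SHA1.
    with open("cloudfront_private_key.pem", "rb") as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner("APKAEXAMPLEKEYPAIRID", rsa_signer)

signed_url = signer.generate_presigned_url(
    "https://dxxxxx.cloudfront.net/private/image.jpg",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```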
Note that private content is an optional feature that must be enabled when you set up your CloudFront distribution. Content delivered without this feature enabled will be publicly readable.

Amazon CloudFront provides the option to transfer content over an encrypted connection (HTTPS). By default, CloudFront will accept requests over both HTTP and HTTPS protocols. However, you can also configure CloudFront to require HTTPS for all requests, or have CloudFront redirect HTTP requests to HTTPS. You can even configure CloudFront distributions to allow HTTP for some objects but require HTTPS for other objects.

Figure 7: Amazon CloudFront Encrypted Transmission

You can configure one or more CloudFront origins to require CloudFront fetch objects from your origin using the protocol that the viewer used to request the objects. For example, when you use this CloudFront setting and the viewer uses HTTPS to request an object from CloudFront, CloudFront also uses HTTPS to forward the request to your origin.

Amazon CloudFront supports the TLSv1.1 and TLSv1.2 protocols for HTTPS connections between CloudFront and your custom origin webserver (along with SSLv3 and TLSv1.0), and a selection of cipher suites that includes the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol on connections to both viewers and the origin. ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This helps prevent the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised.

Note that if you're using your own server as your origin, and you want to use HTTPS both between viewers and CloudFront and between CloudFront and your origin, you must install a valid SSL certificate on the HTTP server that is signed by a third-party certificate authority, for example VeriSign or DigiCert.

By default, you can deliver content to viewers over HTTPS by using your CloudFront distribution domain name in your URLs; for example, https://dxxxxx.cloudfront.net/image.jpg. If you want to deliver your content over HTTPS using your own domain name and your own SSL certificate, you can use SNI Custom SSL or Dedicated IP Custom SSL. With Server Name Indication (SNI) Custom SSL, CloudFront relies on the SNI extension of the TLS protocol, which is supported by most modern web browsers. However, some users may not be able to access your content because some older browsers do not support SNI. With Dedicated IP Custom SSL, CloudFront dedicates IP addresses to your SSL certificate at each CloudFront edge location, so that CloudFront can associate the incoming requests with the proper SSL certificate.

Amazon CloudFront access logs contain a comprehensive set of information about requests for content, including the object requested, the date and time of the request, the edge location serving the request, the client IP address, the referrer, and the user agent. To enable access logs, just specify the name of the Amazon S3 bucket to store the logs in when you configure your Amazon CloudFront distribution.

AWS Direct Connect Security
With AWS Direct Connect, you can provision a direct link between your internal network and an AWS region using a high-throughput, dedicated connection. Doing this may help reduce your network costs, improve throughput, or provide a more consistent network experience. With this dedicated connection in place, you can then create
virtual interfaces directly to the AWS cloud (for example, to Amazon EC2 and Amazon S3). With AWS Direct Connect, you bypass Internet service providers in your network path. You can procure rack space within the facility housing the AWS Direct Connect location and deploy your equipment nearby. Once deployed, you can connect this equipment to AWS Direct Connect using a cross-connect.

Each AWS Direct Connect location enables connectivity to the geographically nearest AWS region. You can access all AWS services available in that region. AWS Direct Connect locations in the US can also access the public endpoints of the other AWS regions using a public virtual interface.

Using industry-standard 802.1q VLANs, the dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within an Amazon VPC using private IP space, while maintaining network separation between the public and private environments.

AWS Direct Connect requires the use of the Border Gateway Protocol (BGP) with an Autonomous System Number (ASN). To create a virtual interface, you use an MD5 cryptographic key for message authorization. MD5 creates a keyed hash using your secret key. You can have AWS automatically generate a BGP MD5 key, or you can provide your own.

Further Reading
https://aws.amazon.com/security/security-resources/
Introduction to AWS Security Processes
Overview of AWS Security Storage Services
Overview of AWS Security Database Services
Overview of AWS Security Compute Services
Overview of AWS Security Application Services
Overview of AWS Security Analytics, Mobile and Application Services
Overview of AWS Security – Network Services

Appendix – Glossary of Terms

Access Key ID: A string that AWS distributes in order to uniquely identify each AWS user; it is an alphanumeric token associated with your Secret Access Key.

Access control list (ACL): A list of permissions or rules for accessing an object or network resource. In Amazon EC2, security groups act as ACLs at the instance level, controlling which users have permission to access specific instances. In Amazon S3, you can use ACLs to give read or write access on buckets or objects to groups of users. In Amazon VPC, ACLs act like network firewalls and control access at the subnet level.

AMI: An Amazon Machine Image (AMI) is an encrypted machine image stored in Amazon S3. It contains all the information necessary to boot instances of a customer's software.

API: Application Programming Interface (API) is an interface in computer science that defines the ways by which an application program may request services from libraries and/or operating systems.

Archive: An archive in Amazon Glacier is a file that you want to store and is a base unit of storage in Amazon Glacier. It can be any data, such as a photo, video, or document. Each archive has a unique ID and an optional description.

Authentication: Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. Not only do users need to be authenticated, but every program that wants to call the functionality exposed by an AWS API must be authenticated. AWS requires that you authenticate every request by digitally signing it using a cryptographic hash function.

Auto Scaling: An AWS service that allows customers to
automatically scale their Amazon EC2 capacity up or down according to conditions they define.

Availability Zone: Amazon EC2 locations are composed of regions and availability zones. Availability zones are distinct locations that are engineered to be insulated from failures in other availability zones, and provide inexpensive, low-latency network connectivity to other availability zones in the same region.

Bastion host: A computer specifically configured to withstand attack, usually placed on the external/public side of a demilitarized zone (DMZ) or outside the firewall. You can set up an Amazon EC2 instance as an SSH bastion by setting up a public subnet as part of an Amazon VPC.

Bucket: A container for objects stored in Amazon S3. Every object is contained within a bucket. For example, if the object named photos/puppy.jpg is stored in the johnsmith bucket, then it is addressable using the URL: http://johnsmith.s3.amazonaws.com/photos/puppy.jpg

Certificate: A credential that some AWS products use to authenticate AWS Accounts and users. Also known as an X.509 certificate. The certificate is paired with a private key.

CIDR Block: Classless Inter-Domain Routing block of IP addresses.

Client-side encryption: Encrypting data on the client side before uploading it to Amazon S3.

CloudFormation: An AWS provisioning tool that lets customers record the baseline configuration of the AWS resources needed to run their applications, so that they can provision and update them in an orderly and predictable fashion.

Cognito: An AWS service that simplifies the task of authenticating users and storing, managing, and syncing their data across multiple devices, platforms, and applications. It works with multiple existing identity providers and also supports unauthenticated guest users.

Credentials: Items that a user or process must have in order to confirm to AWS services during the authentication process that they are authorized to access the service. AWS credentials include passwords and secret access keys, as well as X.509 certificates and multi-factor tokens.

Dedicated instance: Amazon EC2 instances that are physically isolated at the host hardware level (i.e., they will run on single-tenant hardware).

Digital signature: A digital signature is a cryptographic method for demonstrating the authenticity of a digital message or document. A valid digital signature gives a recipient reason to believe that the message was created by an authorized sender and that it was not altered in transit. Digital signatures are used by customers for signing requests to AWS APIs as part of the authentication process.

Direct Connect Service: Amazon service that allows you to provision a direct link between your internal network and an AWS region using a high-throughput, dedicated connection. With this dedicated connection in place, you can then create logical connections directly to the AWS cloud (for example, to Amazon EC2 and Amazon S3) and Amazon VPC, bypassing Internet service providers in the network path.

DynamoDB Service: A managed NoSQL database service from AWS that provides fast and predictable performance with seamless scalability.

EBS: Amazon Elastic Block Store (EBS) provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are off-instance storage that persists independently from the life of an instance.

ElastiCache: An AWS web service that allows you
to set up, manage, and scale distributed in-memory cache environments in the cloud. The service improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases.

Elastic Beanstalk: An AWS deployment and management tool that automates the functions of capacity provisioning, load balancing, and auto scaling for customers' applications.

Elastic IP Address: A static public IP address that you can assign to any instance in an Amazon VPC, thereby making the instance public. Elastic IP addresses also enable you to mask instance failures by rapidly remapping your public IP addresses to any instance in the VPC.

Elastic Load Balancing: An AWS service that is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all availability zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits, such as taking over the encryption/decryption work from EC2 instances and managing it centrally on the load balancer.

Elastic MapReduce (EMR) Service: An AWS service that utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. Elastic MapReduce enables customers to easily and cost-effectively process extremely large quantities of data ("big data").

Elastic Network Interface: Within an Amazon VPC, an Elastic Network Interface is an optional second network interface that you can attach to an EC2 instance. An Elastic Network Interface can be useful for creating a management network or using network or security appliances in the Amazon VPC. It can be easily detached from an instance and reattached to another instance.

Endpoint: A URL that is the entry point for an AWS service. To reduce data latency in your applications, most AWS services allow you to select a regional endpoint to make your requests. Some web services allow you to use a general endpoint that doesn't specify a region; these generic endpoints resolve to the service's us-east-1 endpoint. You can connect to an AWS endpoint via HTTP or secure HTTP (HTTPS) using SSL.

Federated users: Users, systems, or applications that are not currently authorized to access your AWS services, but that you want to give temporary access to. This access is provided using the AWS Security Token Service (STS) APIs.

Firewall: A hardware or software component that controls incoming and/or outgoing network traffic according to a specific set of rules. Using firewall rules in Amazon EC2, you specify the protocols, ports, and source IP address ranges that are allowed to reach your instances. These rules specify which incoming network traffic should be delivered to your instance (e.g., accept web traffic on port 80). Amazon VPC supports a complete firewall solution, enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination. Traffic can be restricted by any IP protocol, by service port, as well as source/destination IP address (individual IP or Classless Inter-Domain Routing (CIDR) block).

Guest OS: In a virtual machine environment, multiple operating systems can run on a single piece of hardware. Each one of these instances is considered a guest on the host hardware and utilizes its own OS.

Hash: A cryptographic
Hash: A cryptographic hash function is used to calculate a digital signature for signing requests to AWS APIs. A cryptographic hash is a one-way function that returns a unique hash value based on the input. The input to the hash function includes the text of your request and your secret access key. The hash function returns a hash value that you include in the request as your signature.

HMAC-SHA1/HMAC-SHA256: In cryptography, a keyed-Hash Message Authentication Code (HMAC or KHMAC) is a type of message authentication code (MAC) calculated using a specific algorithm involving a cryptographic hash function in combination with a secret key. As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message. Any iterative cryptographic hash function, such as SHA-1 or SHA-256, may be used in the calculation of an HMAC; the resulting MAC algorithm is termed HMAC-SHA1 or HMAC-SHA256 accordingly. The cryptographic strength of the HMAC depends upon the cryptographic strength of the underlying hash function, the size and quality of the key, and the size of the hash output length in bits.

Hardware security module (HSM): An HSM is an appliance that provides secure cryptographic key storage and operations within a tamper-resistant hardware device. HSMs are designed to securely store cryptographic key material and use the key material without exposing it outside the cryptographic boundary of the appliance. The AWS CloudHSM service provides customers with dedicated, single-tenant access to an HSM appliance.

Hypervisor: A hypervisor, also called a Virtual Machine Monitor (VMM), is computer software/hardware platform virtualization software that allows multiple operating systems to run on a host computer concurrently.

Identity and Access Management (IAM): AWS IAM enables you to create multiple users and manage the permissions for each of these users within your AWS Account.

Identity pool: A store of user identity information in Amazon Cognito that is specific to your AWS Account. Identity pools use IAM roles, which are permissions that are not tied to a specific IAM user or group and that use temporary security credentials for authenticating to the AWS resources defined in the role.

Identity Provider: An online service responsible for issuing identification information for users who would like to interact with the service or with other cooperating services. Examples of identity providers include Facebook, Google, and Amazon.

Import/Export Service: An AWS service for transferring large amounts of data to Amazon S3 or EBS storage by physically shipping a portable storage device to a secure AWS facility.

Instance: An instance is a virtualized server, also known as a virtual machine (VM), with its own hardware resources and guest OS. In EC2, an instance represents one running copy of an Amazon Machine Image (AMI).

IP address: An Internet Protocol (IP) address is a numerical label that is assigned to devices participating in a computer network utilizing the Internet Protocol for communication between its nodes.

IP spoofing: Creation of IP packets with a forged source IP address, called spoofing, with the purpose of concealing the identity of the sender or impersonating another computing system.

Key: In cryptography, a key is a parameter that determines the output of a cryptographic algorithm (called a hashing algorithm). A key pair is a set of security credentials you use to prove your identity electronically and consists of a public key and a private key.
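As an illustration of the Hash and HMAC-SHA256 entries above, the following minimal Python sketch signs a request string with a secret key using only the standard library. The message and key shown are placeholders; the full AWS Signature Version 4 process adds canonicalization and key-derivation steps on top of this primitive:

import hashlib
import hmac

secret_key = b"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # placeholder secret
request_text = b"GET\nexample.amazonaws.com\n/\n"          # placeholder request

# HMAC-SHA256 is a keyed one-way hash over the request text; the receiver
# recomputes the same value to verify both integrity and authenticity.
signature = hmac.new(secret_key, request_text, hashlib.sha256).hexdigest()
print(signature)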
Key rotation: The process of periodically changing the cryptographic keys used for encrypting data or digitally signing requests. Just like changing passwords, rotating keys minimizes the risk of unauthorized access if an attacker somehow obtains your key or determines the value of it. AWS supports multiple concurrent access keys and certificates, which allows customers to rotate keys and certificates into and out of operation on a regular basis without any downtime to their application.

Mobile Analytics: An AWS service for collecting, visualizing, and understanding mobile application usage data. It enables you to track customer behaviors, aggregate metrics, and identify meaningful patterns in your mobile applications.

Multi-factor authentication (MFA): The use of two or more authentication factors. Authentication factors include something you know (like a password) or something you have (like a token that generates a random number). AWS IAM allows the use of a six-digit single-use code in addition to the user name and password credentials. Customers get this single-use code from an authentication device that they keep in their physical possession (either a physical token device or a virtual token from their smart phone).

Network ACLs: Stateless traffic filters that apply to all traffic inbound or outbound from a subnet within an Amazon VPC. Network ACLs can contain ordered rules to allow or deny traffic based upon IP protocol, by service port, as well as by source/destination IP address.

Object: The fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe the object. These include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. The developer can also specify custom metadata at the time the object is stored.

Paravirtualization: In computing, paravirtualization is a virtualization technique that presents a software interface to virtual machines that is similar, but not identical, to that of the underlying hardware.

Peering: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network.

Port scanning: A port scan is a series of messages sent by someone attempting to break into a computer to learn which computer network services, each associated with a "well-known" port number, the computer provides.

Region: A named set of AWS resources in the same geographical area. Each region contains at least two Availability Zones.

Replication: The continuous copying of data from a database in order to maintain a second version of the database, usually for disaster recovery purposes. Customers can use multiple AZs for their Amazon RDS database replication needs, or use Read Replicas if using MySQL.

Relational Database Service (RDS): An AWS service that allows you to create a relational database (DB) instance and flexibly scale the associated compute resources and storage capacity to meet application demand. Amazon RDS is available for the Amazon Aurora, MySQL, PostgreSQL, Oracle, Microsoft SQL Server, and MariaDB database engines.
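Returning to the Key rotation entry above, the following minimal Python sketch (assuming the boto3 SDK and a hypothetical IAM user name) takes advantage of AWS support for two concurrent access keys to rotate without downtime:

import boto3

iam = boto3.client("iam")
user = "example-app-user"  # hypothetical IAM user

# 1. Create a second access key alongside the existing one.
new_key = iam.create_access_key(UserName=user)["AccessKey"]

# 2. Deploy new_key["AccessKeyId"] / new_key["SecretAccessKey"] to the
#    application, then deactivate the old key once traffic has moved over.
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    if key["AccessKeyId"] != new_key["AccessKeyId"]:
        iam.update_access_key(UserName=user,
                              AccessKeyId=key["AccessKeyId"],
                              Status="Inactive")
        # 3. After confirming nothing still uses it, delete the old key.
        iam.delete_access_key(UserName=user, AccessKeyId=key["AccessKeyId"])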
Role: An entity in AWS IAM that has a set of permissions that can be assumed by another entity. Use roles to enable applications running on your Amazon EC2 instances to securely access your AWS resources. You grant a specific set of permissions to a role, use the role to launch an Amazon EC2 instance, and let EC2 automatically handle AWS credential management for your applications that run on Amazon EC2.

Route 53: An authoritative DNS system that provides an update mechanism that developers can use to manage their public DNS names, answering DNS queries and translating domain names into IP addresses so computers can communicate with each other.

Secret Access Key: A key that AWS assigns to you when you sign up for an AWS Account. To make API calls or to work with the command line interface, each AWS user needs the Secret Access Key and Access Key ID. The user signs each request with the Secret Access Key and includes the Access Key ID in the request. To help ensure the security of your AWS Account, the Secret Access Key is accessible only during key and user creation. You must save the key (for example, in a text file that you store securely) if you want to be able to access it again.

Security group: A security group gives you control over the protocols, ports, and source IP address ranges that are allowed to reach your Amazon EC2 instances; in other words, it defines the firewall rules for your instance. These rules specify which incoming network traffic should be delivered to your instance (e.g., accept web traffic on port 80).

Security Token Service (STS): The AWS STS APIs return temporary security credentials consisting of a security token, an Access Key ID, and a Secret Access Key. You can use STS to issue security credentials to users who need temporary access to your resources. These users can be existing IAM users, non-AWS users (federated identities), systems, or applications that need to access your AWS resources.

Server-side encryption (SSE): An option for Amazon S3 storage for automatically encrypting data at rest. With Amazon S3 SSE, customers can encrypt data on upload simply by adding an additional request header when writing the object. Decryption happens automatically when data is retrieved.

Service: Software or computing ability provided across a network (e.g., Amazon EC2, Amazon S3).

Shard: In Amazon Kinesis, a shard is a uniquely identified group of data records in an Amazon Kinesis stream. A Kinesis stream is composed of multiple shards, each of which provides a fixed unit of capacity.

Signature: Refers to a digital signature, which is a mathematical way to confirm the authenticity of a digital message. AWS uses signatures calculated with a cryptographic algorithm and your private key to authenticate the requests you send to our web services.

Simple Data Base (SimpleDB): A non-relational data store that allows AWS customers to store and query data items via web services requests. Amazon SimpleDB creates and manages multiple geographically distributed replicas of the customer's data automatically to enable high availability and data durability.

Simple Email Service (SES): An AWS service that provides a scalable bulk and transactional email-sending service for businesses and developers. In order to maximize deliverability and dependability for senders, Amazon SES takes proactive steps to prevent questionable content from being sent, so that ISPs view the service as a trusted email origin.
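To illustrate the server-side encryption (SSE) entry above, here is a minimal Python sketch (assuming the boto3 SDK and a hypothetical bucket name) of requesting AES-256 encryption at rest on upload; the SDK sends the corresponding x-amz-server-side-encryption request header for you:

import boto3

s3 = boto3.client("s3")

# S3 encrypts the object with AES-256 before storing it.
s3.put_object(
    Bucket="example-secure-bucket",  # hypothetical bucket
    Key="reports/2016-06.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="AES256",
)

# Decryption is automatic on retrieval; no extra arguments are needed.
obj = s3.get_object(Bucket="example-secure-bucket", Key="reports/2016-06.csv")
print(obj["Body"].read())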
Simple Mail Transfer Protocol (SMTP): An Internet standard for transmitting email across IP networks. SMTP is used by the Amazon Simple Email Service. Customers who use Amazon SES can use an SMTP interface to send email, but must connect to an SMTP endpoint via TLS.

Simple Notification Service (SNS): An AWS service that makes it easy to set up, operate, and send notifications from the cloud. Amazon SNS provides developers with the ability to publish messages from an application and immediately deliver them to subscribers or other applications.

Simple Queue Service (SQS): A scalable message queuing service from AWS that enables asynchronous message-based communication between distributed components of an application. The components can be computers or Amazon EC2 instances, or a combination of both.

Simple Storage Service (Amazon S3): An AWS service that provides secure storage for object files. Access to objects can be controlled at the file or bucket level and can be further restricted based on other conditions, such as request IP source, request time, etc. Files can also be encrypted automatically using AES-256 encryption.

Simple Workflow Service (SWF): An AWS service that allows customers to build applications that coordinate work across distributed components. Using Amazon SWF, developers can structure the various processing steps in an application as "tasks" that drive work in distributed applications. Amazon SWF coordinates these tasks, managing task execution dependencies, scheduling, and concurrency based on a developer's application logic.

Single sign-on: The capability to log in once but access multiple applications and systems. A secure single sign-on capability can be provided to your federated users (AWS and non-AWS users) by creating a URL that passes the temporary security credentials to the AWS Management Console.

Snapshot: A customer-initiated backup of an EBS volume that is stored in Amazon S3, or a customer-initiated backup of an RDS database that is stored in Amazon RDS. A snapshot can be used as the starting point for a new EBS volume or Amazon RDS database, or to protect the data for long-term durability and recovery.

Secure Sockets Layer (SSL): A cryptographic protocol that provides security over the Internet at the Application Layer. Both the TLS 1.0 and SSL 3.0 protocol specifications use cryptographic mechanisms to implement the security services that establish and maintain a secure TCP/IP connection. The secure connection prevents eavesdropping, tampering, or message forgery. You can connect to an AWS endpoint via HTTP or secure HTTP (HTTPS) using SSL.

Stateful firewall: In computing, a stateful firewall (any firewall that performs stateful packet inspection (SPI) or stateful inspection) is a firewall that keeps track of the state of network connections (such as TCP streams or UDP communication) traveling across it.

Storage Gateway: An AWS service that securely connects a customer's on-premises software appliance with Amazon S3 storage by using a VM that the customer deploys on a host in their data center running VMware ESXi Hypervisor. Data is asynchronously transferred from the customer's on-premises storage hardware to AWS over SSL, and then stored encrypted in Amazon S3 using AES-256.

Temporary security credentials: AWS credentials that provide temporary access to AWS services. Temporary security credentials can be used to provide identity federation between AWS services and non-AWS users in your own identity and authorization system. Temporary security credentials consist of a security token, an Access Key ID, and a Secret Access Key.
Transcoder: A system that transcodes (converts) a media file (audio or video) from one format, size, or quality to another. Amazon Elastic Transcoder makes it easy for customers to transcode video files in a scalable and cost-effective fashion.

Transport Layer Security (TLS): A cryptographic protocol that provides security over the Internet at the Application Layer. Customers who use Amazon's Simple Email Service must connect to an SMTP endpoint via TLS.

Tree hash: A tree hash is generated by computing a hash for each megabyte-sized segment of the data, and then combining the hashes in tree fashion to represent ever-growing adjacent segments of the data. Amazon Glacier checks the hash against the data to help ensure that it has not been altered en route.

Vault: In Amazon Glacier, a vault is a container for storing archives. When you create a vault, you specify a name and select an AWS region where you want to create the vault. Each vault resource has a unique address.

Versioning: Every object in Amazon S3 has a key and a version ID. Objects with the same key but different version IDs can be stored in the same bucket. Versioning is enabled at the bucket layer using PUT Bucket versioning.

Virtual Instance: Once an AMI has been launched, the resulting running system is referred to as an instance. All instances based on the same AMI start out identical, and any information on them is lost when the instances are terminated or fail.

Virtual MFA: The capability for a user to get the six-digit, single-use MFA code from their smart phone rather than from a token/fob. MFA is the use of an additional factor (the single-use code) in conjunction with a user name and password for authentication.

Virtual Private Cloud (VPC): An AWS service that enables customers to provision an isolated section of the AWS cloud, including selecting their own IP address range, defining subnets, and configuring routing tables and network gateways.

Virtual Private Network (VPN): The capability to create a private, secure network between two locations over a public network, such as the Internet. AWS customers can add an IPsec VPN connection between their Amazon VPC and their data center, effectively extending their data center to the cloud while also providing direct access to the Internet for public subnet instances in their Amazon VPC. In this configuration, customers add a VPN appliance on their corporate data center side.

WorkSpaces: An AWS managed desktop service that enables you to provision cloud-based desktops for your users and allows them to sign in using a set of unique credentials or their regular Active Directory credentials.

X.509: In cryptography, X.509 is a standard for a Public Key Infrastructure (PKI) for single sign-on and Privilege Management Infrastructure (PMI). X.509 specifies standard formats for public key certificates, certificate revocation lists, attribute certificates, and a certification path validation algorithm. Some AWS products use X.509 certificates instead of a Secret Access Key for access to certain interfaces. For example, Amazon EC2 uses a Secret Access Key for access to its Query interface, but it uses a signing certificate for access to its SOAP interface and command line tool interface.

WorkDocs: An AWS managed enterprise storage and sharing service with feedback capabilities for user collaboration.
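As a concrete illustration of the Tree hash entry above, the following minimal Python sketch (assuming SHA-256 over 1 MB segments, as Amazon Glacier uses) computes a tree hash from an in-memory byte string:

import hashlib

MB = 1024 * 1024

def tree_hash(data: bytes) -> bytes:
    # Hash each 1 MB segment of the data.
    level = [hashlib.sha256(data[i:i + MB]).digest()
             for i in range(0, max(len(data), 1), MB)]
    # Combine adjacent hashes pairwise until a single root hash remains;
    # an odd hash at the end of a level is carried up unchanged.
    while len(level) > 1:
        pairs = [level[i:i + 2] for i in range(0, len(level), 2)]
        level = [hashlib.sha256(b"".join(p)).digest() if len(p) == 2 else p[0]
                 for p in pairs]
    return level[0]

print(tree_hash(b"x" * (3 * MB)).hex())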
Document Revisions

Jun 2016
• Updated compliance programs
• Updated regions

Nov 2014
• Updated compliance programs
• Updated shared security responsibility model
• Updated AWS Account security features
• Reorganized services into categories
• Updated several services with new features: CloudWatch, CloudTrail, CloudFront, EBS, ElastiCache, Redshift, Route 53, S3, Trusted Advisor, and WorkSpaces
• Added Cognito Security
• Added Mobile Analytics Security
• Added WorkDocs Security

Nov 2013
• Updated regions
• Updated several services with new features: CloudFront, Direct Connect, DynamoDB, EBS, ELB, EMR, Amazon Glacier, IAM, OpsWorks, RDS, Redshift, Route 53, Storage Gateway, and VPC
• Added AppStream Security
• Added CloudTrail Security
• Added Kinesis Security
• Added WorkSpaces Security

May 2013
• Updated IAM to incorporate roles and API access
• Updated MFA for API access for customer-specified privileged actions
• Updated RDS to add event notification, multi-AZ, and SSL to SQL Server 2012
• Updated VPC to add multiple IP addresses, static routing, VPN, and VPC By Default
• Updated several other services with new features: CloudFront, CloudWatch, EBS, ElastiCache, Elastic Beanstalk, Route 53, S3, Storage Gateway
• Added Glacier Security
• Added Redshift Security
• Added Data Pipeline Security
• Added Transcoder Security
• Added Trusted Advisor Security
• Added OpsWorks Security
• Added CloudHSM Security
General
Designing_MQTT_Topics_for_AWS_IoT_Core
This version has been archived. For the latest version, refer to: https://docs.aws.amazon.com/whitepapers/latest/designing-mqtt-topics-aws-iot-core/designing-mqtt-topics-aws-iot-core.html?did=wp_card&trk=wp_card

Designing MQTT Topics for AWS IoT Core

May 2019

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.
General
Automating_Governance_on_AWS
Automating Governance: A Managed Service Approach to Security and Compliance on AWS

August 2015

THIS PAPER HAS BEEN ARCHIVED. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2015 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Abstract
Introduction
Shared Responsibility Environment
Compliance Requirements
Compliance and Governance
Challenges in Architecting for Governance
Implementing a Managed Services Organization
Standardizing Architecture for Compliance
Architectural Baselines
The Shared Services VPC
Automating for Compliance
Automating Compliance for EC2 Instances
Development & Management
Deployment
Automating for Governance: High-Level Steps
Step 1: Define Common Use Cases
Step 2: Create and Document Reference Architectures
Step 3: Validate and Document Architecture Compliance
Step 4: Build Automated Solutions Based on Architecture
Step 5: Develop an Accreditation and Approval Process
Conclusion
Contributors
Notes

Abstract

This whitepaper is intended for existing and potential Amazon Web Services (AWS) customers who are implementing security controls for applications running on AWS. It provides guidelines for developing and implementing a managed service approach to deploying applications in AWS. The guidelines described provide enterprise customers with greater control over their applications while accelerating the process of deploying, authorizing, and monitoring these applications. This paper is targeted at IT decision makers and security personnel and assumes familiarity with basic networking, operating system, data encryption, and operational control security practices.

Introduction

Governance encompasses an organization's mission, long-term goals, responsibilities, and decision making. Gartner describes governance as "the processes that ensure the effective and efficient use of IT in enabling an organization to achieve its goals." [1] An effective governance strategy defines both the frameworks for achieving goals and the decision makers who create them:

• Frameworks – The policies, principles, and guidelines that drive consistent IT decision making
• Decision makers – The entities or individuals who are responsible and accountable for IT decisions

Well-developed frameworks ultimately can yield an efficient, secure, and compliant technology environment. This paper describes how to develop and automate these frameworks by
introducing the following concepts and practices:

• A managed service organization (MSO) that is part of a centralized cloud governance model
• Roles and responsibilities of the MSO on the customer side of the AWS shared responsibility model
• Shared services and the use of Amazon Virtual Private Cloud (Amazon VPC) within AWS
• Architectural baselines for establishing minimum configuration requirements for applications being deployed in AWS
• Automation methods that can facilitate application deployment and simplify compliance accreditation

Shared Responsibility Environment

Moving IT infrastructure to services in AWS creates a model of shared responsibility between the customer and AWS. This shared model helps relieve the operational burden on the customer because AWS operates, manages, and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. The customer assumes responsibility for, and management of, the guest operating system (including responsibility for updates and security patches) and other associated application software, as well as the configuration of the AWS-provided security group firewall. Customers must carefully consider the services they choose, because their responsibilities vary depending on the services they use, the integration of those services into their IT environment, and applicable laws and regulations.

Figure 1: The AWS Shared Responsibility Model

This customer/AWS shared responsibility model also extends to IT controls. Just as AWS and its customers share the responsibility for operating the IT environment, they also share the management, operation, and verification of IT controls. AWS can help relieve the customer of the burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that might previously have been managed by the customer. Customers can shift the management of certain IT controls to AWS, which results in a (new) distributed control environment. Customers can then use the AWS control and compliance documentation to perform their control evaluation and verification procedures as required under the applicable compliance standard.

Compliance Requirements

The infrastructure and services provided by AWS are approved to operate under several compliance standards and industry certifications. These certifications cover only the AWS side of the shared responsibility model; customers retain the responsibility for certifying and accrediting workloads that are deployed on top of the AWS-provided services that they run. The following common compliance standards have unique requirements that customers must consider:

• NIST SP 800-53 [2] – Published by the National Institute of Standards and Technology (NIST), NIST SP 800-53 is a catalog of security controls, which most US federal agencies must comply with and which are widely used within private sector enterprises. Provides a risk management framework that adheres to the Federal Information Processing Standard (FIPS).
• FedRAMP [3] – A US government program for ensuring standards in security assessment, authorization, and continuous monitoring. FedRAMP follows the NIST 800-53 security control standards.
• DoD Cloud Security Model
(CSM) [4] – Standards for cloud computing issued by the US Defense Information Systems Agency (DISA) and documented in the Department of Defense (DoD) Security Requirements Guide (SRG). Provides an authorization process for DoD workload owners who have unique architectural requirements depending on impact level.
• HIPAA [5] – The Health Insurance Portability and Accountability Act (HIPAA) contains strict security and compliance standards for organizations processing or storing Protected Health Information (PHI).
• ISO 27001 [6] – ISO 27001 is a widely adopted global security standard that outlines the requirements for information security management systems. It provides a systematic approach to managing company and customer information that's based on periodic risk assessments.
• PCI DSS [7] – Payment Card Industry (PCI) Data Security Standards (DSS) are strict security standards for preventing fraud and protecting cardholder data for merchants that process credit card payments.

Evaluating systems in the cloud can be a challenge unless there are architectural standards that align with compliance requirements. These architectural standards are especially critical for customers who must prove their systems meet strict compliance standards before they are permitted to go into production.

Compliance and Governance

AWS customers are required to continue to maintain adequate governance over the entire IT control environment, regardless of whether it is deployed in a traditional data center or in the cloud. Leading governance practices include:

• Understanding required compliance objectives and requirements (from relevant sources)
• Establishing a control environment that meets those objectives and requirements
• Understanding the validation required based on the organization's risk tolerance
• Verifying the operational effectiveness of the control environment

Deployment in the AWS cloud gives organizations options to apply various types of controls and verification methods. Workload owners can follow these basic steps to ensure strong governance and compliance:

1. Review information from AWS and other sources to understand the entire IT environment.
2. Document all compliance requirements.
3. Design and implement control objectives to meet the organization's compliance requirements.
4. Identify and document controls owned by outside parties.
5. Verify that all control objectives are met and all key controls are designed and operating effectively.

Approaching compliance governance in this manner will help customers gain a better understanding of their control environment and help clearly define the verification activities that must be performed. For more information on governance in the cloud, see Security at Scale: Governance in AWS. [8]

Challenges in Architecting for Governance

AWS provides a high level of flexibility in how customers can design architectures for their applications in the cloud. AWS has documented best practices in the whitepapers, user guides, API references, and other resources that describe how to design for elasticity, availability, and security. But these resources alone do not prevent bad design and improper configuration. Architectural decisions that impact security can put customer data or personal information at risk and create liability. Consider the following challenges:

• Building a single workload with different architecture choices that is still
compliant
• The need to individually assess each of these unique architectures
• The high level of flexibility leaves room for error, and serious mistakes can be resolved only by redeployment of the application
• Security analysts may not understand the differences between the many architectural decisions

Learning Curve

By deploying applications in AWS, workload owners and developers have a much greater level of control over, and access to, resources beyond the operating system and software. However, the number of decisions required when building an architecture can be overwhelming for those new to AWS. Some of these architectural decisions include how to address:

• Amazon VPC structure and network controls
• AWS Identity and Access Management (IAM) configuration, policies, and permissions; Amazon Simple Storage Service (S3) bucket policies
• Storage and database options
• Load balancing
• Monitoring options, alerts, and tagging
• Aggregation, analysis, and storage considerations for logging produced by a workload or AWS service

Implementing a Managed Services Organization

To implement governance, AWS customers have begun establishing centralized teams within their organizations that facilitate the migration of legacy applications and the development of new applications. Such a team can be called a provisioning team, a center of excellence, a broker, and, most commonly, the managed service organization (MSO), which is the term we use. Customers use an MSO to establish repeatable processes and templates for deploying applications to AWS while maintaining organizational control over their enterprise's applications. When the MSO function is outsourced, it is generally referred to as a managed service partner (MSP). Many MSPs are validated by AWS under our Managed Service Program. [9]

Understanding the enterprise's cloud governance model is key to determining the provisioning strategy for accounts, Amazon VPCs, and applications, and for deciding how to automate these processes. Large enterprises generally centrally manage cloud operations at some level. It is important to find the optimal balance between central management and decentralized control. [10]

In a centralized governance model, an MSO provides the minimum requirements for workload owners who are deploying applications in the cloud:

• Guardrails for security, data protection, and disaster recovery
• Shared services for security, continuous monitoring, connectivity, and authentication
• Auditing the deployments of workload owners to ensure adherence to security and compliance standards

For most large enterprises, there are typically two sets of cloud governance roles involved in the deployment of applications:

• MSO – As previously mentioned, a component of centralized cloud governance; responsibilities can include account provisioning, establishment of connectivity and Amazon VPC networking, security auditing, hosting of shared services, billing, and cost management.
• Workload Owners – Those who are directly responsible for the deployment, development, and maintenance of applications; a workload owner can be a cost center or a department, and may include system administrators, developers, and others directly responsible for one or more applications.

Enterprise customers establish an MSO when there are common functions that can be centralized to ensure that applications are deployed in a secure and compliant fashion. The
MSO can also accelerate the rate of migration through reuse of approved configurations, which minimizes development and approval time while ensuring compliance through the automated implementation of organizational security requirements.

Figure 2: Shared Responsibility Between the CSP, the MSO, and the Workload Owner

Adding an MSO allows the authorization documentation of the workload owner to be scoped down to only the configuration and installation of software specific to a particular application, because the workload owner inherits a significant portion of the security control implementation from AWS and the organization's MSO. Establishing an MSO requires some up-front work, but this investment provides enhanced control over applications, increased speed to deployment, decreased time to authorization, and overall enhancement of the enterprise's security posture.

Common Activities of the MSO

MSOs implemented by AWS customers often perform the following activities:

• Account provisioning. After reviewing the workload owner's use case, the MSO establishes the initial account, connects it to the appropriate account for consolidated billing, and configures basic security functionality prior to granting access to the workload owner.
• Security oversight. Centralized account provisioning allows the MSO to implement features that enable security personnel to monitor the application as it is deployed and managed; the MSO might perform activities such as establishing an auditor group with cross-account access and linking the application VPC to a shared services VPC that is controlled by the MSO.
• Amazon VPC configuration. Deploying the VPC and its subnets, including configuring security groups and network ACLs. To maintain tighter control over the application VPCs, the MSO may retain control of VPC configuration and require the workload owner to request desired changes to network security.
• IAM configuration. Creating user groups and assignment of rights, including creation of groups for internal auditors, an IAM superuser, and application administrative groups segregated by functionality (e.g., database and Unix administrators).
• Development and approval of templates. Creating pre-approved AWS CloudFormation templates for common use cases. Using templates allows workload owners to inherit the security implementation of the approved template, thereby limiting their authorization documentation to the features that are unique to their application. Templates can be reused to shorten the time required to approve and deploy new applications.
• AMI creation and management. Creating a library of common approved Amazon Machine Images (AMIs) for the organization, allowing centralized management and updating of machine images. Creating common templates allows the MSO to enforce the use of approved AMIs.
• Development of a shared services VPC. A shared services VPC allows the MSO to receive continuous monitoring feeds from the organization's application VPCs and to provide common shared services that are required for their organization. This often includes a shared access management platform, logging endpoints, and the aggregation of configuration information.

Standardizing Architecture for Compliance

The solution to the challenge of implementing security controls for applications running on AWS is
to build standardized, automated, and repeatable architectures that can be deployed for common use cases. Automation can help customers easily meet the foundational requirements for building a secure application in the AWS cloud while providing a level of uniformity that follows proven best practices.

Architectural Baselines

To determine the best method for standardizing and automating architecture in AWS, establish baseline requirements up front. These are the minimum common requirements to which most (or all) workloads must adhere. An enterprise's baseline requirements normally follow preexisting compliance controls, regulatory guidelines, security standards, and best practices. Typically, a central department or group of individuals who are also involved in the monitoring, auditing, and evaluation of systems being deployed establish standard architectures based upon their baseline compliance and operational requirements. Standard architectures can be shared among multiple applications and use cases within an organization. This provides efficiency and uniformity and reduces the time and effort spent in designing architectures for new applications on AWS. In an organization with a centralized cloud model, these standard architectures are deployed during the account provisioning or application onboarding process.

Access Control/IAM Configuration

IAM is central to securely controlling access to AWS resources. Administrators can create users, groups, and roles with specific access policies to control which actions users and applications can perform through the AWS Management Console or AWS API. Federation allows IAM roles to be mapped to permissions from central directory services. The enterprise should determine how to implement the following IAM controls:

• Standard users, groups, or both that will exist in every account
• Cross-account roles or federated roles
• Roles for EC2 instances and application access to the AWS API
• Roles requiring access to S3 buckets and other shared resources
• Security requirements such as password policies and multi-factor authentication (MFA)

Networking/VPC Configuration

Network boundaries and components are critical to deploying a secure architecture in the cloud. An Amazon VPC is a logically isolated section of the AWS cloud, which can be configured to enforce these network boundaries. An AWS account can have one or more Amazon VPCs. Subnets are logical groupings of IP address space within an Amazon VPC and exist within a single Availability Zone (AZ). A VPC strategy depends on the requirements of a common use case. Amazon VPCs can be designated based on application lifecycle (production, development) or on role (management, shared services). A well-documented Amazon VPC strategy will also take into account:

• The number of Amazon VPCs per AWS account
• The subnet structure within an Amazon VPC: the number of subnets and routing capabilities of each subnet
• High availability requirements: Amazon VPC subnets across Availability Zones (AZs)
• Connectivity options: internet gateways, virtual private gateways, and routing

AWS provides the components necessary for controlling the network boundaries of an application in an Amazon VPC. The following table lists examples of Amazon VPC networking controls that can be utilized in AWS.

Control | Implementation | Protection Provided
VPC Routing Tables | Control which VPC subnets may communicate directly with the Internet | Provides segmentation and broad reduction of attack surface area per subnet
VPC Network Access Control Lists (NACLs) | Subnet level; all traffic allowed by default; stateless filtering designed and implemented across one or more VPC subnets | Provides blacklist protection for ports and protocols with security concerns, such as TFTP and NetBIOS
VPC Security Group(s) | Hypervisor level; all inbound connections denied by default; stateful filtering designed for one or more instances | Provides whitelist abilities for ingress and egress traffic, opening services and protocols required by the instance and applications
Host-based Protection | Customer-selected software to provide intrusion detection and prevention, and firewall and/or logging capabilities | Depending on product implemented, can provide scalable protection and detection capabilities and security behavior visibility across your virtual fleet
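To make the security group row of the table concrete, the following is a minimal Python sketch (assuming the boto3 SDK; the group name and VPC ID are hypothetical) of creating a whitelist-style rule set from code:

import boto3

ec2 = boto3.client("ec2")

# Create an empty security group; all inbound traffic is denied by default.
sg = ec2.create_security_group(
    GroupName="web-tier-example",        # hypothetical name
    Description="Whitelist HTTPS only",
    VpcId="vpc-0123456789abcdef0",       # hypothetical VPC ID
)

# Whitelist a single ingress rule: HTTPS from any source.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)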
Because VPC networking configuration is critical to ensure the confidentiality, integrity, and availability of an application, enterprises should define standards that adhere to security and AWS best practices. MSOs should follow these standards, or, in the case of decentralized deployment, workload owners should have a blueprint to follow when building a VPC structure.

Resource Tagging

Almost all AWS resources allow the addition of user-defined tags. These tags are metadata and are irrelevant to the functionality of the resource, but are critical for cost management and access control. When multiple groups of users or multiple workload owners exist within the same AWS account, restricting access to resources based on tagging is important. Regardless of account structure, tag-based IAM policies can be used to place extra security restrictions on critical resources. The following example of an IAM policy specifies a condition that restricts an IAM user to changing the state of an EC2 instance that has the resource tag of "project = 12345".

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/project": "12345"
        }
      },
      "Resource": [
        "arn:aws:ec2:your_region:your_account_ID:instance/*"
      ],
      "Effect": "Allow"
    }
  ]
}

AWS recommends the following to effectively use resource tagging:

• Establish tagging baselines that define common keys and expected values across all accounts
• Implement tag enforcement through both auditing and automation methods (see the audit sketch following this list)
• Use automated deployment with AWS CloudFormation to automatically tag resources
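As referenced in the bullets above, tag enforcement can be automated with a short audit script. The following minimal Python sketch (assuming the boto3 SDK and a hypothetical required key of "project") lists EC2 instances that are missing a required tag:

import boto3

ec2 = boto3.client("ec2")
required_key = "project"  # hypothetical tagging baseline key

# Walk all reservations and flag instances without the required tag.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if required_key not in tags:
                print("Untagged instance:", instance["InstanceId"])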
AMI Configuration

Organizations commonly ensure security and compliance by centrally providing workload owners with prebuilt Amazon Machine Images (AMIs). These "golden" AMIs can be preconfigured with host-based security software and be hardened based on predetermined security guidelines. Workload owners and developers can then use the AMIs as starting images on which to install their own software and configuration, knowing the images are already compliant. Note that managing centrally distributed AMIs can be an involved task for any central team. Do not customize software and configuration which are likely to change frequently in an AMI; instead, configure them by using Amazon Elastic Compute Cloud (Amazon EC2) user data scripts or automation tools such as Chef, Puppet, or AWS OpsWorks.

Figure 3: Differences Between Fully-Configured and Base AMIs

Figure 3 shows how preconfigured AMIs can be used, through automation and policy, as the standard to control which new EC2 instances are deployed by workload owners. Building AMIs can be partially automated by using tools such as Aminator and Packer. [11]

Continuous Monitoring

Continuous monitoring is the proactive approach of identifying risk and compliance issues by accurately tracking and monitoring system activity. Certain compliance standards, such as NIST SP 800-53, require continuous monitoring to meet specific security controls. AWS includes several services and native capabilities that can facilitate a continuous monitoring solution in the cloud.

AWS CloudTrail

AWS CloudTrail is a service that logs API activity within an AWS account and delivers these logs to an Amazon Simple Storage Service (Amazon S3) bucket. This data can be analyzed with third-party tools such as Splunk, Alert Logic, or CloudCheckr. [12] As a security standard, CloudTrail should be enabled on all accounts and should log to a bucket that is accessible by security tools and applications.

Amazon CloudWatch Alarms

Amazon CloudWatch alarms notify users and applications when events related to AWS resources occur. For example, the failure of an instance can trigger an alarm to send an Amazon Simple Notification Service (Amazon SNS) notification by email to a group of users. You can create common alarms for metrics and events within an account that must be monitored.

Centralized Logging

In AWS, application logs can be centralized for analysis by security tools. This can be simplified by using Amazon CloudWatch Logs. CloudWatch Logs provides an agent which can be configured to send application log data directly to CloudWatch. Metric filters can then be used to track certain events and activity at the OS and application levels.

Notifications

Amazon SNS can be used to send email or SMS-based notifications to administrative and security staff. Within an AWS account, you can create Amazon SNS topics to which applications and AWS CloudFormation deployments can publish. These push notifications can automatically be sent to individuals or groups within the organization who need to be notified of Amazon CloudWatch alarms, resource deployments, or other activity published by applications to Amazon SNS.

AWS Config

AWS Config is a service that provides you with an AWS resource inventory, a configuration history, and configuration change notifications, all of which enable security and governance. [13] AWS Config allows detailed tracking and notification whenever a resource in an AWS account is created, modified, or deleted.
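A minimal Python sketch of these continuous monitoring building blocks (assuming the boto3 SDK, a pre-existing log bucket whose policy already grants CloudTrail write access, and hypothetical instance and SNS topic identifiers) might wire up CloudTrail and a CloudWatch alarm like this:

import boto3

# Enable API activity logging to a central security bucket.
cloudtrail = boto3.client("cloudtrail")
cloudtrail.create_trail(Name="org-security-trail",
                        S3BucketName="example-security-logs")
cloudtrail.start_logging(Name="org-security-trail")

# Alarm on a basic instance health metric and notify security staff via SNS.
cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="instance-status-check-failed",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],  # hypothetical
)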
The Shared Services VPC

Our enterprise customers have found that establishing a single Amazon VPC that contains the security applications required for monitoring their applications simplifies centralized control of infrastructure and provides easier access to common features such as Network Time Protocol (NTP) servers, directory services, and certificate management repositories.

Figure 4: A Sample Shared-Services Amazon VPC Approach for DoD Customers

Figure 4 provides an example of a shared services VPC approach used by a DoD MSO that establishes two VPCs for use by all of their applications. In the first VPC, the MSO established a VPC dedicated to providing a web application firewall that screens all traffic for known attack patterns and creates a single point for monitoring web traffic, yet does not create a single point of failure due to its ability to scale with traffic. In the second VPC, the MSO hosts a variety of common services, including Active Directory servers, DNS servers, NTP servers, Host-Based Security System (HBSS) ePolicy Orchestrator (ePO) rollup servers, and a master Assured Compliance Assessment Solution (ACAS) Security Center server. Each organization must determine the common services that they must host in their AWS environment to support the needs of workload owners.

Automating for Compliance

Any customer can create prebuilt and customizable reference architectures with the tools AWS provides, although it does require a level of effort and expertise.

Automation Methods

AWS CloudFormation is the core of AWS infrastructure automation. The service allows you to automatically deploy complete architectures by using prebuilt JSON-formatted template files. The set of resources created by an AWS CloudFormation template is referred to as a "stack."

Modular Design for Compliance Automation

When building enterprise-wide AWS CloudFormation templates to automate compliance, we recommend that you use a modular design. Use separate stacks based on the commonality of configuration among applications. This can automate and enforce the baseline standards for security and compliance described in the previous sections. Figure 5 shows how a customer can develop and maintain AWS CloudFormation templates using a modular design. A single workload would use one template from each of these stacks, nested in a single template, to deploy and configure an entire application.

Figure 5: AWS CloudFormation Stacks

Stack 1 – Stack 1 is the primary security template applied to each account; it deploys common IAM users, roles, groups, and associated policies.

Stack 2 – Generally, there will be a template for each common use case to deploy the associated VPC architecture; this can take into account connectivity options such as VPC peering, NAT instances, and internet and virtual private gateways.

Stack 3 – There is a template for each common configuration of an application architecture. These templates contain application-related components that are common among multiple applications but distinct among use cases, such as Elastic Load Balancing load balancers, SSL configuration, common security groups, and common S3 buckets.

Stack 4 – There is a template for each specific application that deploys the associated EC2 instances, Auto Scaling groups, and other instance-level resources. In this stack, instances can be bootstrapped with required user data, and other resources, such as application-specific security groups, can be created.
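As an illustration of this modular, nested-stack design, the following minimal Python sketch (assuming the boto3 SDK and a hypothetical main template URL in S3) launches a package whose main template nests the four stacks described above:

import boto3

cloudformation = boto3.client("cloudformation")

# The main template (hypothetical URL) nests the IAM, VPC, application
# architecture, and application templates as AWS::CloudFormation::Stack
# resources, so a single call deploys the whole package.
cloudformation.create_stack(
    StackName="app2-use-case-package",
    TemplateURL="https://s3.amazonaws.com/example-templates/main.json",  # hypothetical
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "production"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # required because Stack 1 creates IAM resources
)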
centralized cloud team that is responsible for provisioning might allow workload owners to provision only the applicationlevel components of the architecture while retaining responsibility for initial account provisioning IAM controls and Amazon VPC configuration To successfully build templates to automate compliance:  Keep templates modular; use nested stacks when possible  Use parameters as much as necessary to ensure flexibility  Use the DependsOn attribute and wait conditions to prevent dependency issues when resources are deployed  Develop a version control process to maintain template packages ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 23 of 39  Allow for command line interface (CLI)based or AWS Service Catalog based deployment  Use a parameters file  Use IAM policies to restrict the ability of users to delete AWS CloudFormation stacks Automating Compliance for EC2 Instances There are four tools for automating the configuration of EC2 instances at the operating system and application levels to meet compliance requirements Custom AMIs AWS allows you to create customized AMIs that can be built and hardened for use by workload owners to further install software and applications Building a compliant AMI may requires you to take into account the following:  Software packages and updates  Password policies  SSH keys  File system permissions/ownership  File system encryption  User/group configuration  Access control settings  Continuous monitoring tools  Firewall rules  Running services User Data Scripts You can employ user data to bootstrap EC2 instances to install packages and perform configuration on launch Utilize user data to directly manipulate instance configuration with any of the following tools: ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 24 of 39  CloudInit directives – Specify configuration parameters in user data which cloudinit can use to directly modify configuration An example of a directive is “Packages ” which can install a list of specific packages on the instance  Shell scripts – Include Bash or PowerShell scripts directly in user data to run on instance launch There is a 16 KB raw data limit on user data which limits this option  External scripts – A user data script can pull down a larger shell script from an S3 bucket URL or any other location and run this script to further configure the instance Configuration Management Software Configuration management solutions allow continuous management of instance configuration This can automate consistency among instances and make managing changes easier Examples of such solutions include:  Chef  Puppet  Ansible  SaltStack  AWS OpsWorks By using these configuration management solutions you can build scripts and packages to secure an operating system These hardening operations can include modifying user access or file system permissions; disabling services; making firewall changes; and many other operations used to secure a system and reduce its attack surface The following example of a Chef script implements a password age policy: template '/etc/logindefs' do source 'logindefserb') mode 0444 owner 'root' group 'root' ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 25 of 39 You can design packages of configuration scripts for example Puppet modules or Chef cookbooks based on specific compliance requirements and apply them to instances that must meet those requirements Containers Containerization with applications 
such as Docker14 or Amazon EC2 Container Service (Amazon ECS)15 allows one or more applications to run independently on a single instance within an isolated user space Figure 7: Containerization From a compliance perspective containers can be prebuilt with a standardized and hardened configuration based on the operating system and application Development & Management Using a modular approach and a common structure for templates simplifies updates and enforces uniform development by those responsible for creating new use case packages We recommend using the following elements when developing and managing AWS CloudFormation template packages that are architected for compliance Outputs The Output section of a template can include custom information and can be used to retrieve the ID of generated resources when nested stacks are used It variables (password_max_age: node['auth']['pw_max_age'] password_min_age: node['auth']['pw_min_age'] ) end ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 26 of 39 can also be used to provide general information that can be viewed from the AWS CloudFormation console or from the CLI/API describestack s call The Output sections of template files should include at minimum the following reference information:  Use case/application type  Compliance type  Date created  Maintained by Parameters AWS CloudFormation parameters16 are fields that allow users to specify data to the template upon launch Use parameters whenever possible You can design an entire set of AWS CloudFormation templates for a common use case by using highly customized parameters For example most tiered web applications share a similar architecture For this type of use case you can develop a complete fourstack template package so that multiple webbased applications can easily be deployed with the same template files by the user specifying parameters for AMIs and other applicationspecific resources Conditions AWS CloudFormation allows the use of Conditions17 which must be true for resources to be created When used in combination with parameters conditions enable you to design templates that make reference architectures flexible and based on application requirements For example a condition can be used to launch an EC2based database instead of an Amazon Relational Database Service (Amazon RDS) instance based on input parameters specified by the user as shown in the following snippet: "CreateDBInstance": { ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 27 of 39 Custom Resources AWS CloudFormation allows you to create custom resources18 which can be used to integrate with external processes or thirdparty providers Custom resources can also be designed to invoke AWS Lambda functions which can provide levels of automation not available with AWS CloudFormation alone Figure 8: Custom Resources Infrastructure as Code AWS CloudFormation templates and associated scripts documents and parameter files can be managed just as any application code would be We recommend that you use version control repositories such as Git or Subversion (SVN) to track changes and allow multiple users to efficiently push updates Capabilities such as version control testing and rapid deployment are possible with AWS CloudFormation templates just as with any source code A full Continuous Integration/Continuous Deployment (CI/CD) solution can be implemented using additional tools such as Jenkins19 "Fn::Not": [ { "Fn::Equals": [ { "Ref": "DatabaseAmi" } "none" ] } ] } 
Figure 9: Example of CI/CD in AWS Using AWS CloudFormation

You can store prebuilt use case packages in either a source code repository or in an S3 bucket. This allows provisioning teams and workload owners to easily pull down the latest versions of these files.

Deployment

To ensure a secure, reliable, and efficient deployment of prebuilt template packages, you should consider implementing several operational practices, as described in the following sections.

AWS CLI

Although you can use the AWS CloudFormation console to deploy templates from a web-based interface, there are clear advantages to using the AWS CLI and other automated methods, especially if the templates require input to many parameters. The AWS CLI is automatically installed on the Amazon Linux AMI. You can use the AWS CLI to deploy automated architectures with a single command from an EC2 Linux instance. Including a parameters file simplifies inputting template parameters by eliminating the need to manually input data for each field. You can use an additional script as a wrapper to simplify the CLI command or, alternatively, directly call the AWS CloudFormation API to create the stack. Launch EC2 instances into a predefined IAM role that allows access only to the AWS CloudFormation API. To provide "least privilege" within the AWS CloudFormation service, use additional restrictions.

To launch a template from the AWS CLI:

1. Create an IAM role that allows an EC2 instance to access the AWS CloudFormation API.
2. Launch an EC2 instance into the IAM role in a VPC (preferably a shared services VPC).
3. Copy or download the template package to the EC2 instance.
4. Run the AWS CLI aws cloudformation create-stack command to launch the template stack:

aws cloudformation create-stack --stack-name myStack \
  --template-body file:///template.json \
  --parameters file:///parameters_file.json \
  --capabilities CAPABILITY_IAM

Security

The security of AWS CloudFormation template packages should always be considered, especially by customers who must adhere to strict compliance requirements. Source code repositories should be secured to allow write access only to those responsible for updating packages. In addition, user names, passwords, and access keys should never be included in user data when automating deployment of EC2 instances, because user data is unencrypted plain text.

It is critical to understand that deleting an AWS CloudFormation stack actually deletes all underlying resources, effectively destroying all data stored in EC2 instances. To mitigate the risk of accidental resource deletion, use the following safeguards.

IAM permissions20

Restrict the ability to delete AWS CloudFormation stacks to only the users, groups, and roles that require that ability. You can write IAM policies that deny users and groups to which those policies are applied the ability to delete any stack. The following is an example of an IAM policy that denies the DeleteStack and UpdateStack API calls:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": [
      "cloudformation:DeleteStack",
      "cloudformation:UpdateStack"
    ],
    "Resource": "*"
  }]
}

Deletion Policy21

Resources such as S3 buckets and EC2 and RDS instances support the AWS CloudFormation DeletionPolicy attribute. Use this attribute to require that resources be retained upon stack deletion, or that a snapshot be created (if snapshots are supported). The following is an example of a deletion policy with an S3 bucket AWS CloudFormation resource:
"Statement":[{ "Effect":"Deny" "Action":[ "cloudformation:DeleteStack" "cloudformation:Updat eStack" ] "Resource":"*” }] } "myS3Bucket" : { "Type" : "AWS::S3::Bucket" ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 31 of 39 Auditing Automating architecture deployment in AWS can help simplify the process of auditing and accrediting deployed applications Having a base configuration for components such as IAM and VPC controls ensures that workload owners are deploying architectures based on compliance standards Security personnel at the customer’s MSO can “sign off ” on reusable template packages that are based on customer security standards and compliance requirements as compliant The security accreditation and auditing process can make use of automation with the following AWS capabilities:  Tagging –AWS resources can be queried for common tags Tags can be applied at the sta ck level to all resources that support tagging  Template validation –A scripted validation of the configuration can be tested against the AWS CloudFormation template files prior to deployment  SNS notification –A nested stack in a template can be configured to send notifications about stack events to an Amazon SNS topic These Amazon SNS topics can be used to alert individuals groups or applications that a specific template has been deployed in the account  Testing deployed resources –Through the AWS API scripted tests can be conducted to validate that deployed architectures meet security requirements For example tests can be run to detect if any security group has open access to certain ports or if there is an internet gateway in a VPC that should not have one  ISV solutions –Thirdparty solutions for analyzing deployed architectures are available from AWS Partners Security control validation can also be implemented through solutions such as Telos’ Xacta risk management solution "DeletionPolicy" : "Retain" } ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 32 of 39 AWS Service Catalog Integration AWS Service Catalog allows IT administrators to create and manage approved catalogs of resources which are called products IT administrators create portfolios of one or more products which they can then distribute to AWS end users and workload owners End users can access products through a personalized portal22 Product – Products can be created to provide specific types of applications or to address specific use cases or alternatively they can be used to deploy base resources such as IAM and VPC configuration which other resources such as EC2 instances can utilize Template package deployment can be further automated and simplified by making the template package an AWS Service Catalog product Portfolios – A portfolio consists of one or more products Portfolios can include products for different types of use cases and can be organized by compliance type Permissions – End users and workload owners who are IAM users or members of IAM groups or roles can be given permission to use specific portfolios based on the level of access they need and what they need to deploy Constraints – Constraints are a granular control applied at a portfolio or product level that restrict the ways that AWS resources can be deployed Constraints can be used to allow templates to deploy all resources at a higher level of access than a workload owner has through IAM policies Tags – Tags can be used to control access to resources or for cost allocation Tags are enforced at the portfolio or product 
AWS Service Catalog Integration

AWS Service Catalog allows IT administrators to create and manage approved catalogs of resources, which are called products. IT administrators create portfolios of one or more products, which they can then distribute to AWS end users and workload owners. End users can access products through a personalized portal22.

Product – Products can be created to provide specific types of applications or to address specific use cases, or alternatively they can be used to deploy base resources, such as IAM and VPC configuration, that other resources such as EC2 instances can utilize. Template package deployment can be further automated and simplified by making the template package an AWS Service Catalog product.

Portfolios – A portfolio consists of one or more products. Portfolios can include products for different types of use cases and can be organized by compliance type.

Permissions – End users and workload owners who are IAM users, or members of IAM groups or roles, can be given permission to use specific portfolios based on the level of access they need and what they need to deploy.

Constraints – Constraints are a granular control, applied at a portfolio or product level, that restrict the ways that AWS resources can be deployed. Constraints can be used to allow templates to deploy all resources at a higher level of access than a workload owner has through IAM policies.

Tags – Tags can be used to control access to resources or for cost allocation. Tags are enforced at the portfolio or product level.

AWS Service Catalog allows sharing of portfolios that are created in a common shared services AWS account. This allows central management of, and access to, deployable reference architectures.

Central Management of AWS Service Catalog

Customers with centralized governance models can fully control and manage the AWS Service Catalog products that workload owners have access to.

Figure 10: Using AWS Service Catalog Constraints

Automating for Governance: High-Level Steps

Automating a compliant, secure, and reliable architecture that adheres to an organization's governance model involves several basic steps. This section presents a high-level overview.

Prerequisites

Before beginning to develop automated reference architectures based on compliance requirements, your organization must define the following:

• Cloud strategy and roadmap
• Governance model
• Cloud tasks, roles, and responsibilities
• VPC and account creation strategy
• Security standards and compliance requirements

Automating for compliance will often be part of a larger IT transformation initiative. Many architectural requirements relate directly to existing governance and security-related decisions.

Step 1: Define Common Use Cases

Customers must first determine the standard use cases of their workloads. Many applications deployed on AWS support a common use case. These use cases share identical or similar base architectures for VPC design, IAM configuration, and other architectural components. The following are examples of a few common use cases:

• Web applications – Web applications normally consist of multiple tiers (proxy/web application and database) for hosting web-based applications accessed by end users. These applications can be designed for scalability and elasticity when properly architected in AWS. Different VPC configurations are required depending on whether the application is intended to be internal facing or accessible from users on the public Internet.
• Enterprise applications – Enterprise applications are almost always commercial off-the-shelf (COTS) products that are used widely within an organization in critical-to-business functions. Examples include Microsoft SharePoint, Active Directory, PeopleSoft, and Oracle E-Business Suite. Often each enterprise application addresses a specific use case with an architecture that is standardized.
• Data analytics – Applications that analyze large data sets have architectures that require the deployment of common data analytics applications and use AWS big data services such as Amazon Redshift, Amazon Elastic MapReduce (Amazon EMR), Amazon Kinesis, and Amazon DynamoDB (DynamoDB).

Step 2: Create and Document Reference Architectures

A well-designed reference architecture provides clear documentation on how resources will be used within AWS. Reference architectures should be created in Visio, PowerPoint, or another platform from which they can be distributed.

Figure 11: Example Reference Architecture in PowerPoint

Step 3: Validate and Document Architecture Compliance

Accurately documenting how the reference architecture satisfies compliance requirements can reduce the amount of effort required for a workload owner to ensure that the architecture being deployed meets compliance requirements. Compliance documentation may include:
• A security controls implementation matrix (SCTM)
• A system security plan (SSP)
• A concept of operations (ConOps)

Organizations that must follow specific compliance controls should determine which resources, components, and configurations meet the requirements of each control. Including this documentation in a packaged deployment reduces the need to repeat the same compliance analysis for a proposed architecture.

Figure 12: Example of a Security Controls Implementation Matrix Provided by the Cloud Security Alliance

Step 4: Build Automated Solutions Based on Architecture

There are many ways to automate infrastructure creation with AWS services and features. Most commonly, AWS CloudFormation templates are used to automate deployment and configuration of AWS resources. Create template packages using the design guidelines provided in "Automating for Compliance" earlier in this whitepaper. When building templates, determine which configurations are common among various types of applications and use cases. Properly maintain and update templates when necessary.

Step 5: Develop an Accreditation and Approval Process

Existing processes and methods for evaluating systems against compliance requirements may not apply, or may need to be changed, for applications in the cloud. When automating compliance for an entire enterprise, involve security teams early on so they can provide input and gain a deeper understanding of how applications will be deployed in AWS. The accreditation and approval plan for automated deployments should consider all of the following:

• The compliance standards that the organization must follow
• The current approval process for applications and infrastructure
• The existing security requirements related to networking, continuous monitoring, access control, and auditing
• The current (and proposed) tools for security analysis, scanning, and monitoring
• The hardening requirements for deployed operating systems, if there are any, and the need for prehardened custom images
• The processes and methods used to validate both architecture templates and deployed configurations

Conclusion

Developing an automated solution for governance and compliance can reduce the cost, time, and effort to deploy applications in AWS while minimizing risk and simplifying architecture design. When this approach is packaged into a reusable solution, it can decrease the level of effort to produce compliance-related documentation and allow time normally spent evaluating compliant architectures to be used to drive the organization's goals and mission.

Contributors

The following individuals and organizations contributed to this document:

• Mike Dixon, Consultant, AWS Public Sector Sales
• Lou Vecchioni, Senior Consultant, AWS ProServ
• Brett Miller, Senior Consultant, AWS ProServ
• Josh Weatherly, Practice Manager, AWS ProServ
• Andrew McDermott, Senior Compliance Architect, AWS Security

Notes

1. http://www.gartner.com/it-glossary/it-governance/
2. http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf
3. http://d0.awsstatic.com/whitepapers/compliance/aws-architecture-and-security-recommendations-for-fedramp-compliance.pdf
4. http://iase.disa.mil/cloud_security/Documents/u-cloud_computing_srg_v1r1_final.pdf
5. http://aws.amazon.com/compliance/hipaa-compliance/
6. http://www.27000.org/iso-27001.htm
7. http://aws.amazon.com/compliance/pci-dss-level-1-faqs/
8. http://media.amazonwebservices.com/AWS_Security_at_Scale_Governance_in_AWS.pdf
9. http://aws.amazon.com/partners/managedservice/
10. https://media.amazonwebservices.com/AWS_Security_at_Scale_Governance_in_AWS.pdf
11. https://github.com/Netflix/aminator and https://www.packer.io/intro/index.html
12. http://aws.amazon.com/cloudtrail/partners/
13. http://aws.amazon.com/config/
14. https://www.docker.com/
15. http://aws.amazon.com/ecs/
16. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
17. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
18. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cfn-customresource.html
19. https://wiki.jenkins-ci.org/display/JENKINS/AWS+Cloudformation+Plugin
20. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html
21. http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
22. http://aws.amazon.com/servicecatalog/
General
ITIL_Asset_and_Configuration_Management_in_the_Cloud
ITIL Asset and Configuration Management in the Cloud

January 2017

This paper has been archived. For the latest technical content, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

© 2017 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Contents

Introduction
What Is ITIL?
AWS Cloud Adoption Framework
Asset and Configuration Management in the Cloud
Asset and Configuration Management and AWS CAF
Impact on Financial Management
Creating a Configuration Management Database
Managing the Configuration Lifecycle in the Cloud
Conclusion
Contributors

Abstract

Cloud initiatives require more than just the right technology. They also must be supported by organizational changes, such as people and process changes. This paper is intended for IT service management (ITSM) professionals who are supporting a hybrid cloud environment that leverages AWS. It outlines best practices for asset and configuration management, a key area in the IT Infrastructure Library (ITIL), on the AWS cloud platform.

Introduction

Leveraging the experiences of enterprise customers who have successfully integrated their cloud strategy with their IT Infrastructure Library (ITIL)-based service management practices, this paper will cover:

• Asset and Configuration Management in ITIL
• AWS Cloud Adoption Framework (AWS CAF)
• Cloud-specific Asset and Configuration Management best practices, like creating a configuration management database

What Is ITIL?
The framework, managed by AXELOS Limited, defines a commonly used best practice approach to IT service management (ITSM). Although it builds on ISO/IEC 20000, which provides a "formal and universal standard for organizations seeking to have their ITSM capabilities audited and certified,"1 ITIL goes one step further to propose the operational processes required to deliver the standard.

ITIL is composed of five volumes that describe the ITSM lifecycle, as defined by AXELOS:

Service Strategy – Understands organizational objectives and customer needs.
Service Design – Turns the service strategy into a plan for delivering the business objectives.
Service Transition – Develops and improves capabilities for introducing new services into supported environments.
Service Operation – Manages services in supported environments.
Continual Service Improvement – Achieves incremental and large-scale improvements to services.

Each volume addresses the capabilities that enterprises must have in place. Asset and Configuration Management is one of the chapters in the Service Transition volume. For more information, see the AXELOS website.2

AWS Cloud Adoption Framework

AWS CAF is used to help enterprises modernize ITSM practices so that they can take advantage of the agility, security, and cost benefits afforded by public or hybrid clouds. ITIL and AWS CAF are compatible. Like ITIL, AWS CAF organizes and describes all of the activities and processes involved in planning, creating, managing, and supporting modern IT services. It offers practical guidance and comprehensive guidelines for establishing, developing, and running cloud-based IT capabilities. AWS CAF is built on seven perspectives:

People – Selecting and training IT personnel with appropriate skills, and defining and empowering delivery teams with accountabilities and service level agreements.
Process – Managing programs and projects to be on time, on target, and within budget, while keeping risks at acceptable levels.
Security – Applying a comprehensive and rigorous method for describing the structure and behavior of an organization's security processes, systems, and personnel.
Business – Identifying, analyzing, and measuring the effectiveness of IT investments.
Maturity – Analyzing, defining, and anticipating demand for, and acceptance of, planned IT capabilities and services.
Platform – Defining and describing core architectural principles, standards, and patterns that are required for optimal IT capabilities and services.
Operations – Transitioning, operating, and optimizing the hybrid IT environment, enabling efficient and automated IT service management.

AWS CAF is an important supplement to the enterprise ITSM frameworks used today because it provides enterprises with practical operational advice for implementing and operating ITSM in a cloud-based IT infrastructure. For more information, see AWS Cloud Adoption Framework.3

Asset and Configuration Management in the Cloud

In practice, asset and configuration management aligns very closely with other ITIL processes such as incident management, change management, problem management, and service-level management. ITIL defines an asset as "any resource or capability that could contribute to the delivery of a service." Examples of assets include:

• Virtual or physical storage
• Virtual or physical servers
• A software license
• Undocumented information known to internal team members

ITIL defines configuration items as
“an asset that needs to be managed in order to deliver an IT service” All configuration items are assets but many assets are not configuration items Examples of configuration items include a virtual or physical server or a software license Every configuration item should be under the control of change management The goals of asset and configuration management are to:  Support ITIL processes by providing accurate configuration information to assist decision making (for example the authorization of changes the planning of releases) and to help resolve incidents and problems faster  Minimize the number of quality and compliance issues caused by incorrect or inaccurate configuration of services and assets  Define and control the components of services and infrastructure and maintain accurate configuration information on the historical planned and current state of the services and infrastructure The value to business is: ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 4  Optimization of the performance of assets improves the performance of the service overall For example i t mitigates risks caused by service outages and failed licensing audits  Asset and configuration management provides an accurate representation of a service release or environment which enables: o Better planning of changes and releases o Improved incident and problem resolution o Meeting service levels and warranties o Better adherence to standards and legal and regulatory obligations (fewer nonconformances) o Traceable changes o The ability to identify the costs for a service The following diagram from AXELOS shows there are elements in asset and configuration management that directly relate to elements in change management Asset and configuration management underpins change management Without it the business is subject to increased risk and uncertainty Figure 1: Asset and configuration management in ITIL ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 5 Asset and Configuration Management and AWS CAF As with most specifications covered in the Service Transition volume of ITIL asset and configuration management falls into the Cloud Service Management function of the AWS CAF Operations perspective People and process changes should be supported by a cloud governance forum or Center of Excellence whose role is to use AWS CAF to manage through the transition From the perspective of ITSM your operations should certainly have a seat at the table As shown in Figure 2 AWS CAF accounts for the management of assets and configuration items in a hybrid environment Information can come from the onpremises environment or any number of cloud providers (private or public) Figure 2: AWS CAF integration Impact on Financial Management One of the most important aspects of asset management is to ensure data is available for these financial management processes:  Capitalization and depreciation  Software license management ArchivedAmazon Web Services – ITIL Asset and Configuration Management in the Cloud Page 6  Compliance requirements These activities typically require comprehensive asset lifecycle management processes which take significant cost and effort One of the benefits of moving IT to the cloud is that the financial nature of the transaction moves from a capital expenditure (CAPEX ) to an operating expenditure (OPEX ) You can do away with the large capital outlays (for example a server refresh) that require months of planning as well as amortization 
and depreciation.

Creating a Configuration Management Database

A configuration management database (CMDB) is used by IT to track and manage its resources. The CMDB presents a logical model of the enterprise infrastructure to give IT more control over the environment and facilitate decision-making. At a minimum, a CMDB contains the following:

• Configuration item (CI) records, with all associated attributes captured
• A relationship model between different CIs
• A history of all service impacts in the form of incidents, changes, and problems

In a traditional IT setup, the goals of establishing a CMDB are met through:

• Discovery tools used to create a record of existing CIs
• Comprehensive change management processes to keep track of the creation of, and updates to, CIs
• Integration of incident and problem management data with impacted CIs, using ITSM workflow tools like BMC, Hewlett-Packard, or ServiceNow

These processes and tools in turn help organizations better understand the IT environment by providing insight into not only the impact of incidents, problems, and changes, but also financial resources, service availability, and capacity management.

There are some challenges to creating a CMDB for cloud resources due to:

• The inherent dynamic nature of cloud resource provisioning, where resources can be created or terminated through predefined business policies or application architecture elements like auto scaling
• The difficulty of capturing cloud resource data in a format that can be imported and maintained in a single system of record for all enterprise CIs
• A prevalence of shadow IT organizations that makes information sharing, and even manual consolidation of enterprise IT assets and CIs, difficult

Configuration Management Inventory for Cloud Resources

There are two logical approaches AWS customers can take to create a CMDB for cloud resources:

Figure 3: Options for Enterprise CMDB Systems

AWS Config helps customers manage their CIs in the cloud. AWS Config provides a detailed view of the configuration of AWS resources in an AWS account. With AWS Config, customers can do the following:

• Get a snapshot of all the supported resources associated with an AWS account at any point in time
• Retrieve the configurations of the resources
• Retrieve historical configurations of the resources
• Receive a notification whenever a resource is created, modified, or deleted
• View relationships between resources

This information is important to any IT organization for CI discovery and recording, change tracking, audit and compliance, and security incident analysis. Customers can access this information from the AWS Config console or programmatically extract it into their CMDBs (a minimal sketch follows). As an example of the potential for integration with legacy systems, ServiceNow, the platform-as-a-service (PaaS) provider of enterprise service management software, is now integrated with AWS Config. This means ServiceNow users can leverage Option 1 shown in Figure 3.
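The following is a minimal sketch of that programmatic access using the AWS CLI; the resource type and instance ID are placeholders, and it assumes AWS Config is already recording resources in the account:

# List the EC2 instances that AWS Config has discovered in the account.
aws configservice list-discovered-resources --resource-type AWS::EC2::Instance

# Retrieve the configuration history for one instance; the output can be
# transformed and loaded into an external CMDB as CI records.
aws configservice get-resource-config-history \
  --resource-type AWS::EC2::Instance \
  --resource-id i-1234567890abcdef0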
Managing the Configuration Lifecycle in the Cloud

One of the goals of service asset and configuration management is to manage the CI lifecycle and to track and record all changes. One of the key aspects of the cloud is a much tighter integration of the software and infrastructure configuration lifecycles. This section covers aspects of configuration lifecycle management across instances, stacks, and applications:

• Instance creation templates: Every IT organization has security and compliance standards for instances introduced into its IT environments. Amazon Machine Images (AMIs) are a robust way of standardizing instance creation. Users can opt for AWS- or third-party-provided predefined AMIs, or define custom AMIs. If you create AMI templates for instance provisioning, you can define instance configuration and environmental add-ins in a predefined and programmatic manner. A typical custom AMI might prescribe the base OS version and associated security, monitoring, and configuration management agents.

• Instance lifecycle management: For every instance or resource created in an IT environment, there are multiple lifecycle management activities that must be performed. Some of the standard tasks are patch management, hardening policies, version upgrades, environment variable changes, and so on. These activities can be performed manually, but most IT organizations use robust configuration management tools like Chef, Puppet, and System Center Configuration Manager to perform these tasks. AWS allows easy integration with these tools to ensure a consistent enterprise configuration management approach.

• Environment provisioning templates: AWS CloudFormation is useful for provisioning end-to-end environments (also referred to as stacks) in a consistent and repeatable fashion, without actually provisioning each component individually. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work; AWS CloudFormation takes care of this for you. You can use a template to create identical copies of the same stack without effort or errors. Templates are simple JSON-formatted text files that can be held securely, leveraging your current source control mechanisms.

• Application configuration and lifecycle management: In today's world of agile development, development teams leverage continuous integration and continuous delivery best practices. AWS provides seamless integration with tools like Jenkins (CI) and GitHub for code management and deployment. Services like AWS CodePipeline, AWS CodeDeploy, and AWS CodeCommit can be used to manage the application lifecycle.

Conclusion

Service asset and configuration management processes consist of critical activities for the provisioning and maintenance of the health of IT systems. Consistent management of configuration items through their lifecycle leads to efficient and effective system health and performance. AWS enables best practices across every level of resource in an application stack. With the tools, automations, and integration available on the AWS platform, IT organizations can achieve significant productivity gains. Successful implementation and execution of service asset and configuration management processes should be seen as a shared responsibility that can be achieved through the right commitment by IT organizations, enabled by the AWS platform.

Contributors

The following individuals contributed to this document:

• Darren Thayre, Transformation Consultant, AWS Professional Services
• Anindo Sengupta, Chief Operating Officer, Minjar Cloud Solutions

Notes

1. ITIL Service Operation Publication, AXELOS, 2007, page 5
2. https://www.axelos.com/best-practice-solutions/itil/what-is-itil
3. http://aws.amazon.com/professionalservices/CAF/
General