VIEW POINT
BEST PRACTICES TO ENSURE SEAMLESS
CYBER SECURITY TESTING
Abstract
In a post COVID-19 world, the need to become digitally-enabled is more
pressing than ever before. Enterprises are accelerating digital strategies and
omni-channel transformation projects. But while they expand their digital
footprint to serve customers and gain competitive advantage, the number
and extent of exposure to external threats also increases exponentially.
This is due to the many moving parts in the technology stack such as cloud,
big data, legacy modernization, and microservices. This paper looks at the
security vulnerabilities in open systems interconnection (OSI) layers and
explains the best practices for embedding cyber security testing seamlessly
into organizations.
Introduction
The open systems interconnection (OSI) model comprises many layers, each with its own services and protocols. These can be exploited by hackers and
attackers to compromise the system through different types of attacks.
OSI Layers, Services/Protocols, and Types of Attacks
• Application Layer. Services/protocols: file transfer protocol, simple mail transfer protocol, Domain Name System. Attacks: SQL injection, cross-site scripting software attack (persistent and non-persistent), cross-site request forgery, cookie poisoning
• Presentation Layer. Services/protocols: data representation, encryption and decryption. Attacks: SSL attacks, HTTP tunnel attacks
• Session Layer. Services/protocols: establishing session communications. Attacks: session hijacking, sequence prediction attack, authentication attack
• Transport Layer. Services/protocols: TCP, UDP, SSL, TLS. Attacks: port scanning, ping flood and Distributed Denial-of-Service (DDoS) attack
• Network Layer. Services/protocols: networks, IP address, ICMP protocol, IPsec protocol, OSPF protocol. Attacks: external attacks such as packet sniffing, Internet Control Message Protocol (ICMP) flood attack
• Data Link Layer. Services/protocols: Ethernet, 802.11 protocol, LANs, fiber optic, frame protocol. Attacks: Denial of Service (DoS) attack at Dynamic Host Configuration Protocol (DHCP), MAC address spoofing, etc.; these are primarily internal attacks
• Physical Layer. Services/protocols: transmission media, bit stream (signal) and binary transmission. Attacks: data theft, hardware theft, physical destruction, unauthorized access to hardware/connections, etc.
Fig 1: Points of vulnerability across OSI layers
For some OSI layers like Transport, Session, Presentation, and Application, some amount of exposure can be controlled using robust application-level security practices and cyber security testing. From a quality engineering perspective, it is important for testers to be involved in the digital security landscape.
While there is no single approach to handle cyber security testing, the following five best practices can ensure application security by embedding cyber security testing seamlessly into organizations:
1. Defining and executing a digital tester's role in the DevSecOps model
2. Understanding and implementing data security testing practices in non-production environments
3. Security in motion – Focusing on dynamic application security testing
4. Understanding the vulnerabilities in infrastructure security testing
5. Understanding roles and responsibilities for cloud security testing
External Document © 2021 Infosys Limited
Best practice 1: Defining and executing a digital tester's role in the DevSecOps model
DevSecOps means dealing with security aspects as code (security as code). It enables two aspects, namely, 'secure code' delivered 'at speed'. Here is how security-as-code works:
• Code is delivered in small chunks. Possible changes are submitted in advance to identify vulnerabilities
• The application security team triggers scheduled scans in the build environment. Code checkout happens from SVN or Git (version control systems)
• Code is automatically pushed for scanning after applying UI and server-based pre-scan filters. Code is scanned for vulnerabilities
• Results are pushed to the software security center database for verification
• If there are no vulnerabilities, the code is pushed to quality assurance (QA) and production stages. If vulnerabilities are found, these are backlogged for resolution
DevSecOps can be integrated to perform security tests on networks, digital applications, and identity access management portals. The tests focus on how to break into the system and expose vulnerable areas.
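The promote-or-backlog gate at the end of the security-as-code flow can be sketched as a simple pipeline step. This is a minimal illustration only: the Finding type and gate_build function are hypothetical, not any specific scanner's API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One vulnerability reported by the scanner (hypothetical structure)."""
    rule: str
    severity: str  # e.g. "low", "medium", "high", "critical"

def gate_build(findings, blocking=("high", "critical")):
    """Decide whether scanned code moves to QA/production or is backlogged.

    Returns ("promote", []) when no blocking vulnerabilities are found,
    or ("backlog", blocked) listing the findings that must be resolved.
    """
    blocked = [f for f in findings if f.severity in blocking]
    if blocked:
        return "backlog", blocked
    return "promote", []

# Clean scan: the code is pushed onward to QA and production stages.
decision, items = gate_build([Finding("weak-hash", "low")])
print(decision)  # promote

# Blocking finding: the change is backlogged for resolution instead.
decision, items = gate_build([Finding("sql-injection", "critical")])
print(decision)  # backlog
```

In practice, the blocking severities would come from the organization's security policy rather than a hard-coded default.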
Best practice 2: Understanding and implementing data security testing practices in non-production environments
With the advent of DevOps and digital
transformation, there is a tremendous
pressure to provision data quickly to
meet development and QA needs. While
provisioning data across development
pipelines is one challenge, another is to
ensure security and privacy of data in the
non-production environment. There are
several techniques to do this as discussed
below:
• Dynamic data masking, i.e., masking
data on the fly and tying database
security directly to the data using tools
that have database permissions
• Deterministic masking, i.e., using
algorithm-based data masking of
sensitive fields to ensure referential
integrity across systems and databases
• Synthetically generating test data
without relying on the production
footprint by ensuring referential
integrity across systems and creating a
self-service database
• Automatic clean-up of the sample data,
sample accounts and sample customers
created
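Deterministic masking, for example, can be illustrated with a keyed hash: the same input always maps to the same masked value, so referential integrity holds across systems. This is a minimal sketch assuming a key managed outside the test environment; real TDM tools add format preservation and key management.

```python
import hashlib
import hmac

MASK_KEY = b"non-production-secret"  # assumption: key kept out of test environments

def mask_deterministic(value: str) -> str:
    """Mask a sensitive field with a keyed hash (HMAC-SHA256).

    The same input always yields the same output, preserving referential
    integrity across systems and databases, while the original value
    cannot be recovered without the key.
    """
    digest = hmac.new(MASK_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]  # truncate to a column-friendly width

# The same customer ID masks identically wherever it appears...
assert mask_deterministic("CUST-1001") == mask_deterministic("CUST-1001")
# ...while different IDs remain distinct.
assert mask_deterministic("CUST-1001") != mask_deterministic("CUST-1002")
```

A plain unkeyed hash would also be deterministic, but the HMAC key prevents anyone from re-identifying values by hashing guesses.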
Best practice 3: Security in
motion – Focus on dynamic
application security testing
This test is performed while the application
is in use. Its objective is to mimic hackers
and break into the system. The focus is to:
• Identify abuse scenarios by mapping security policies to application flows based on the Open Web Application Security Project (OWASP) top 10 security vulnerabilities
• Conduct threat modeling by decomposing applications, identifying threats, and categorizing/rating threats
• Perform a combination of automated testing and black-box security/penetration testing to identify vulnerabilities
Best practice 4: Understanding the vulnerabilities in infrastructure security testing
There are infrastructure-level vulnerabilities that cannot be identified with UI testing. Hence, infrastructure-level exploits are created and executed, and reports are published. The following steps give insights to the operations team to minimize/eliminate vulnerabilities at the infrastructure layer:
• Reconnaissance and network vulnerability assessment, including host fingerprinting, port scanning, and network mapping tools
• Identification of services and OS details on hosts such as Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP)
• Manual scans using a scripting engine and tool-based automated scans
• Configuration reviews for firewalls, routers, etc.
• Removal of false positives and validation of reported vulnerabilities
Best practice 5: Understanding roles and responsibilities for cloud security testing
With cloud transformation, cloud security is a shared responsibility. Cloud security testing must involve the following steps:
• Define a security validation strategy based on the type of cloud service model:
  • For Software-as-a-Service (SaaS), the focus should be on risk-based security testing and security audits/compliance
  • For Platform-as-a-Service (PaaS), the focus should be on database security and web/mobile/API penetration testing
  • For Infrastructure-as-a-Service (IaaS), the focus should be on infrastructure and network vulnerability assessment
• Conduct Cloud Service Provider (CSP) service integration and cyber security testing. The focus is on identifying system vulnerabilities, CSP account hijacking, malicious insiders, identity/access management portal vulnerabilities, insecure APIs, shared technology vulnerabilities, advanced persistent threats, and data breaches
• Review the CSP's audit and perform compliance checks
These best practices can help enterprises build and create secure applications right from the design stage.
Infosys has a dedicated Cyber Security Testing Practice that provides trusted application development and maintenance frameworks, security testing automation, security test planning, and consulting for emerging areas. It aims to integrate security into the code development lifecycle through test automation with immediate feedback to development and operations teams on security vulnerabilities. Our approach leverages several open-source and commercial tools for security testing instrumentation and automation.
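The reconnaissance steps in best practice 4 (port scanning and service identification) can be sketched with the standard library alone. This is a simplified illustration for hosts you are authorized to test; real assessments use dedicated network mapping and fingerprinting tools.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept TCP connections on `host`.

    A TCP connect scan: a completed handshake means the port is open.
    Only run this against systems you are authorized to test.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check common service ports on a lab host (hypothetical address).
# scan_ports("192.0.2.10", [22, 53, 80, 443])
```

Results from such a scan feed the later steps: validating reported vulnerabilities and removing false positives.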
Conclusion
The goal of cyber security testing is to
anticipate and withstand attacks and
recover quickly from security events. In
the current pandemic scenario, it should
also help companies adapt to short-term
change. Infosys recommends the use
of best practices for integrating cyber
security testing seamlessly. These include
building secure applications, ensuring
proper privacy controls for data at rest
and in motion, conducting automated
penetration testing, and having clear
security responsibilities identified with
cloud service providers.
About the Authors
Arun Kumar Mishra
Senior Practice Engagement Manager, Infosys
Sundaresasubramanian Gomathi Vallabhan
Practice Engagement Manager, Infosys
References
1. https://www.marketsandmarkets.com/Market-Reports/security-testing-market-150407261.html
2. https://www.infosys.com/services/validation-solutions/service-offerings/security-testing-validation-services.html
For more information, contact askus@infosys.com
© 2021 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys
acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this
documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the
prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document.
Infosys.com | NYSE: INFY
Stay Connected
WHITE PAPER
SOLVING THE TEST DATA
CHALLENGE TO ACCELERATE DIGITAL
TRANSFORMATION
Abstract
Organizations are increasingly adapting to the need to deliver products and
services faster while continuously responding to market changes.
In the age of mobile apps, test automation is not new. But traditional test
data management (TDM) approaches are unable to help app development
teams address modern delivery challenges. Companies are increasingly
struggling to keep up with the pace of development, maintain quality of
delivery, and minimize the risk of a data breach.
This white paper illustrates the need for a smart, next-gen TDM solution to
accelerate digital transformation by applying best practices in TDM, zero-trust architecture, and best-in-class test data generation capabilities.
Table of Contents
Traditional Test Data Management
Why the New Normal Was Not Enough
Five Key Drivers and Best Practices in Test Data Management
Future-proof Test Data through Next-gen TDM Innovation
Accelerate through Next-gen TDM Reference Architecture
The Way Forward - Building Evolutionary Test Data for your Enterprise
About the Authors
Table of Figures
Figure 1. Key focus areas emerging in test data management
Figure 2. Key drivers and best practices in TDM
Figure 3. Zero trust architecture
Figure 4. Stakeholder experience
Figure 5. Focus areas of Infosys Next-Gen TDM
Figure 6. Infosys Next-Gen TDM reference architecture
Figure 7. Contextual test data and its different formats
External Document © 2021 Infosys Limited
Traditional Test Data Management
Test data management (TDM) should ensure that test data is of the highest possible quality and available to users. In the digital age,
managing test data using traditional TDM practices is challenging due to its inability to accelerate cloud adoption, protect customer data,
provide reliable data, avoid data graveyards, ensure data consistency, and automate and provision test data.
Why the New Normal Was Not Enough
While the ‘new normal’ has become a catchword in 2021, in the world of testing, this ‘normal’ was not effective for many organizations. The
pressure to adapt to changing customer expectations, new technology trends, changing regulatory norms, increased cybersecurity threats,
and scarcity of niche skills has raised many challenges for organizations. In light of this, many are wondering whether they should revisit their
test data strategy.
Figure 1. Key focus areas emerging in test data management (the figure places apps, employees, partners, customers, and community at the center of changing customer expectations, new business models such as multi-cloud environments, headwinds from new tech trends, data privacy and security, regulatory changes, cyber security threats, hyper-productivity through agile, the new digital workplace, and skill scarcity)
As time-to-market for products and
services becomes critical, test data
generation and provisioning emerge as
bottlenecks to efficiency. Further, test data
management has been represented as
the weak link for organizations looking to
accelerate digital transformation through
continuous integration and delivery. High
quality test data is a prerequisite to train
machine learning (ML) models for accurate
business insights and outcome predictions.
To build a competitive difference,
organizations today are investing in three key
focus areas in test data management (refer
Figure 1):
• New business models – With a strong focus
on customer experience, organizations
must adopt new business models and
accelerate innovation. There is a need to
generate data that can be controlled and
is realistic as well as accurate to meet real-world production needs.
• Hyper-productivity – Automation
and iterative agile processes push the
need for better testing experiences
with faster and more efficient data
provisioning, allowing organizations to
do more with less.
• New digital workplace – Millions of
employees are working from home.
Organizations must focus on building
a secure, new-age digital workplace to
support remote working.
Five Key Drivers and Best
Practices in Test Data
Management
Companies are increasingly struggling to
keep up with the pace of development,
maintain quality of delivery, and achieve
absolute data privacy. On-demand
synthetic test data is a clear alternative to
the traditional approach of sub-setting,
masking, and reserving production data
for key business analytics and testing. In
this context, three key questions to ask are:
1. What are the drivers and best practices to be considered while building a test data strategy?
2. How can CIOs decide what is the right direction for their test data strategy?
3. What are the trade-offs in test data management?
There are five elements – cost, quality, security and privacy, tester experience, and data for AI – that drive a successful test data management strategy. Understanding the best practices around these will guide CIOs in making the right decision.
Figure 2. Key drivers and best practices in TDM (the figure highlights next-gen TDM accelerators: self-serviced data provisioning, adapting for agile and DevOps, and increased test data automation)
Key Drivers, Impact on Test Data Strategy, and Best Practices
1. Cost
What is the return on investment (ROI) and acceptable investment to create, manage, process, and, most importantly, dispose of test data?
Production data must be collected, processed, retained, and disposed of. The processing and storage cost must offset the investment in TDM products. Procurement, customization, and support costs need to be considered.
• Test data as a service – Test data on the cloud with a subscription for testers can lower the cost of provisioning full-scale TDM.
• A TDM suite can help build a subset of data designed with realistic and referentially intact test data from across distributed data sources with minimal cost and administrative effort.
2. Quality
Do we have the right quality of data? Can we get complete control over the data? Can we generate test data in any format?
Testers have very limited control over the data provided by production. The test data is usually a subset of data from production and cannot cater to all the use cases, including negative and other edge use cases. Further, there is a need to generate electronic data interchange (EDI) files, images, and even audio files for some of the use cases.
• Synthetic data generators should have the breadth to cover key data types and file formats along with the ability to generate high-quality data sets, whether structured or unstructured, across images, audio files, and file formats.
3. Security and privacy
Do we have the right data privacy controls while accessing data for testing? How do we handle a data privacy breach?
The focus on privacy and security of the data used for testing is increasing. Complying with GDPR and ensuring the right data privacy controls is a catalyst for organizations to move away from using direct production data for testing purposes. There is increased adoption of masking, sub-setting, and synthetic data generation to avoid critical data breaches when using sensitive customer, partner, or employee data.
• Zero trust architecture provides a data-first approach, which is secure by design for each workload and identity-aware for every persona in the test management process, including testers, developers, release managers, and data analysts.
• To ensure security of sensitive information, organizations can create realistic data in non-production environments without exposing sensitive data to unauthorized users. Enterprises can leverage contextual data masking techniques to anonymize key data elements across an enterprise.
Figure 3. Zero trust architecture (the figure shows production data flowing through discovery, analysis, data masking, data generation, copy/sub-setting, validation, export and refresh, and virtualization into non-production gold copies, with self-service provisioning, sensitive data discovery, monitoring, and differential privacy serving the tester, developer, release manager, and data scientist/analyst personas)
4. Tester experience
Are we building the right experience for the tester? Is it easy for testers to get the data they need for their tests?
Customers struggle to meet the agile development and testing demands of iterative cycles. Testers are often forced to manually modify the production data into usable values for their tests. Teams struggle to effectively standardize and sub-set the production data that has been masked and moved to test data.
• Test data automation puts the focus on tester experience by enabling a streamlined and consistent process with automated workflows of self-service test data provisioning.
• Test data virtualization allows applications to automatically deliver virtual copies of production data for non-production use cases. It also reduces the storage space required.
Figure 4. Stakeholder experience (the figure captures typical stakeholder concerns: developers unable to get the right data for development; testers needing test data without sensitive data to finish testing; scrum masters unable to provision the right data while meeting critical business requirements; IT heads unable to provision and create gold copies for testing; business sponsors wanting zero defects and no data security breaches in product development; and customers prepared to move IT delivery to another consulting firm if data privacy and security are not handled)
5. Data for AI
Do we understand insights generated by the data?
The probabilistic nature of AI makes it very complex to generate test data for training AI models.
• Adopt mechanisms for data discovery, exploration, and due diligence. Data resides in different formats across systems. Enterprises must identify patterns across multiple systems and file formats and provide a correct depiction of the data types, locations, and compliance rules according to industry-specific regulations. They should also focus on identifying patterns, defects, sub-optimal performance, and underlying risks in the data.
• For data augmentation, analysts and data scientists can be provided with datasets for analysis. The datasets must be resistant to reconstruction through differential privacy for effective data privacy protection.
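Resistance to reconstruction through differential privacy can be illustrated with the classic Laplace mechanism, which adds calibrated noise to an aggregate statistic before sharing it with analysts. This is a minimal sketch; the sensitivity and epsilon values shown are illustrative, not a recommendation.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a differentially private version of an aggregate statistic.

    Laplace noise with scale sensitivity/epsilon hides any single record's
    contribution: a smaller epsilon gives stronger privacy but noisier
    answers. A Laplace sample equals the difference of two independent
    exponential samples, which lets us use the standard library directly.
    """
    scale = sensitivity / epsilon
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_value + noise

# A count query over a test dataset: one person changes the count by at
# most 1, so the sensitivity is 1. Analysts only ever see the noisy count.
noisy_count = laplace_mechanism(true_value=1042, sensitivity=1.0, epsilon=0.5)
print(round(noisy_count))
```

Because the noise is random, repeated queries against the same data return different answers, which is what prevents linkage attacks from pinning down any individual record.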
Future-proof Test Data through Next-gen TDM Innovation
Every organization needs simplified testing models that can support a diverse set of data types. This has never been a higher priority.
Infosys Next-Gen TDM supports digital transformation by focusing on 9 key areas of innovation (see Figure 5). The offering leverages the
latest advances from data science in test data management, giving enterprises the right tools to engineer appropriate test data.
Figure 5. Focus areas of Infosys Next-Gen TDM (the figure arranges nine innovation areas across the explore, discover and plan, enrich, and manage stages of model development, training, and testing: 1. tester UX; 2. AI-driven data discovery; 3. data virtualization; 4. data provisioning; 5. privacy-preserving synthetic data; 6. smart augmentation of training data sets; 7. image and audio; 8. special formats (EDI, SWIFT); 9. intelligent automation)
1. Tester user experience – Testers
need to assess business and technical
requirements from the perspective of
testability as well as end users. Infosys
Next-Gen TDM provides a framework
that includes testers and gives them a
360-degree view of the TDM process.
2. AI-driven data discovery – Modern test
data resides on a tower of abstractions,
patterns, test data sources, and privacy
dependencies. One of the key features
of Infosys Next-Gen TDM is smart data
discovery of structured and unstructured
data using AI. This helps uncover:
• Sensitive data (PII/PHI/SPI) to avoid data privacy breaches
• Data lineages to build the right contextual data while maintaining referential integrity across child and parent tables
3. Data virtualization – This is needed for
organizations to access heterogeneous
data sources. Infosys Next-Gen TDM
provides a lightweight query engine that
enables testers to mine lightweight copies
that are protected.
4. Data provisioning – There are numerous
challenges faced by testing teams in
getting access to the right data. Large
enterprises need approvals to access data
from businesses and app owners. Infosys
Next-Gen TDM provides an automated
workflow for intelligent data provisioning.
With this, testers can request data and
manage entitlements as well as approvals
through a simplified UX.
5. Privacy-preserving synthetic data – It
is important to protect personal data
residing in the data sources being
curated for test data. There is always a
risk of personal data being compromised
when there is a large amount of training
or testing data involved. It can result
in giving too much access to sensitive
information. Improper disclosure of such
data can have adverse consequences
for a data subject’s private information.
It may put the data subject at greater risk
of stalking and harassment. Cybercriminals
can also use the data subject's bank details
or credit card details to degrade the subject's
credit rating. Privacy-preserving synthetic
data focuses on ensuring that the data
is not compromised while maximizing
the utility of the data. Differential privacy
prevents linkage attacks, which cause
records to be re-identified even after
being anonymized for testing.
6. Smart augmentation of contextual
datasets – Dynamic data can change
its state during an application testing
process. To generate dynamic data, the
tester should be able to input the business
rules and build both positive and negative
test cases. Infosys Next-Gen TDM provides
a configurable rules engine that generates
test data dynamically and validates this
against changing business rules.
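Such a rules engine can be sketched as a small rule table that emits both positive and negative cases from tester-supplied business rules. This is a hypothetical illustration of the idea, not the actual product interface.

```python
def generate_cases(rules):
    """Generate positive and negative test values from simple field rules.

    Each rule maps a field name to inclusive (lo, hi) bounds. Positive
    cases sit on the boundary of the valid range; negative cases sit
    just outside it, covering the edge use cases production data misses.
    """
    cases = []
    for field, (lo, hi) in rules.items():
        cases.append({"field": field, "value": lo, "expect": "accept"})
        cases.append({"field": field, "value": hi, "expect": "accept"})
        cases.append({"field": field, "value": lo - 1, "expect": "reject"})
        cases.append({"field": field, "value": hi + 1, "expect": "reject"})
    return cases

# Illustrative business rule: a loan term must be between 6 and 360 months.
for case in generate_cases({"loan_term_months": (6, 360)}):
    print(case)
```

When a business rule changes, only the rule table is edited; the generated cases follow automatically, which is the point of validating data dynamically against changing rules.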
7. Image and audio file generation – Infosys
Next-Gen TDM can create audio files and
image datasets for AR/VR testing using
deep learning capabilities.
8. Special file formats – Customers need
access to special communication formats
such as JSON, XML, and SWIFT, or specific
ones such as EDI files. Infosys Next-Gen
TDM provides templates for generating
various file formats.
9. Intelligent automation – Built-in
connectors for scheduling the processes
of data discovery, protection, and data
generation allows testers to model,
design, generate, and manage their own
test datasets. These connectors include
plug-ins to the CI/CD pipeline, which
integrate data automation and test
automation.
Accelerate through Next-Gen TDM Reference Architecture
As organizations look to deliver high-quality applications at minimum cost, they need a test data management (TDM) strategy that supports
both waterfall and agile delivery models. With the rapid adoption of DevOps and increased focus on automation, there is also increasing
demand for data privacy. Enterprises are fast moving from traditional TDM to modern TDM in order to meet the needs of the current
development and testing landscape.
Infosys Next-Gen TDM focuses on increasing automation and improving the security of test data across cloud as well as on-premises data
sources.
Figure 6. Infosys Next-Gen TDM reference architecture (the figure shows production and non-production data sources, on premise and on cloud, including cloud apps, files, and logs, feeding a self-service portal with automated workflows and database refresh; Next-Gen TDM capabilities include data discovery, data reservation, data generation, data masking, data provisioning, data sub-setting, gold copy creation, data mining, differential privacy, data virtualization, and data quality, integrated with the CI/CD pipeline and commercial testing tools for unit, functional, integration, regression, and performance testing, serving the tester, developer, release manager, and data scientist personas)
The focus areas in digital transformation through this approach are:
1. User experience – Infosys Next-Gen
TDM focuses on building specific data
experiences for each persona, i.e., tester,
release manager, developer, and data
scientist. Its self-service capabilities
offer simplified intent-driven design
for better data provisioning and
generation.
2. Contextual test data generation –
There is a library of algorithms that
helps teams generate different data
types and formats including images,
EDI files, and other unstructured data.
3. Data protection for multiple data
sources – Infosys Next-Gen TDM
connects to multiple data sources on
cloud and on-premises. It provides a
framework of reusable components for
gold copy creation and sub-set gold
copy. Data is masked and protected
through a library of algorithms for
various data types.
4. Data augmentation – The accuracy of
AI and ML algorithms depends on the
quality of training data and the scale
of data used. The larger the volume
and more diverse the training data
used, the more accurate and robust
the model will be. Infosys Next-Gen
TDM generates high volumes of data
based on a predefined data model, data
attributes, and patterns of data variation
for training, validating, and testing AI/ML algorithms.
5. Integration through external tools – To
enable full-fledged DevSecOps, Infosys
Next-Gen TDM has a library of adaptors
that connect to the various orchestration
tools in the automation pipeline.
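Referentially intact generation, as in the data generation and protection focus areas above, can be sketched by generating parent rows first and deriving child rows from their keys. This is a toy illustration with hypothetical customer and order tables, not the product's generation engine.

```python
import random

def generate_customers(n):
    """Parent table: synthetic customers with stable primary keys."""
    return [{"customer_id": i, "segment": random.choice(["retail", "smb", "corp"])}
            for i in range(1, n + 1)]

def generate_orders(customers, max_orders=3):
    """Child table: every order references an existing customer_id,
    so referential integrity holds by construction."""
    orders, next_id = [], 1
    for cust in customers:
        for _ in range(random.randint(0, max_orders)):
            orders.append({"order_id": next_id,
                           "customer_id": cust["customer_id"]})
            next_id += 1
    return orders

customers = generate_customers(5)
orders = generate_orders(customers)
# Every foreign key resolves to a parent row; no production data involved.
parent_ids = {c["customer_id"] for c in customers}
assert all(o["customer_id"] in parent_ids for o in orders)
```

Generating parents before children is what lets synthetic data avoid the production footprint entirely while still looking realistic to downstream systems.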
Figure 7. Contextual test data and its different formats (the figure maps structured data provided for analytics with data protection; unstructured data such as logs and chat transcripts handled through generalization and data perturbation; images provided for UX testing and AR/VR kits; pre-set files and data generation of files; and communication formats such as XML, JSON, and SWIFT, all backed by differential privacy and resistance to reconstruction)
The Way Forward: Building
Evolutionary Test Data for
Your Enterprise
Production and synthetic test data can
coexist in a testing environment, either
to optimize their role in various testing
operations or as part of a transition from
one to the other. This may require the
organization to think differently about
test data and develop a roadmap for
long-term continuous testing. To solve
test data challenges, enterprises should
focus on using evolutionary architecture
to build contextual test data using a
three-pronged strategy:
• AI-assisted data prep: Fitness functions – Focus on identifying the key dimensions of data that need to be generated for testing. Enhance feature engineering across multi-role teams to build the key fitness functions and models for data generation across each data domain and data type.
• Focus on incremental change – Help data architects focus on incremental change by defining each stage of test data management based on the tester's experience. This will enable testers to selectively pick the right data for different deployment pipelines running on different schedules. Partitioning test data around operational goals allows testers to track the health and operational metrics of the test data.
• Immutable test data suite – Focus on building an immutable test data environment with best-of-breed tools and in-house innovation to ensure the right tool choice for test data generation. This helps enterprises choose the tools best suited to their need, thereby optimizing total cost of ownership (TCO).
About the Authors
Avin Sharma
Consultant at Infosys Center for Emerging Technology Solutions (ICETS)
He is currently part of the product team of Infosys Enterprise Data Privacy Suite, Data for Digital ICETS. His focus includes product
management, data privacy, and pre-sales.
Ajay Kumar Kachottil
Technology Architect at Infosys with over 13 years of experience in test data management and data validation services.
He has implemented multiple test data management solutions for various global financial leaders across geographies.
Karthik Nagarajan
Industry Principal Consultant at Infosys Center for Emerging Technology Solutions (ICETS).
He has more than 15 years of experience in customer experience solution architecture, product development, and business development.
He currently works with the product team of Infosys Enterprise Data Privacy Suite, Data for Digital ICETS, on data privacy, data
augmentation, and CX strategy.
WHITE PAPER
QUANTIFYING CUSTOMER
EXPERIENCE FOR QUALITY
ASSURANCE IN THE DIGITAL ERA
Abstract
Post-pandemic, the new normal demands increased digitalization across all industry sectors. Ensuring top-class customer experience has become crucial for all digital customer interactions across multiple channels such as web, mobile, and chatbot. Customer experience is an area in which neither the aesthetics nor the content can be compromised, as that would lead to severe negative business impact. This paper explains various automation strategies that can enable QA teams to provide a unified experience to end customers across multiple channels. The focus is to identify the key attributes of customer experience and suggest metrics that can be used to measure its effectiveness.
Introduction
Customer experience has always been a dynamic topic: it is becoming more personalized by the day and varies according to individual preferences. Customer experience is hard to measure, which makes the work even more difficult for Quality Assurance teams. The factors that amplify customer experience include not only functional and visual factors such as front-end aesthetics, user interface, and user experience, but also non-functional and social aspects such as omnichannel engagement, social media presence, customer sentiment, accessibility, security, and performance.
Why do we need to measure the Customer Experience?
Enterprises encounter various challenges in providing a unified experience to their end customers across multiple channels, such as:
• Lack of information or mismatched information
• Content quality that is not up to standard
• Lack of usability in cross-navigation to make it intuitive and self-guided
• Performance issues across local and global regions
• Inconsistent look and feel and functional flow across various channels
• Violation of security guidelines
• Improper content placement
• Nonconformance to the Web Content Accessibility Guidelines (WCAG)
• Inappropriate format and alignment
• Lack of social media integration
Quality Assurance is required in all these functional, non-functional, and social aspects of customer experience. Since customer experience is hyper-personalized in the digital era, a persona-based experience measurement is required. Conventional Quality Assurance practices need to change to comprehensively evaluate all aspects of the customer's journey across multiple channels.
• Traditional testing fails to adapt to real-time learning and lacks a feedback loop
• Lack of a single view of the factors affecting customer experience
• Lack of a persona-based test strategy
• Vast sea of social messages and user feedback data from social media platforms
• Adapting the experience unique to each customer
• Testing based on business/technical requirements, resulting in gaps against customers' expectations
• Testing is inward-focused rather than customer-focused
• Quantifiable CX measurements not available
Figure 1 Challenges in Quality Assurance of Customer Experience
External Document © 2022 Infosys Limited
Experience Validation Needs to Cover Multiple Areas of a Customer Journey
While organizations try to focus on enhancing the customer experience, various areas need to be validated and remediated independently across functional, non-functional, and social aspects. The current testing trend covers the basic functional and statistical aspects; emerging testing areas will cover behavioral aspects and focus more on a customer-centric approach, for example using AI to enhance the quality of the digital impression with personalized customizations. The table below lists the areas where quality assurance is required, along with popular tools for automation.
1. Visual Conformance
Key aspects/metrics: Webpage content alignment, font size, font color, web links, images, audio files, video files, forms, tabular content, color scheme, font scheme, navigation buttons, theme, etc.
Current testing trend: A/B testing, style guide check, font check, color check, usability testing, readability testing
Emerging testing trend: Persona-based testing
Tools: Siteimprove, Applitools, SortSite

2. Content
Key aspects/metrics: Checking whether images, video, audio, text, tables, forms, links, etc. are up to standard
Current testing trend: A/B testing, voice quality testing, streaming media testing, compatibility testing, internationalization/localization testing
Emerging testing trend: Personalized UX testing, CSS3 animation testing, 2D illustrations, AI-powered translators
Tools: Siteimprove, SortSite

3. Performance of webpage
Key aspects/metrics: Loading speed, time to title, DNS lookup speed, requests per second, conversion rate, time to first byte, time to interact, error rate
Current testing trend: Performance testing, network testing, cross-browser testing, multiple device testing, multiple OS testing
Emerging testing trend: Performance engineering, AI in performance testing, chaos engineering
Tools: GTmetrix, Pingdom Tools, Google Lighthouse, WebPageTest, etc.

4. Security
Key aspects/metrics: Application security testing, conformance with security standards across geographies, secured transactions, cyber security, biometric security, user account security
Current testing trend: Cyber assurance, biometric testing, payment testing
Emerging testing trend: Blockchain testing, Brain Computer Interface (BCI) testing, penetration testing, facial recognition
Tools: Sucuri SiteCheck, Mozilla Observatory, Acunetix, Wapiti

5. Usability
Key aspects/metrics: Navigation on the website, visibility, readability, chatbot integrations, user interface
Current testing trend: Usability testing, readability testing, eye tracking, screen reader validation, chatbot testing
Emerging testing trend: AI-led design testing, emotion tracking, movement tracking
Tools: Hotjar, Google Analytics, Delighted, SurveyMonkey, UserZoom

6. Web Accessibility
Key aspects/metrics: Conformance to web accessibility guidelines as per geography
Current testing trend: Checking conformance to guidelines such as the Web Content Accessibility Guidelines (WCAG) and the Disability Discrimination Act (DDA)
Emerging testing trend: Persona-based accessibility testing
Tools: Level Access, AXE, Siteimprove, SortSite

7. Customer Analytics
Key aspects/metrics: Net Promoter Score, Customer Effort Score, customer satisfaction, customer lifetime value, customer churn rate, average resolution time, conversion rate, percentage of new sessions, pages per session
Current testing trend: Sentiment analytics, crowd testing, real-time analytics, social media analytics, IoT testing
Emerging testing trend: AR/VR testing, immersive testing
Tools: Sprout Social, Buffer, Google Analytics, Hootsuite

8. Social Media Integration
Key aspects/metrics: Click-through rate, measuring engagement, influence, brand awareness
Current testing trend: Measuring social media engagement, social media analytics
Emerging testing trend: AR/VR testing, advertising playbook, streaming data validation
Tools: Sprout Social, Buffer, Google Analytics, etc.

Table 1 Holistic Customer Experience Validation and Trends
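Several of the customer analytics metrics in Table 1 have simple closed forms. As a sketch, NPS is the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6), and CSAT is the share of ratings at or above a satisfaction threshold; the thresholds follow common industry convention rather than anything prescribed in this paper:

```python
def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def csat(ratings, satisfied_threshold=4):
    """CSAT = share of ratings at or above the threshold on a 1-5 scale, as a percent."""
    return round(100 * sum(1 for r in ratings if r >= satisfied_threshold) / len(ratings))

print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors of 6 -> 0
print(csat([5, 4, 4, 2, 1]))                    # 3 of 5 satisfied -> 60
```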
Emerging Trends in Customer Experience Validation
Below are a few emerging trends that can help enhance the customer experience. QA teams can use quantifiable attributes to understand where exactly their focus is required.
Telemetry Analysis using AI/ML in
Customer Experience
Telemetry data collected from various
sources can be utilized for analyzing the
customer experience and implementing
the appropriate corrective action. These
sources could be the social media feeds,
various testing tools mentioned in Table 1,
web pages, etc. Analytics is normally done
through custom built accelerators using
AI/ML techniques. Some of the common
analytics are listed below:
• Sentiment Analytics: The sentiment of a message is classified as positive, negative, or neutral
• Intent Analytics: Identifies the intent as marketing, query, opinion, etc.
• Contextual Semantic Search (CSS): An intelligent search algorithm that filters messages into a given concept. Unlike keyword-based search, the search runs over a dump of social media messages for a concept (e.g., price or quality) using AI techniques.
• Multilingual Sentiment Analytics:
Analyze sentiment based on languages
• Text Analytics, Text Cleansing,
Clustering: Extracting meaning out of the
text by language identification, sentence
breaking, sentence clustering etc.
• Response Tag Analysis: To filter pricing,
performance, support issues
• Named entity recognition (NER): To
identify who is saying what on social
media posts and classify
• Feature Extraction from Text: Transform
text using bag of words and bag-of-ngrams
• Classification Algorithms: Classification algorithms assign tags and create categories according to the content, with broad applications such as sentiment analysis, topic labeling, spam detection, and intent detection.
• Image Analytics: Identifies the context of an image, categorizes it, and sorts images according to gender, age, facial expression, objects, actions, scenes, topic, and sentiment.
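A minimal illustration of the sentiment scoring and bag-of-words feature extraction described above. The word lists are toy assumptions; production telemetry analytics would rely on trained AI/ML models rather than a fixed lexicon:

```python
# Toy lexicons -- illustrative only, not a real sentiment model
POSITIVE = {"great", "love", "fast", "helpful", "excellent"}
NEGATIVE = {"slow", "broken", "hate", "bad", "crash"}

def sentiment(message):
    """Classify a message as positive, negative, or neutral via lexicon hits."""
    words = message.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def bag_of_words(message):
    """Feature extraction: term-frequency dictionary for downstream classifiers."""
    counts = {}
    for w in message.lower().split():
        counts[w] = counts.get(w, 0) + 1
    return counts

print(sentiment("love the fast checkout"))  # positive
print(bag_of_words("good good bad"))        # {'good': 2, 'bad': 1}
```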
Computer Vision
Computer Vision helps derive meaningful information from images, objects, and videos. With the hyper-personalization of customer experience, enterprises need an intelligent, integrated customer experience that can be tailored to each person. While AI plays an important role in analyzing data and recommending corrective actions, Computer Vision helps capture objects, facial expressions, and the like, and image processing technology can be leveraged to interpret the customer's response.
Chatbot
A chatbot is artificial intelligence software that can simulate a conversation (or chat) with a user. Chatbots have become a very important mode of communication, and most enterprises use them for customer interactions, especially in the new normal scenario.
Some of the metrics to measure
customer experience using a
chatbot are:
1. Customer Satisfaction: This metric determines the efficiency and effectiveness of the chatbot. Questions that can be included are:
• Was the chatbot able to understand the customer's query?
• Was the response provided specific to the query?
• Was the query transferred to the appropriate agent in case of non-resolution?
2. Activity Volume: How frequently
is the chatbot used? Is the usage of
chatbot increasing or decreasing?
3. Completion Rates: This metric measures how much time the customer took and the levels of questions the customer asked. It also captures the instances when the customer abandoned the chatbot and opted for resolution from an agent. This helps identify opportunities to improve the chatbot further by improving its comprehension and scripts and adding other functionalities.
4. Reuse Rates: This metric provides insight into reuse of the chatbot by the same customer. It also enables a deeper dive into the customer satisfaction results, helps in understanding the new-user versus returning-user ratio, and supports conclusions about the reusability and adoption of the chatbot by customers.
5. Speech Analytics Feedback: Here, speech analytics can be used to examine customer interactions with service agents. Specific elements to note include the tone of the call, the customer's frustration level, the customer's knowledge level, and ease of use.
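The completion-rate and reuse-rate metrics above can be derived from session logs. A minimal sketch, assuming a simple session schema (`user`, `resolved_by_bot`) that is illustrative only:

```python
def chatbot_metrics(sessions):
    """Completion rate (% sessions resolved by the bot) and reuse rate
    (% of distinct users with more than one session), both rounded percents."""
    total = len(sessions)
    completed = sum(1 for s in sessions if s["resolved_by_bot"])
    users = [s["user"] for s in sessions]
    returning = sum(1 for u in set(users) if users.count(u) > 1)
    return {
        "completion_rate": round(100 * completed / total),
        "reuse_rate": round(100 * returning / len(set(users))),
    }

sessions = [
    {"user": "a", "resolved_by_bot": True},
    {"user": "a", "resolved_by_bot": True},
    {"user": "b", "resolved_by_bot": False},
    {"user": "c", "resolved_by_bot": True},
]
print(chatbot_metrics(sessions))  # {'completion_rate': 75, 'reuse_rate': 33}
```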
Measuring Tools
Even though various tools are available from startups such as Botanalytics, BotCore, Chatbase, and Dashbot, most QA teams measure chatbot performance parameters through AI/ML utilities.
Alternative Reality
Alternative reality includes augmented reality (AR), virtual reality (VR), and mixed reality. AR adds value to the customer experience of an enterprise in many ways by providing an interactive environment, helping it stay ahead of its competitors. The data points used to measure it overlap with those of website and app metrics, with a few new points added.
Some of the additional metrics to measure customer experience in alternative reality:
1. Dwell time: Total time spent on the platform. More time spent on the platform is the positive outcome.
2. Engagement: Interaction with the platform. The more the engagement, the better the outcome.
3. Recall: Ability to remember. A higher recall rate indicates proper attention and guides us on the effectiveness of the platform.
4. Sentiment: Reaction, classified as positive, negative, or neutral. This assists in understanding the sentiment.
5. Hardware used: Desktop, laptop, tablet, mobile, etc.
Measuring Tools
There is not much automation in AR/VR experience validation yet. Custom-built utilities using the Unity framework can be explored to measure the AR/VR experience.
Brain computer interface
A brain computer interface (BCI) is a system that measures activity of the central nervous system (CNS) and converts it into artificial output that replaces, restores, enhances, supplements, or improves natural CNS output, thereby changing the ongoing interactions between the CNS and its external or internal environment. BCI will help personalize the user experience by interpreting the brain signals of a user.
Metrics to measure customer experience in BCI:
1. Speed: Speed of the user's reaction. The higher the speed, the more the user's interest in the digital print.
2. Intensity: Intensity of the user's reaction towards a digital presence helps in understanding the likes and dislikes of the user.
3. Reaction: This helps in understanding the different reactions to a digital interaction.
Measuring Tools
Open-source tools such as OpenEXP and Psychtoolbox can be leveraged to build custom utilities for measuring the above metrics.
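The dwell-time and engagement metrics defined above reduce to simple aggregations over a session event log. A sketch under an assumed, illustrative event schema of `(timestamp_seconds, kind)` tuples:

```python
def experience_metrics(events):
    """Dwell time and engagement from a timestamped session event log.
    Dwell = span between first and last event; engagement = interaction count."""
    times = [t for t, _ in events]
    dwell = max(times) - min(times)
    interactions = sum(1 for _, kind in events if kind == "interaction")
    return {"dwell_seconds": dwell, "engagement": interactions}

events = [(0, "enter"), (12, "interaction"), (40, "interaction"), (65, "exit")]
print(experience_metrics(events))  # {'dwell_seconds': 65, 'engagement': 2}
```

Recall and sentiment would need survey data or the analytics models discussed earlier; they do not fall out of the event log alone.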
Automation in Customer Experience Assurance
With multiple channels to interact with end customers, companies are looking to ensure digital quality assurance in a faster and more continuous way. To reduce time to market, customer experience assurance should be automated with an increasing infusion of AI and ML. Further, quality assurance should be end-to-end, with experience assurance an ongoing process that goes beyond the conventional QA phase.
Some of the technical challenges in automation are:
• Automating remediation and continuous integration
• On-demand service availability
• Offering a seamless experience across all distribution channels (web, mobile, doc, etc.)
• Early assurance during development, before the application is passed to QA
• Ensuring regulatory compliance
• A collaboration environment for developers, testers, and auditors with proper governance
• Cognitive analysis and a scoring mechanism to benchmark
• Integration with test and development tools
With the adoption of DevSecOps, developers can ensure quality even before the application reaches QA, using IDE plugins for shift-left remediation. The above challenges call for a fully automated customer experience platform, as depicted below. The platform spans user touch points, an intelligent application crawler, APIs and CI/CD plugins, cloud environments with multi-browser and multi-device coverage, dashboards and reports, tool adapters, a scheduler, and subscription and administration services. Its accelerators and tools include an accessibility analyzer, a usability analyzer, sentiment analytics, a visual consistency checker, and external IPs such as Google APIs, Applitools, pCloudy, ALM, and Jira, complemented by assistive technologies and manual checks, all delivering actionable insights through online experience audit services.
Figure 2 Automation approach for evaluating holistic customer experience
An automation approach should be
comprehensive enough to provide a
collaboration environment between
testers, developers, auditors, and the
customers. It needs accelerators or
external tools to measure and analyze
various aspects of customer experience.
Cognitive analysis to ensure continuous
improvement in customer experience is
a key success factor for every enterprise.
As shown in the figure, complete automation can never be achieved, as some assistive or manual verification is required; for example, using the JAWS screen reader to test text-to-speech output. The platform also needs integration capabilities with external tools for end-to-end test automation.
Conclusion
As the digital world moves towards personalization, QA teams should work on data analytics and focus on analyzing user behavior and activities, leveraging the various available testing tools. They should also focus on adopting new and emerging testing areas such as AI-based testing, persona-based testing, immersive testing, and 2D illustration testing. These new testing areas can help identify the issues faced in providing the best customer experience, quantify the customer experience, and improve it.
Since a considerable amount of time, money, and effort is put into QA, to ensure a good ROI, QA teams should start treating customer experience as a persona-based experience and work on all the major aspects mentioned above. QA teams should look beyond the normal hygiene followed for digital platforms, dig deeper, and adopt a customer-centric approach in order to make digital prints suitable to the user in all aspects.
About the Author
Saji V.S
Principal Technology Architect
For more information, contact askus@infosys.com
© 2022 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys
acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this
documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the
prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document.
WHITE PAPER
DAST AUTOMATION FOR SECURE,
SWIFT DEVSECOPS CLOUD RELEASES
Abstract
DevSecOps adoption in the cloud goes well beyond merely managing continuous
integration and continuous deployment (CI/CD) cycles. Its primary focus is security
automation. This white paper examines the barriers organizations face when
they begin their DevSecOps journey, and beyond. It highlights one of the crucial
stages of security testing known as Dynamic Application Security Testing (DAST). It
explores the challenges and advantages of effectively integrating DAST into the CI/
CD pipeline, on-premises and in the cloud. The paper delineates the best practices
for DAST tool selection and chain set-up, which assist in shift-left testing and cloud
security workflows that offer efficient security validation of deployments with riskbased prompt responses.
Background
Traditional security practices involve security personnel running
tests, reviewing findings, and providing developers with
recommendations for modifications. This process, including
threat modeling, conducting compliance checks, and carrying out
architectural risk analysis and management, is time-consuming and
incongruous with the speed of DevOps. Some of these practices
are challenging to automate, leading to a security and DevOps
imbalance. To overcome these challenges, many organizations have
shifted to an agile DevOps delivery model. However, this exerts
significant pressure on DevOps to achieve speed with security as
part of the CI/CD pipeline. As a result, release timelines and quality
have been impacted due to the absence of important security
checks or the deployment of vulnerable code under time pressure.
Even as DevOps was evolving, the industry concurrently fast-tracked its cloud transformation roadmap. Most organizations
shifted their focus to delivering highly scalable applications built
on customized modern architectures with 24/7 digital services.
These applications include a wide-ranging stack of advanced tiers,
technologies, and microservices, backed by leading cloud platforms
such as AWS, GCP, and Azure.
External Document © 2023 Infosys Limited
Despite the accelerated digital transformations, a large number of organizations continue to harbor concerns about security. The year-end cybercrime statistics provide good reason to do so:
1. The global average cost of a data breach is an estimated US
$4.35 million, as per IBM’s 2022 data breach report1
2. Cybercrime cost the world US $7 trillion in 2022 and is set to
reach US $10.5 trillion by 2025, according to Cybersecurity
Ventures2
Evidently, security is an important consideration in cloud migration
planning. Speed and agility are imperatives while introducing
security to DevOps processes. Integrating automated security
checks directly into the CI/CD pipeline enables DevOps to evolve
into DevSecOps.
DevSecOps is a flexible collaboration between development,
security, and IT operations. It integrates security principles and
practices into the DevOps life cycle to accelerate application
releases securely and confidently. Moreover, it adds value to
business by reducing cost, improving the scope for innovation,
speeding recovery, and implementing security by design. Studies
project DevSecOps to reach a market size of between US $20 billion
and US $40 billion by the end of 2030.
DevSecOps implementation challenges
As enterprises race to get on the DevSecOps bandwagon, IT teams continue to experience issues. Some of the typical challenges that IT teams face when integrating security into DevOps on-premises or in the cloud are:
People/culture challenges:
• Want of collaboration and cohesive, skillful teams with development, operations, and security experts
• 38% report a lack of education and adequate skills around DevSecOps 3
• 94% of security and 93% of development teams report an impact from talent shortage 1
Process challenges:
• Security and compliance remain a postscript
• Inability to fully automate traditional manual security practices to integrate into DevSecOps
• Continuous security assessments without manual intervention
Tools/technology challenges:
• 60% find DevSecOps technically challenging 3
• Tool selection, complexity, and integration problems
• Configuration management issues
• Lack of awareness among developers on secure coding practices and processes
• Prolonged code scanning and consumption of resources
Solution
Focusing on each phase of the modern software development life cycle (SDLC) can help strategically resolve DevSecOps implementation
challenges arising from people, processes, and technology. Integrating different types of security testing for each stage can help overcome
the issues more effectively (Figure 1).
Plan (requirements): Threat modelling
Code (code repository): Software composition analysis and secret management
Build (CI server): Secure code analysis and Docker linting
Test (integration testing): Dynamic application security testing
Release (artifact repository): Network vulnerability assessments
Deploy (CD orchestration): System/cloud hardening
Operate (monitor): Cloud configuration reviews
Figure 1: Modern SDLC with DevSecOps and Types of Security Testing
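The stage-wise mapping in Figure 1 can be expressed as a simple lookup table that a pipeline orchestrator might consult to select checks per stage. The dictionary and the `checks_for` helper are an illustrative sketch, not part of any tool's actual API:

```python
# Stage-to-security-test mapping from Figure 1, as a lookup table
SECURITY_TESTS = {
    "plan":    ["threat modelling"],
    "code":    ["software composition analysis", "secret management"],
    "build":   ["secure code analysis", "docker linting"],
    "test":    ["dynamic application security testing"],
    "release": ["network vulnerability assessment"],
    "deploy":  ["system/cloud hardening"],
    "operate": ["cloud configuration review"],
}

def checks_for(stage):
    """Return the security checks to run for a given SDLC stage."""
    return SECURITY_TESTS.get(stage.lower(), [])

print(checks_for("test"))  # ['dynamic application security testing']
```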
What is DAST?
DAST is the technique of identifying the vulnerabilities and touchpoints of an application while it is running. DAST is easy even for beginners
to get started on without in-depth coding experience. However, DAST requires a subject matter expert (SME) in the area of security to
configure and set up the tool. An SME with good spidering techniques can build rules and configure the correct filters to ensure better
coverage, improve the effectiveness of the DAST scan, and reduce false positives.
Best practices to integrate DAST with CI/CD
Integrating DAST into the CI/CD pipeline takes deliberate planning. At Infosys, we have found that the following practices make the integration effective:
• Integrate DAST scan in the CI/CD production pipeline after
provisioning the essential compute resources, knowing that
the scan will take under 15 minutes to complete. If not, create a
separate pipeline in a non-production environment
• Create separate jobs for each test in the case of large
applications. E.g., SQL injection and XSS, among others
• Consider onboarding an SME with expertise in spidering
techniques, as the value created through scans is directly
proportional to the skills exhibited
• Roll out security tools in phases based on usage, from elementary
to advanced
• Fail builds that report critical or high-severity issues
• Save time building test scripts from scratch by leveraging existing
scripts from the functional automation team
• Provide links to knowledge pages in the scan outputs for
additional assistance
• Pick tools that provide APIs
• Keep the framework simple and modular
• Control the scope and false positives locally instead of
maintaining a central database
• Adopt the everything-as-a-code strategy as it is easy to maintain
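The practice of failing builds that report critical or high-severity issues can be sketched as a small gate script run after the DAST scan. The findings structure below is a simplified stand-in, not the exact report schema of ZAP or any other tool:

```python
import sys

# Severities that should block a release -- adjust to your security policy
BLOCKING = {"critical", "high"}

def gate(findings, blocking=BLOCKING):
    """Return the findings severe enough to fail the build."""
    return [f for f in findings if f["severity"].lower() in blocking]

report = [
    {"name": "SQL injection", "severity": "High"},
    {"name": "Missing security header", "severity": "Low"},
]
blockers = gate(report)
for f in blockers:
    print(f"BLOCKER: {f['name']} ({f['severity']})")
# In a CI/CD job, a non-zero exit code fails the stage:
if blockers and "--enforce" in sys.argv:
    sys.exit(1)
```

Wiring this into the pipeline keeps the gate declarative: the scan job produces the report, and this script decides pass or fail against the policy.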
Besides adopting best practices, the CI/CD environment needs to be test-ready. A basic test set-up includes:
• Developer machine for testing locally
• Code repository for version controlling
• CI/CD server for integrations and running tests with the help of a slave/runner
• Staging environment
There can be several alternatives to the set-up based on the toolset selection. The following diagram depicts a sample (see Figure 2).
Figure 2: DevSecOps Lab Set-up
Right tool selection
With its heavy reliance on tools, DevSecOps enables the automation of engineering processes, such as making security testing repeatable, increasing testing speed, and providing early qualitative feedback on application security. Therefore, selecting the appropriate security testing tools for specific types of security testing and applying the correct configuration in the CI/CD pipeline is critical.
Challenges in tool selection and best practices
Common pitfalls
• Lack of standards in tool selection
• Security issues from tool complexity and integration
• Inadequate training, skills, and documentation
• Configuration challenges
Best practices in tool selection
• Expert coverage of tool standards
• Essential documentation and security support
• Potential for optimal tool performance, including language coverage, open source or commercial options, the ability to ignore issues, incident severity categories, failure on issues, and results reporting features
• Cloud technology support
• Continuous vulnerability assessment capability
• Availability of customization and integration capabilities with other tools in the toolchain
Best practices in tool implementation
• Create an enhanced set of customized rules for tools to ensure optimum scans and reliable outcomes
• Plan incremental scans to reduce the overall time taken
• Use artificial intelligence (AI) capabilities to optimize the analysis of vulnerabilities reported by tools
• Aim for zero-touch automation
• Consider built-in quality through automated gating of the build against the desired security standards
After selecting the CI/CD and DAST tools, the next step is to set up a pre-production or staging environment and deploy the web application. The set-up enables DAST to run in the CI/CD pipeline as part of integration testing. Let us consider an example using the widely available open-source DAST tool, Zed Attack Proxy (ZAP).
Some of the key considerations for integrating DAST in the CI/CD pipeline using ZAP (see Figure 3) are listed below:
• Test on the developer machine before moving the code to the CI/CD server and the GitLab CI/CD
• Set up the CI/CD server and GitLab. Ensure ZAP container readiness with Selenium on Firefox, along with custom scripts
• Reuse the functional automation scripts, only modifying them for security testing use cases and data requirements
• Push all the custom scripts to the Git server and pull the latest code. Run the pipeline after meeting all prerequisites
DevSecOps with DAST in the cloud
Integrating DAST with cloud CI/CD requires a different approach:
• Identify, leverage, and integrate cloud-native CI/CD services, continuous logging and monitoring services, auditing and governance services, and operation services with regular CI/CD tools, mainly DAST
• Control all CI/CD jobs with a server-and-slave architecture, using containers such as Docker to build and deploy applications through cloud orchestration tools
An effective DAST DevSecOps in cloud architecture appears as shown in Figure 4:
Figure 4: DAST DevSecOps in Cloud Workflow
Key steps
1. The user commits the code to a code repository
2. The tool builds artifacts and uploads them to the artifact library
3. Integrated tools help perform the SCA and SAST tests
4. Reports of critical/high-failure vulnerabilities from the SCA and SAST scans go to the security dashboard for fixing
5. Code deployment to the staging environment takes place if reports indicate "no or ignore vulnerabilities"
6. Successful deployment triggers a DAST tool, such as the OWASP ZAP, for scanning
7. The user repeats steps 4 to 6 in the event of a vulnerability detection
8. If no vulnerabilities are reported, the workflow triggers an approval email
9. Receipt of approval schedules automatic deployment to production
Best practices
• Control access to pipeline resources using identity and access management (IAM) roles and security policies
• Always encrypt data in transit and at rest
• Store sensitive information, such as API tokens and passwords, in a secrets manager
Conclusion
DevOps is becoming a reality much faster than we anticipate. However, there should be no compromise on security testing, to avoid delayed deployments and the risk of releasing software with security vulnerabilities. Successful DevSecOps requires integrating security at every stage of DevOps, enabling DevOps teams with security skills, enhancing the partnership between DevOps teams and security SMEs, automating security testing to the extent possible, and shifting security left for early feedback. By leveraging the best practices recommended in this paper, organizations can achieve a more secure release that is faster by as much as 15%, both on-premises and in the cloud.
About the authors
Kedar J Mankar
Amlan Sahoo
Vamsi Kishore
Kedar J Mankar is a global delivery lead for Cyber Security testing at Infosys. He has extensive experience across different software testing types. He has led large delivery and transformation programs for global Fortune 500 customers and delivered value through different COEs with innovation at the core. He has experience working with and handling teams in functional, data, automation, DevOps, performance, and security testing across multiple geographies and verticals.
Amlan Sahoo has over 27 years of experience
in the IT industry in application development
and testing. He currently heads the Cyber
Security testing division. He has a proven
track record in managing and leading
transformation programs with large
teams for Fortune 50 clients, managing
deliveries across multiple geographies and
verticals. He also has 4 IEEE and 1 IASTED
publications to his credit on bringing
efficiencies to heterogeneous software
architectures.
Vamsi Kishore Sukla is a Security
consultant with over 8 years of professional
experience in the security field, specializing
in application security testing, cloud
security testing, and network vulnerability
assessments following OWASP standards
and CIS benchmarks. With a deep
understanding of the latest security trends
and tools, he provides comprehensive
security solutions to ensure the safety and
integrity of the organization and its clients.
For more information, contact askus@infosys.com
© 2023 Infosys Limited, Bengaluru, India. All Rights Reserved. Infosys believes the information in this document is accurate as of its publication date; such information is subject to change without notice. Infosys
acknowledges the proprietary rights of other companies to the trademarks, product names and such other intellectual property rights mentioned in this document. Except as expressly permitted, neither this
documentation nor any part of it may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, printing, photocopying, recording or otherwise, without the
prior permission of Infosys Limited and/ or any named intellectual property rights holders under this document.
Infosys.com | NYSE: INFY
Stay Connected
WHITE PAPER
ACHIEVING ORDER THROUGH CHAOS
ENGINEERING: A SMARTER WAY TO
BUILD SYSTEM RESILIENCE
Abstract
Digital infrastructure has grown increasingly complex owing to distributed
cloud architectures and microservices. More than ever before, it is
increasingly challenging for organizations to predict potential failures and
system vulnerabilities. This is a critical capability needed to avoid expensive
outages and reputational damage.
This paper examines how chaos engineering helps organizations boost
their digital immunity. As a leading quality engineering approach, chaos
engineering provides a systematic, analytics-based, test-first, and well-executed path to ensuring system reliability and resilience in today’s
disruptive digital era.
Introduction
Digital systems have become increasingly complex and
interdependent, leading to greater vulnerabilities across
distributed networks. There have been several instances where
a sudden increase in online traffic or unforeseen cyberattacks
have caused service failures, adversely impacting organizational
reputation, brand integrity, and customer confidence. Such
outages have a costly domino effect, resulting in revenue losses or,
in some cases, regulatory action against the organization.
Thus, enterprises must implement robust and resilient quality
engineering solutions that safeguard them from potential
threats and help overcome these challenges. This is where ‘chaos
engineering’ comes in.
Chaos Engineering – A Boost to Digital
Immunity
System resilience is about how promptly a system can recover
from disruption. Chaos engineering is an experimentative
process that deliberately disrupts the system to identify weak
spots, anticipate failures, predict user experience, and rectify the
architecture. It helps engineering teams redesign and restore
the organization’s infrastructure and make it more resilient
in the face of any crisis. Thus, it builds confidence in system
resiliency by running failure experiments to generate random and
unpredictable behaviour.
Despite its name, chaos engineering is far from chaotic. It is a
systematic, data-driven technique of conducting experiments
that use chaotic behaviour to stress systems, identify flaws, and
demonstrate resilience. System complexity and rising consumer
expectations are two of the biggest forces behind chaos
engineering. As systems become increasingly feature-rich,
changes in system performance affect system predictability and
service outcomes, which in turn, impact business success.
Chaos engineering is a preventive measure that tests failure
scenarios before they have a chance to grow and cause
downtime in live environments. It identifies and fixes issues
immediately by recognizing system weaknesses and how
systems behave during an injected failure. Through chaos
engineering, organizations can establish mitigation steps to
safeguard end users from negative impact and build confidence
in the system capacity to withstand highly variable and
destructive conditions.
How Chaos Engineering is Different from
Traditional Testing Practices
• Performance testing – It baselines application performance
under a defined load in favorable environmental conditions.
The main objective is to check how the system performs when
the application is up and running without any severe functional
defects in an environment comparable to the production
environment. The potential disruptors uncovered during the
performance tests are due to certain load conditions on the
application.
• Disaster recovery testing – This process ensures that an
organization can restore its data and applications to continue
operations even after critical IT failure or complete service
disruption.
• Chaos testing – During the chaos test, the application
under normal load is subjected to known failures outside the
prescribed boundaries with minimum blast radius to check if the
system behaves as expected. Any deviation from expectations
is noted as an observation and mitigation steps are prepared to
rectify the deviation.
Quality assurance engineers find chaos testing to be more effective
than performance and disaster recovery testing in unearthing
latent bugs and identifying unanticipated system weaknesses.
5-step Chaos Engineering Framework
Much like a controlled injection, implementing chaos engineering
calls for a systematic approach. The five-step framework described
below, when ‘injected’ into an organization, can handle defects
and fight system vulnerabilities. Chaos engineering gives
organizations a safety net by introducing failures in the
pre-production environment, thereby promoting organizational
learning, increasing reliability, and improving understanding of
complex system dependencies.
1. Prepare the process
Understand the end-to-end application architecture. Inform
stakeholders and get their approval to implement chaos
engineering. Finalize the hypothesis based on system
understanding.
2. Set up tools
Set up and enable chaos test tools on servers to run chaos
experiments. Enable system monitoring and alerting tools. Use
performance test tools to generate a steady load on the system
under attack. Additionally, a Jenkins CI/CD pipeline can be set
up to automate chaos tests.
3. Run chaos tests
Orchestrate different kinds of attacks on the system to cause
failures. Ensure proper alerts are generated for the failures and
sent to the right teams to take relevant actions.
4. Analyze the results
Analyze the test results and compare these with the
expectations set when designing the hypothesis. Communicate
the findings to the relevant stakeholders to make system
improvements.
5. Run regression tests
Repeat the tests once the issues are fixed and increase the blast
radius to uncover further failures.
This step-by-step approach executes an attack plan within the
test environment and applies the lessons/feedback from the
outcomes, thereby improving the quality of production systems
and delivering tangible value to enterprises.
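The five steps above can be condensed into a minimal experiment loop: verify the steady state, inject a fault, re-check the hypothesis, and always roll the fault back. This is a sketch only; the function names and the toy replicated service are illustrative and not taken from any chaos tool.

```python
# Sketch of the framework as a single experiment loop: check steady state,
# inject a fault, verify the hypothesis under failure, and always roll back.
def run_chaos_experiment(steady_state, inject_fault, rollback):
    """Return True if the system held its steady state under the injected fault."""
    if not steady_state():
        raise RuntimeError("abort: system unhealthy before the experiment")
    try:
        inject_fault()
        return steady_state()   # hypothesis check under failure
    finally:
        rollback()              # limit the blast radius

# Toy usage: a "service" that should tolerate losing one of two replicas.
replicas = {"a": True, "b": True}
ok = run_chaos_experiment(
    steady_state=lambda: any(replicas.values()),
    inject_fault=lambda: replicas.update(a=False),
    rollback=lambda: replicas.update(a=True),
)
```

Running regression tests (step 5) amounts to repeating this loop with a larger blast radius once fixes are in place.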
Examples of Chaos Engineering Experiments
A chaos engineering experiment or a chaos engineering attack is
the process of inducing attacks on a system under an expected
load. An attack involves injecting failures into a system in a simple,
safe, and secure way.
There are various types of attacks that can be run against
infrastructure. This includes anything that impacts system
resources, delays or drops network traffic, shuts down hosts, and
more. A typical web application architecture can have four types
of attacks run on it to assess application behavior:
• Resource attacks – Resource attacks reveal how an application
service degrades when starved of resources like CPU, memory,
I/O, or disk space
• State attacks – State attacks introduce chaos into the
infrastructure to check whether the application service fails or
whether it handles it and how
• Network attacks – Network attacks demonstrate the impact of
lost or delayed traffic on the application. It is done to test how
services behave when they are unable to reach any one of the
dependencies, whether internal or external
• Application attacks – Application attacks introduce sudden
user traffic on the application or on a particular function. It is
done to test how services behave when there is sudden rise in
the user traffic due to high demand.
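A network attack of the kind listed above can be sketched as a thin fault-injection wrapper around a call to a dependency, adding latency and a configurable failure rate. The decorator and its parameters are illustrative only and do not come from any specific chaos tool.

```python
# Sketch of fault injection in the spirit of a network attack: wrap a call
# to a dependency with added latency and a configurable failure rate.
import functools
import random
import time

def inject_chaos(latency_s=0.0, failure_rate=0.0, seed=None):
    rng = random.Random(seed)
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            time.sleep(latency_s)                 # latency attack
            if rng.random() < failure_rate:       # blackhole/packet-loss analogue
                raise ConnectionError("chaos: injected dependency failure")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_chaos(latency_s=0.01, failure_rate=1.0)
def call_flaky_dependency():
    return "ok"

@inject_chaos()
def call_healthy_dependency():
    return "ok"
```

In an experiment, the wrapped call stands in for a dependency the service cannot reach, and the steady-state check verifies that the service degrades gracefully instead of failing outright.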
Figure 1 – Chaos engineering experiments on a typical web application
[Figure: a sample web application (front and back-office clients, load balancer, web servers, app server, back-end business servers, task server, queue storage, and database storage) subjected to four categories of attacks, with scope informed by FMEA (failure mode and effect analysis) covering component-level faults, on-premises and cloud deployments, containers/pods/clusters, and existing production issues:
• Network attacks – latency attack, blackhole attack, packet loss attack, failed DNS
• Resource attacks – throttle CPU, memory attack, disk attack, I/O attack
• State attacks – shutdown attack, process killer attack, time travel attack
• Application attacks – spike attack, function-based runtime injection]
GameDay Concept
GameDay is an advanced concept of chaos engineering. It is
organized by the chaos test team to practice chaos experiments,
test the incident response process, validate past outages, and find
unknown issues in services.
The team includes a ‘General’ who is responsible for conducting
the GameDay, a ‘Commander’ who coordinates with all the
participants, ‘Observers’ who monitor the GameDay tests and
validate the deviations (if any), and a ‘Scribe’ who notes down the
key observations.
In GameDay, a mock war room is set up and the calendar of all
stakeholders is blocked for up to 2-4 hours. One or more chaos
experiments are run on the system or service to observe the
impact. All technical outcomes are discussed.
Figure 2 – GameDay simulation approach
[Figure: GameDay simulation is a new-age technique to experiment with failures in a complex distributed system architecture.
• Pre-requisites – monitoring setup and observability; environment setup and availability; incident management support
• Approach – draft pick and boot camp; practice games (block the war room, conduct a whiteboarding session, design the experiment, invite stakeholders for critical application components, debate assumptions, finalize the hypothesis); preseason games (execute and run the experiments, determine the blast radius); analyze and feedback loop (repeat the execution until the blast radius is found)
• Outcomes derived – validation of recovery from known incidents and failure points; analysis of impact due to various faults simulated through GameDay; observability for future incidents and planning for additional scenarios]
Benefits of Chaos Engineering
To ignore chaos engineering is to embrace crisis engineering.
Proactive QE teams have made chaos engineering a part of their
regular operations by exposing their staff to chaos tests and
collaboratively experimenting with other business units to refine
testing and improve enterprise systems.
Chaos engineering delivers several benefits such as:
• Reduced detection time – Early identification of issues caused
by failures in live environments, making it easier to
proactively identify which component may cause issues
• Knowing the path to recovery – Chaos engineering helps
predict system behavior in case of failure events and thus works
towards protecting the system to avoid major outages
• Being prepared for the unexpected – It helps chart mitigation
steps by experimenting with known system failures in a
controlled environment
• Highly-available systems – Enables setting alerts and
automating mitigation actions when known failures occur in a
live environment, thereby reducing system downtime
• Improved customer satisfaction – Helps avoid service
disruptions by detecting and preventing component outages,
thereby enhancing user experience, increasing customer
retention, and improving customer acquisition
Chaos engineering brings about cultural changes and maturity
in the way an enterprise designs and develops its applications.
However, its success calls for strong commitment from all levels
across the organization.
Conclusion
System failures can prove very costly for enterprises, making it critical for organizations to focus on quality engineering practices. Chaos
engineering is one such practice that boosts resilience, flexibility, and velocity while ensuring the smooth functioning of distributed
enterprise systems. It allows organizations to introduce attacks that identify system weaknesses so they can rectify issues proactively. By
identifying and fixing failure points early in the lifecycle, organizations can be prepared for the unexpected, recover faster from disruptions,
increase efficiency, and reduce cost. Ultimately, it culminates in better business outcomes and customer experience.
About the Authors
Harleen Bedi
Senior Industry Principal
Harleen is a Senior IT Consultant with Infosys. She focuses on developing and promoting IT offerings for quality
engineering based on emerging technologies such as AI, cloud, big data, etc. Harleen builds, articulates, and deploys QE
strategies and innovations for enterprises, helping clients meet their business objectives.
Jack Hinduja
Lead Consultant
Jack Hinduja is a Lead Consultant at Infosys with over 15 years of experience in the telecom and banking sectors.
He has led quality assurance and validation projects for enterprises across the globe. Jack is responsible for driving
transformation in digital quality assurance and implementing performance and chaos engineering practices in various
enterprises.