Official (ISC)2 Guide To The CCSP CBK


The Official (ISC)²® Guide
to the CCSP℠ CBK®
Published by
John Wiley & Sons, Inc.
10475 Crosspoint Boulevard
Indianapolis, IN 46256
Copyright © 2016 by (ISC)²
Published by John Wiley & Sons, Inc., Indianapolis, Indiana
Published simultaneously in Canada
ISBN: 978-1-119-20749-8
ISBN: 978-1-119-24421-9 (ebk)
ISBN: 978-1-119-20750-4 (ebk)
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means,
electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108
of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization
through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers,
MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the
Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011,
fax (201) 748-6008, or online at
Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with
respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including
without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or
promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work
is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional
services. If professional assistance is required, the services of a competent professional person should be sought. Neither
the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Web site
is referred to in this work as a citation and/or a potential source of further information does not mean that the author
or the publisher endorses the information the organization or website may provide or recommendations it may make.
Further, readers should be aware that Internet websites listed in this work may have changed or disappeared between
when this work was written and when it is read.
For general information on our other products and services please contact our Customer Care Department within the
United States at (877) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with
standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media
such as a CD or DVD that is not included in the version you purchased, you may download this material at http://
For more information about Wiley products, visit
Library of Congress Control Number: 2015952619
Trademarks: Wiley and the Wiley logo, and the Sybex logo are trademarks or registered trademarks of John Wiley &
Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission.
(ISC)², CCSP, and CBK are service marks or registered trademarks of International Information Systems Security
Certification Consortium, Inc. All other trademarks are the property of their respective owners. John Wiley & Sons,
Inc. is not associated with any product or vendor mentioned in this book.
About the Editor
Adam Gordon
With over 25 years of experience as both an educator and IT professional, Adam holds
numerous professional IT certifications. He is the author of several books and has
achieved many awards, including EC-Council Instructor of Excellence for 2006-07 and
Top Technical Instructor Worldwide, 2002-2003. Adam holds his bachelor's degree in
international relations and his master's degree in international political affairs from Florida
International University.
Adam has held a number of positions during his professional career including CISO, CTO,
consultant, and solutions architect. He has worked on many large implementations involving
multiple customer program teams for delivery.
Adam has been invited to lead projects for companies such as Microsoft, Citrix, Lloyds
Bank TSB, Campus Management, US Southern Command (SOUTHCOM), Amadeus, World
Fuel Services, and Seaboard Marine.
Additional editing of text, tables, and images was provided by Matt Desmond and Andrew
Schneiter, CISSP.
Project Editor
Kelly Talbot
Technical Editor
Adam Gordon
Production Manager
Kathleen Wisor
Copy Editor
Andrew Schneiter
Manager of Content
Development & Assembly
Mary Beth Wakeeld
Marketing Director
David Mayhew
Marketing Manager
Carrie Sherrill
Professional Technology &
Strategy Director
Barry Pruett
Business Manager
Amy Knies
Associate Publisher
Jim Minatel
Project Coordinator, Cover
Brent Savage
Cody Gates, Happenstance Type-O-Rama
Kim Wimpsett
Johnna VanHoose Dinse
Cover Designer
Mike Trent
Cover Image
Mike Trent
Foreword xix
Introduction xxi
Introduction 3
Drivers for Cloud Computing 4
Security/Risks and Benets 5
Cloud Computing Definitions 7
Cloud Computing Roles 12
Key Cloud Computing Characteristics 13
Cloud Transition Scenario 15
Building Blocks 16
Cloud Computing Activities 17
Cloud Service Categories 18
Infrastructure as a Service (IaaS) 18
Platform as a Service (PaaS) 20
Software as a Service (SaaS) 22
Cloud Deployment Models 24
The Public Cloud Model 24
The Private Cloud Model 24
The Hybrid Cloud Model 25
The Community Cloud Model 26
Cloud Cross-Cutting Aspects 26
Architecture Overview 26
Key Principles of an Enterprise Architecture 28
The NIST Cloud Technology Roadmap 29
Network Security and Perimeter 33
Cryptography 34
Encryption 34
Key Management 36
IAM and Access Control 38
Provisioning and De-Provisioning 38
Centralized Directory Services 39
Privileged User Management 39
Authorization and Access Management 40
Data and Media Sanitization 41
Vendor Lock-In 41
Cryptographic Erasure 42
Data Overwriting 42
Virtualization Security 43
The Hypervisor 43
Security Types 44
Common Threats 44
Data Breaches 45
Data Loss 45
Account or Service Trafc Hijacking 46
Insecure Interfaces and APIs 46
Denial of Service 47
Malicious Insiders 47
Abuse of Cloud Services 47
Insufcient Due Diligence 48
Shared Technology Vulnerabilities 48
Security Considerations for Different Cloud Categories 49
Infrastructure as a Service (IaaS) Security 49
Platform as a Service (PaaS) Security 52
Software as a Service (SaaS) Security 53
Open Web Application Security Project (OWASP) Top Ten Security Threats 55
Cloud Secure Data Lifecycle 57
Information/Data Governance Types 58
Business Continuity/Disaster Recovery Planning 58
Business Continuity Elements 59
Critical Success Factors 59
Important SLA Components 60
Cost-Benefit Analysis 61
Certification Against Criteria 63
System/Subsystem Product Certification 69
Summary 73
Review Questions 74
Notes 78
Contents ix
Introduction 83
The Cloud Data Lifecycle Phases 84
Location and Access of Data 86
Location 86
Access 86
Functions, Actors, and Controls of the Data 86
Key Data Functions 87
Controls 88
Process Overview 88
Tying It Together 89
Cloud Services, Products, and Solutions 89
Data Storage 90
Infrastructure as a Service (IaaS) 90
Platform as a Service (PaaS) 91
Software as a Service (SaaS) 92
Threats to Storage Types 93
Technologies Available to Address Threats 94
Relevant Data Security Technologies 94
Data Dispersion in Cloud Storage 95
Data Loss Prevention (DLP) 95
Encryption 98
Masking, Obfuscation, Anonymization, and Tokenization 105
Application of Security Strategy Technologies 109
Emerging Technologies 110
Bit Splitting 110
Homomorphic Encryption 111
Data Discovery 111
Data Discovery Approaches 112
Different Data Discovery Techniques 112
Data Discovery Issues 113
Challenges with Data Discovery in the Cloud 114
Data Classification 115
Data Classication Categories 116
Challenges with CloudData 116
Data Privacy Acts 117
Global P&DP Laws in the United States 117
Global P&DP Laws in the European Union (EU) 118
Global P&DP Laws in APEC 119
Differences Between Jurisdiction and Applicable Law 119
Essential Requirements in P&DP Laws 119
Typical Meanings for Common Privacy Terms 119
Privacy Roles for Customers and Service Providers 120
Responsibility Depending on the Type of Cloud Services 121
Implementation of Data Discovery 123
Classification of Discovered Sensitive Data 124
Mapping and Definition of Controls 127
Privacy Level Agreement (PLA) 128
PLAs vs. Essential P&DP Requirements Activity 128
Application of Defined Controls for Personally Identifiable Information (PII) 132
Cloud Security Alliance Cloud Controls Matrix (CCM) 133
Management Control for Privacy and Data Protection Measures 136
Data Rights Management Objectives 138
IRM Cloud Challenges 138
IRM Solutions 139
Data-Protection Policies 140
Data-Retention Policies 140
Data-Deletion Procedures and Mechanisms 141
Data Archiving Procedures and Mechanisms 143
Events 144
Event Sources 144
Identifying Event Attribute Requirements 146
Storage and Analysis of Data Events 148
Security and Information Event Management (SIEM) 148
Supporting Continuous Operations 150
Chain of Custody and Non-Repudiation 151
Summary 152
Review Questions 152
Notes 155
Introduction 159
The Physical Environment of the Cloud Infrastructure 159
Datacenter Design 160
Network and Communications in the Cloud 161
Network Functionality 162
Software Dened Networking (SDN) 162
The Compute Parameters of a Cloud Server 163
Virtualization 164
Scalability 164
The Hypervisor 164
Storage Issues in the Cloud 166
Object Storage 166
Management Plane 167
Management of Cloud Computing Risks 168
Risk Assessment/Analysis 169
Cloud Attack Vectors 172
Countermeasure Strategies Across the Cloud 172
Continuous Uptime 173
Automation of Controls 173
Access Controls 174
Physical and Environmental Protections 175
Key Regulations 175
Examples of Controls 175
Protecting Datacenter Facilities 175
System and Communication Protections 176
Automation of Conguration 177
Responsibilities of Protecting the Cloud System 177
Following the Data Lifecycle 178
Virtualization Systems Controls 178
Managing Identification, Authentication, and Authorization in the Cloud Infrastructure 180
Managing Identication 181
Managing Authentication 181
Managing Authorization 181
Accounting for Resources 181
Managing Identity and Access Management 182
Making Access Decisions 182
The Entitlement Process 182
The Access Control Decision-Making Process 183
Risk Audit Mechanisms 184
The Cloud Security Alliance Cloud Controls Matrix 185
Cloud Computing Audit Characteristics 185
Using a Virtual Machine (VM) 186
Understanding the Cloud Environment Related to BCDR 186
On-Premise, Cloud as BCDR 186
Cloud Consumer, Primary Provider BCDR 187
Cloud Consumer, Alternative Provider BCDR 187
BCDR Planning Factors 188
Relevant Cloud Infrastructure Characteristics 188
Understanding the Business Requirements Related to BCDR 189
Understanding the BCDR Risks 191
BCDR Risks Requiring Protection 191
BCDR Strategy Risks 191
Potential Concerns About the BCDR Scenarios 192
BCDR Strategies 192
Location 193
Data Replication 194
Functionality Replication 195
Planning, Preparing, and Provisioning 195
Failover Capability 195
Returning to Normal 196
Creating the BCDR Plan 196
The Scope of the BCDR Plan 196
Gathering Requirements and Context 196
Analysis of the Plan 197
Risk Assessment 197
Plan Design 198
Other Plan Considerations 198
Planning, Exercising, Assessing, and Maintaining the Plan 199
Test Plan Review 201
Testing and Acceptance to Production 204
Summary 204
Review Questions 205
Notes 207
Introduction 211
Determining Data Sensitivity and Importance 212
Understanding the Application Programming Interfaces (APIs) 212
Common Pitfalls of Cloud Security Application Deployment 213
On-Premise Does Not Always Transfer (and Vice Versa) 214
Not All Apps Are “Cloud-Ready” 214
Lack of Training and Awareness 215
Documentation and Guidelines (or Lack Thereof) 215
Complexities of Integration 215
Overarching Challenges 216
Awareness of Encryption Dependencies 217
Understanding the Software Development Lifecycle (SDLC)
Process for a Cloud Environment 217
Secure Operations Phase 218
Disposal Phase 219
Assessing Common Vulnerabilities 219
Cloud-Specific Risks 222
Threat Modeling 224
STRIDE Threat Model 224
Approved Application Programming Interfaces (APIs) 225
Software Supply Chain (API) Management 225
Securing Open Source Software 226
Identity and Access Management (IAM) 226
Identity Management 227
Access Management 227
Federated Identity Management 227
Federation Standards 228
Federated Identity Providers 229
Federated Single Sign-on (SSO) 229
Multi-Factor Authentication 229
Supplemental Security Devices 230
Cryptography 231
Tokenization 232
Data Masking 232
Sandboxing 233
Application Virtualization 233
Cloud-Based Functional Data 234
Cloud-Secure Development Lifecycle 235
ISO/IEC 27034-1 236
Organizational Normative Framework (ONF) 236
Application Normative Framework (ANF) 237
Application Security Management Process (ASMP) 237
Application Security Testing 238
Static Application Security Testing (SAST) 238
Dynamic Application Security Testing (DAST) 239
Runtime Application Self Protection (RASP) 239
Vulnerability Assessments and Penetration Testing 239
Secure Code Reviews 240
Open Web Application Security Project (OWASP) Recommendations 240
Summary 241
Review Questions 241
Notes 243
Introduction 247
Modern Datacenters and Cloud Service Offerings 247
Factors That Impact Datacenter Design 247
Logical Design 248
Physical Design 250
Environmental Design Considerations 253
Multi-Vendor Pathway Connectivity (MVPC) 257
Implementing Physical Infrastructure for Cloud Environments 257
Enterprise Operations 258
Secure Configuration of Hardware: Specific Requirements 259
Best Practices for Servers 259
Best Practices for Storage Controllers 260
Network Controllers Best Practices 262
Virtual Switches Best Practices 263
Installation and Configuration of Virtualization Management Tools for the Host 264
Leading Practices 265
Running a Physical Infrastructure for Cloud Environments 265
Conguring Access Control and Secure KVM 269
Securing the Network Configuration 270
Network Isolation 270
Protecting VLANs 270
Using Transport Layer Security (TLS) 271
Using Domain Name System (DNS) 272
Using Internet Protocol Security (IPSec) 273
Identifying and Understanding Server Threats 274
Using Stand-Alone Hosts 275
Using Clustered Hosts 277
Resource Sharing 277
Distributed Resource Scheduling (DRS)/Compute Resource Scheduling 277
Accounting for Dynamic Operation 278
Using Storage Clusters 279
Clustered Storage Architectures 279
Storage Cluster Goals 279
Using Maintenance Mode 280
Providing High Availability on the Cloud 280
Measuring System Availability 280
Achieving High Availability 281
The Physical Infrastructure for Cloud Environments 281
Configuring Access Control for Remote Access 283
Performing Patch Management 285
The Patch Management Process 286
Examples of Automation 286
Challenges of Patch Management 287
Performance Monitoring 289
Outsourcing Monitoring 289
Hardware Monitoring 289
Redundant System Architecture 290
Monitoring Functions 290
Backing Up and Restoring the Host Configuration 291
Implementing Network Security Controls: Defense in Depth 292
Firewalls 292
Layered Security 293
Utilizing Honeypots 295
Conducting Vulnerability Assessments 296
Log Capture and Log Management 297
Using Security Information and Event Management (SIEM) 299
Developing a Management Plan 300
Maintenance 301
Orchestration 301
Building a Logical Infrastructure for Cloud Environments 302
Logical Design 302
Physical Design 302
Secure Conguration of Hardware-Specic Requirements 303
Running a Logical Infrastructure for Cloud Environments 304
Building a Secure Network Conguration 304
OS Hardening via Application Baseline 305
Availability of a Guest OS 307
Managing the Logical Infrastructure for Cloud Environments 307
Access Control for Remote Access 308
OS Baseline Compliance Monitoring and Remediation 309
Backing Up and Restoring the Guest OS Conguration 309
Implementation of Network Security Controls 310
Log Capture and Analysis 310
Management Plan Implementation Through the Management Plane 311
Ensuring Compliance with Regulations and Controls 311
Using an IT Service Management (ITSM) Solution 312
Considerations for Shadow IT 312
Operations Management 313
Information Security Management 314
Conguration Management 314
Change Management 315
Incident Management 319
Problem Management 322
Release and Deployment Management 322
Service Level Management 323
Availability Management 324
Capacity Management 324
Business Continuity Management 324
Continual Service Improvement (CSI) Management 325
How Management Processes Relate to Each Other 325
Incorporating Management Processes 327
Managing Risk in Logical and Physical Infrastructures 327
The Risk-Management Process Overview 328
Framing Risk 328
Risk Assessment 329
Risk Response 338
Risk Monitoring 344
Understanding the Collection and Preservation of Digital Evidence 344
Cloud Forensics Challenges 345
Data Access within Service Models 346
Forensics Readiness 347
Proper Methodologies for Forensic Collection of Data 347
The Chain of Custody 353
Evidence Management 355
Managing Communications with Relevant Parties 355
The Five Ws and One H 355
Communicating with Vendors/Partners 356
Communicating with Customers 357
Communicating with Regulators 358
Communicating with Other Stakeholders 359
Wrap Up: Data Breach Example 359
Summary 359
Review Questions 360
Notes 365
Introduction 371
International Legislation Conflicts 371
Legislative Concepts 372
Frameworks and Guidelines Relevant to Cloud Computing 374
Organization for Economic Cooperation and Development (OECD)—
Privacy & Security Guidelines 374
Asia Pacic Economic Cooperation (APEC) Privacy Framework 375
EU Data Protection Directive 375
General Data Protection Regulation 378
ePrivacy Directive 378
Beyond Frameworks and Guidelines 378
Common Legal Requirements 378
Legal Controls and Cloud Providers 380
eDiscovery 381
eDiscovery Challenges 381
Considerations and Responsibilities of eDiscovery 382
Reducing Risk 382
Conducting eDiscovery Investigations 383
Cloud Forensics and ISO/IEC 27050-1 383
Protecting Personal Information in the Cloud 384
Differentiating Between Contractual and Regulated Personally
Identiable Information (PII) 385
Country-Specic Legislation and Regulations Related to
PII/Data Privacy/Data Protection 389
Auditing in the Cloud 398
Internal and External Audits 399
Types of Audit Reports 400
Impact of Requirement Programs by the Use of Cloud Services 402
Assuring Challenges of the Cloud and Virtualization 402
Information Gathering 404
Audit Scope 404
Cloud Auditing Goals 407
Audit Planning 407
Standard Privacy Requirements (ISO/IEC 27018) 410
Generally Accepted Privacy Principles (GAPP) 410
Internal Information Security Management System (ISMS) 411
The Value of an ISMS 412
Internal Information Security Controls System: ISO 27001:2013 Domains 412
Repeatability and Standardization 413
Implementing Policies 414
Organizational Policies 414
Functional Policies 415
Cloud Computing Policies 415
Bridging the Policy Gaps 416
Identifying and Involving the Relevant Stakeholders 416
Stakeholder Identication Challenges 417
Governance Challenges 417
Communication Coordination 418
Impact of Distributed IT Models 419
Communications/Clear Understanding 419
Coordination/Management of Activities 420
Governance of Processes/Activities 420
Coordination Is Key 421
Security Reporting 421
Understanding the Implications of the Cloud to Enterprise Risk Management 422
Risk Prole 423
Risk Appetite 423
Difference Between Data Owner/Controller and Data Custodian/Processor 423
Service Level Agreement (SLA) 424
Risk Mitigation 429
Risk-Management Metrics 429
Different Risk Frameworks 430
Understanding Outsourcing and Contract Design 432
Business Requirements 432
Vendor Management 433
Understanding Your Risk Exposure 433
Accountability of Compliance 434
Common Criteria Assurance Framework 434
CSA Security, Trust, and Assurance Registry (STAR) 435
Cloud Computing Certification: CCSL and CCSM 436
Contract Management 437
Importance of Identifying Challenges Early 438
Key Contract Components 438
Supply Chain Management 441
Supply Chain Risk 441
Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) 442
The ISO 28000:2007 Supply Chain Standard 442
Summary 443
Review Questions 444
Notes 446
Domain 1: Architectural Concepts and Design Requirements 449
Domain 2: Cloud Data Security 459
Domain 3: Cloud Platform and Infrastructure Security 469
Domain 4: Cloud Application Security 475
Domain 5: Operations 479
Domain 6: Legal and Compliance Issues 492
Notes 499
Index 535
are taking steps to leverage cloud infrastructure, software,
and services. This is a substantial undertaking that also
heightens the complexity of protecting and securing
data. As powerful as cloud computing is to organizations,
it’s essential to have qualied people who understand
information security risks and mitigation strategies for the
cloud. As the largest not-for-prot membership body of
certied information security professionals worldwide,
(ISC)² recognizes the need to identify and validate infor-
mation security competency in securing cloud services.
To help facilitate the knowledge you need to assure strong information security
in the cloud, I’m very pleased to present the first edition of the Official (ISC)²
Guide to the CCSP (Certified Cloud Security Professional) CBK. Drawing from a
comprehensive, up-to-date global body of knowledge, the CCSP CBK ensures that
you have the right information security knowledge and skills to be successful and
prepares you to achieve the CCSP credential.
(ISC)² is proud to collaborate with the Cloud Security Alliance (CSA) to build
a unique credential that reflects the most current and comprehensive best practices
for securing and optimizing cloud computing environments. To attain CCSP
certification, candidates must have a minimum of five years’ experience in IT, of which
three years must be in information security and one year in cloud computing. All
CCSP candidates must be able to demonstrate capabilities found in each of the six
CBK domains:
Architectural Concepts & Design Requirements
Cloud Data Security
Cloud Platform and Infrastructure Security
Cloud Application Security
Operations
Legal and Compliance
The CCSP credential represents advanced knowledge and competency in cloud
security design, implementation, architecture, operation, controls, and immediate and
long-term responses.
Cloud computing has emerged as a critical area within IT that requires further
security considerations. According to the 2015 (ISC)² Global Information Security Workforce
Study, cloud computing is identified as the top area for information security, with a growing
demand for education and training within the next three years. In correlation to the demand
for education and training, 73 percent of more than 13,000 survey respondents believe that
cloud computing will require information security professionals to develop new skills.
If you are ready to take control of the cloud, the Official (ISC)² Guide to the CCSP
CBK prepares you to securely implement and manage cloud services within your
organization’s IT strategy and governance requirements. And CCSP credential holders will
achieve the highest standard for cloud security expertise: managing the power of cloud
computing while keeping sensitive data secure.
The recognized leader in the field of information security education and certification,
(ISC)² promotes the development of information security professionals throughout the
world. As a CCSP with all the benefits of (ISC)² membership, you would join a global
network of more than 100,000 certified professionals who are working to inspire a safe
and secure cyber world.
Qualied people are the key to cloud security. This is your opportunity to gain the
knowledge and skills you need to protect and secure data in the cloud.
David P. Shearer, CISSP, PMP
Chief Executive Ofcer (CEO)
THERE ARE TWO MAIN requirements that must be met in order to achieve
the status of CCSP: one must take and pass the certification exam and be able to
demonstrate a minimum of five years of cumulative, paid, full-time information
technology experience, of which three years must be in information security and
one year in one of the six domains of the CCSP examination. A firm understanding
of what the six domains of the CCSP CBK are, and how they relate to the
landscape of business, is a vital element in successfully being able to meet both
requirements and claim the CCSP credential. The mapping of the six domains of
the CCSP CBK to the job responsibilities of the information security professional
in today’s world can take many paths, based on a variety of factors such as industry
vertical, regulatory oversight and compliance, geography, as well as public versus
private versus military as the overarching framework for employment in the first
place. In addition, considerations such as cultural practices and differences in
language and meaning can also play a substantive role in the interpretation of what
aspects of the CBK will mean and how they will be implemented in any given
environment.
It is not the purpose of this book to attempt to address all of these issues or
provide a definitive prescription as to what is “the” path forward in all areas. Rather,
it is to provide the official guide to the CCSP CBK and, in so doing, to lay out the
information necessary to understand what the CBK is and how it is used to build
the foundation for the CCSP and its role in business today. Being able to map the
CCSP CBK to your knowledge, experience, and understanding is the way that you
will be able to translate the CBK into actionable and tangible elements for both the
business and its users that you represent.
1. The Architectural Concepts & Design Requirements domain focuses on the
building blocks of cloud-based systems. The CCSP will need to have an understanding
of cloud computing concepts such as definitions based on the ISO/IEC 17788
standard, roles like the Cloud Service Customer, Provider, and Partner, characteristics
such as multi-tenancy, measured services, and rapid elasticity and scalability,
as well as building block technologies of the cloud such as virtualization, storage,
and networking. The Cloud Reference Architecture will need to be described
and understood, with a focus on areas such as Cloud Computing Activities as
described in ISO/IEC 17789, Clause 9, Cloud Service Capabilities, Categories,
Deployment Models, and the Cross-Cutting Aspects of cloud platform architecture
and design, such as interoperability, portability, governance, service levels,
and performance. In addition, the CCSP should have a clear understanding of
the relevant security and design principles for cloud computing, such as cryptography,
access control, virtualization security, functional security requirements like
vendor lock-in and interoperability, what a secure data lifecycle is for cloud-based
data, and how to carry out a cost-benefit analysis of cloud-based systems. The
ability to identify what a trusted cloud service is, and what role certification against
criteria plays in that identification using standards such as the Common Criteria
and FIPS 140-2, are also areas of focus for this domain.
2. The Cloud Data Security domain contains the concepts, principles, structures,
and standards used to design, implement, monitor, and secure operating systems,
equipment, networks, applications, and the controls used to enforce
various levels of confidentiality, integrity, and availability. The CCSP will need
to understand and implement Data Discovery and Classification Technologies
pertinent to cloud platforms, as well as be able to design and implement
relevant jurisdictional data protections for Personally Identifiable Information (PII),
such as data privacy acts and the ability to map and define controls within the
cloud. Designing and implementing Data Rights Management (DRM) solutions
with the appropriate tools and planning for the implementation of data retention,
deletion, and archiving policies are activities that a CCSP will need to understand
how to undertake. The design and implementation of auditability, traceability,
and accountability of data within cloud-based systems through the use of data
event logging, chain of custody and non-repudiation, and the ability to store and
analyze data through the use of security information and event management
(SIEM) systems are also discussed within the Cloud Data Security domain.
3. The Cloud Platform and Infrastructure Security domain covers knowledge of the
cloud infrastructure components, both the physical and virtual, existing threats,
and mitigating and developing plans to deal with those threats. Risk management
is the identification, measurement, and control of loss associated with adverse
events. It includes overall security review, risk analysis, selection and evaluation
of safeguards, cost-benefit analysis, management decisions, safeguard
implementation, and effectiveness review. The CCSP is expected to understand risk
management, including risk analysis, threats and vulnerabilities, asset identification,
and risk-management tools and techniques. In addition, the candidate will need
to understand how to design and plan for the use of security controls such as
audit mechanisms, physical and environmental protection, and the management
of Identification, Authentication, and Authorization solutions within the cloud
infrastructures they manage. Business Continuity Planning (BCP) facilitates the
rapid recovery of business operations to reduce the overall impact of a disaster
by ensuring continuity of the critical business functions. Disaster Recovery
Planning (DRP) includes procedures for emergency response, extended backup
operations, and post-disaster recovery when the computer installation suffers loss
of computer resources and physical facilities. The CCSP is expected to understand
how to prepare a business continuity or disaster recovery plan, techniques
and concepts, identification of critical data and systems, and finally the recovery
of lost data within cloud infrastructures.
4. The Cloud Application Security domain focuses on issues to ensure that the need
for training and awareness in application security, the processes involved with
cloud software assurance and validation, and the use of verified secure software
are understood. The domain refers to the controls that are included within
systems and applications software and the steps used in their development (e.g.,
SDLC). The CCSP should fully understand the security and controls of the
development process, system lifecycle, application controls, change controls,
program interfaces, and concepts used to ensure data and application integrity,
security, and availability. In addition, the need to understand how to design
appropriate Identity and Access Management (IAM) solutions for cloud-based systems
is important as well.
5. The Operations domain is used to identify critical information and the execution
of selected measures that eliminate or reduce adversary exploitation of critical
information. The domain examines the requirements of the cloud architecture,
from planning of the Data Center design and implementation of the physical and
logical infrastructure for the cloud environment, to running and managing that
infrastructure. It includes the definition of the controls over hardware, media, and
the operators with access privileges to any of these resources. Auditing and mon-
itoring are the mechanisms, tools, and facilities that permit the identication of
security events and subsequent actions to identify the key elements and report the
pertinent information to the appropriate individual, group, or process. The need
for compliance with regulations and controls through the application of frameworks
such as ITIL and ISO/IEC 20000 is also discussed. In addition, the importance
of risk assessment across both the logical and physical infrastructures and
the management of communication with all relevant parties is emphasized. The
CCSP is expected to know the resources that must be protected, the privileges
that must be restricted, the control mechanisms available, the potential for abuse
of access, the appropriate controls, and the principles of good practice.
6. The Legal and Compliance domain addresses ethical behavior and compliance
with regulatory frameworks. It includes the investigative measures and techniques
that can be used to determine if a crime has been committed and methods used
to gather evidence (e.g., Legal Controls, eDiscovery, and Forensics). This domain
also includes an understanding of privacy issues and the audit processes and
methodologies required for a cloud environment, such as internal and external audit
controls, assurance issues associated with virtualization and the cloud, and the types
of audit reporting specific to the cloud (e.g., SAS, SSAE, and ISAE). Further,
examining and understanding the implications that cloud environments have in
relation to enterprise risk management and the impact of outsourcing for design
and hosting of these systems are also important considerations that many organiza-
tions face today.
To help you get the most from the text, we’ve used a number of conventions throughout
the book.
Warning Warnings draw attention to important information that is directly relevant to the
surrounding text.
note Notes discuss helpful information related to the current discussion.
As for styles in the text, we show URLs within the text like so:
Architectural Concepts
and Design Requirements
The goal of the Architectural Concepts and Design Requirements domain
is to provide you with knowledge of the building blocks necessary to
develop cloud-based systems.
You will be introduced to cloud computing concepts with regard to top-
ics such as the customer, provider, partner, measured services, scalability,
virtualization, storage, and networking. You will also be able to understand the
cloud reference architecture based on activities defined by industry-standard
documents.
Lastly, you will gain knowledge of relevant security and design principles
for cloud computing, including the secure data lifecycle and cost-benefit
analysis of cloud-based systems.
After completing this domain, you will be able to:
Define the various roles, characteristics, and technologies as they relate to cloud
computing concepts
Describe cloud computing concepts as they relate to cloud computing activities,
capabilities, categories, models, and cross-cutting aspects
Identify the design principles necessary for secure cloud computing
Define the various design principles for the different types of cloud categories
Describe the design principles for secure cloud computing
Identify national, international, and industry-specific criteria for certifying trusted
cloud services
Identify criteria specific to the system and subsystem product certification
“Cloud computing is a model for enabling ubiquitous, convenient, on-
demand network access to a shared pool of configurable computing resources
(e.g., networks, servers, storage, applications, and services) that can be rap-
idly provisioned and released with minimal management effort or service
provider interaction.”
NIST (National Institute of Standards and Technology) Definition
of Cloud Computing1
Cloud computing (Figure 1.1) is the use of Internet-based computing resources, typically
“as a service,” through which internal or external customers consume scalable and
elastic IT-enabled capabilities.
Figure 1.1 Cloud computing overview
Cloud computing, or “cloud,” means many things to many people. There are indeed
various denitions for cloud computing and what it means from many of the leading
standards bodies. The previous NIST denition is the most commonly utilized, cited by
professionals and others alike to clarify what the term “cloud” means.
In summary, cloud computing is similar to the electricity or power grid: you pay for
what you use, it is always on (depending on your geographic location!), and it is available
to everyone who is connected to the grid (cloud). Note that the term “cloud computing”
originates from network diagrams/illustrations where the Internet is typically depicted
as a “cloud.”
It’s important to note the difference between a Cloud Service Provider (CSP) and
a Managed Service Provider (MSP). The main difference is to be found in the control
exerted over the data and process and by who. In an MSP, the consumer dictates the
technology and operating procedures. According to the MSP Alliance, MSPs typically
have the following distinguishing characteristics:2
Some form of Network Operation Center (NOC) service
Some form of help desk service
Remote monitoring and management of all or a majority of the objects for the customer
Proactive maintenance of the objects under management for the customer
Delivery of these solutions with some form of predictable billing model, where
the customer knows with great accuracy what their regular IT management
expense will be
With a CSP, the service provider dictates both the technology and the operational
procedures being made available to the cloud consumer. This will mean that the CSP is
offering some or all of the components of cloud computing through a Software as a Ser-
vice (SaaS), Infrastructure as a Service (IaaS), or Platform as a Service (PaaS) model.
Drivers for Cloud Computing
There are many drivers that may move a company to consider cloud computing. These
may include the costs associated with the ownership of their current IT infrastructure
solutions, as well as projected costs to continue to maintain these solutions year in and
year out (Figure 1.2).
Figure 1.2 Drivers that move companies toward cloud computing
Additional drivers include (but are not limited to):
The desire to reduce IT complexity
Risk reduction: Users can use the cloud to test ideas and concepts before
making major investments in technology.
Scalability: Users have access to a large number of resources that scale
based on user demand.
Elasticity: The environment transparently manages a user’s resource utilization
based on dynamically changing needs.
Consumption-based pricing
Virtualization: Each user has a single view of the available resources,
independently of how they are arranged in terms of physical devices.
Cost: The pay-per-usage model allows an organization to pay only for the
resources they need with basically no investment in the physical resources avail-
able in the cloud. There are no infrastructure maintenance or upgrade costs.
Business agility
Mobility: Users have the ability to access data and applications from around
the globe.
Collaboration/Innovation: Users are starting to see the cloud as a way to work
simultaneously on common data and information.
Security/Risks and Benefits
You cannot bring up or discuss the topic of cloud computing without hearing the words
“security,” “risk,” and “compliance.” In truth, cloud computing does pose challenges and
represents a paradigm shift in the way in which technology solutions are being delivered.
As with any notable change, this brings about questions and a requirement for clear and
concise understandings and interpretations to be obtained, both from a customer and
provider perspective. The Cloud Security Professional will need to play a key role in the
dialogue within the organization as it pertains to cloud computing, its role, the opportu-
nity costs, and the associated risks (Figure 1.3).
Figure 1.3 Cloud computing issues and concerns
Risk can take many forms in an organization. The organization needs to weigh all
the risks associated with a business decision carefully before engaging in an activity, in
order to attempt to minimize the risk impact associated with an activity. There are many
approaches and frameworks that can be used to address risk in an organization such as
COBIT, the COSO Enterprise Risk Management Integrated Framework, and the NIST
Risk Management Framework. Organizations need to become “risk aware” in general,
focusing on risks within and around the organization that may cause harm to the reputa-
tion of the business. Reputational risk can be defined as “the loss of value of a brand or
the ability of an organization to persuade.”3 In order to manage reputational risk, an orga-
nization should consider the following items:
Strategic alignment
Effective board oversight
Integration of risk into strategy setting and business planning
Cultural alignment
Strong corporate values and a focus on compliance
Operational focus
Strong control environment
While many people characterize cloud technologies as “less secure” or as carrying
greater risk, such claims cannot reasonably be made without a direct and
measured comparison against a specified environment or service. For instance, it would be
incorrect to simply assume or state that cloud computing is less secure as a service modality
for the delivery of a Customer Relationship Management (CRM) platform than a “more
traditional” CRM application model, calling for an on-premise installation of the CRM
application and its supporting infrastructure and databases. To assess the true level of secu-
rity and risk associated with each model of ownership and consumption, the two platforms
would need to be compared across a wide range of factors and issues, allowing for a side-by-
side comparison of the key deliverables and issues associated with each model.
In truth, the cloud may be more or less secure than your organization’s environment
and current security controls depending on any number of factors, which include the
technological components, risk management processes, preventative, detective, and cor-
rective controls, governance and oversight processes, resilience and continuity capabili-
ties, defense in depth, multiple factor authentication, and so on.
Therefore, the approach to security will vary depending on the provider and the abil-
ity for your organization to alter and amend its overall security posture, prior to, during,
and after migration or utilization of cloud services.
In the same way that no two organizations or entities are the same, neither are two
cloud providers. A one-size-fits-all approach is never good for security, so do not settle for
it when utilizing cloud-based services.
The extensive use of automation within cloud environments enables real-time
monitoring and reporting on security control points. This drives a transition to continuous
security monitoring regimes, which can enhance the overall security posture of the
organization consuming the cloud services. The benefits realized by the organization can
include greater security visibility, enhanced policy/governance enforcement, and a better
framework for management of the extended business ecosystem through a transition from
an infrastructure-centric to a data-centric security model.
Cloud Computing Definitions
The following list forms a common set of terms and phrases you will need to become
familiar with as a Cloud Security Professional. Having an understanding of these terms
will put you in a strong position to communicate and understand technologies, deploy-
ments, solutions, and architectures within the organization as needed. This list is not
comprehensive and should be used along with the vocabulary terms in Appendix B to
form as complete a picture as possible of the language of cloud computing.
Anything as a Service (XaaS): The growing diversity of services available
over the Internet via cloud computing as opposed to being provided locally,
or on-premises.
Apache CloudStack: An open source cloud computing and Infrastructure as a
Service (IaaS) platform developed to help make creating, deploying, and manag-
ing cloud services easier by providing a complete “stack” of features and compo-
nents for cloud environments.
Business Continuity: The capability of the organization to continue delivery
of products or services at acceptable predefined levels following a disruptive
incident.
Business Continuity Management: A holistic management process that identifies
potential threats to an organization and the impacts to business operations those
threats, if realized, might cause, and that provides a framework for building orga-
nizational resilience with the capability of an effective response that safeguards the
interests of its key stakeholders, reputation, brand, and value-creating activities.
Business Continuity Plan: The creation of a strategy through the recognition of
threats and risks facing a company, with an eye to ensure that personnel and assets
are protected and able to function in the event of a disaster.
Cloud App (Cloud Application): Short for cloud application, cloud app
describes a software application that is never installed on a local computer.
Instead, it is accessed via the Internet.
Cloud Application Management for Platforms (CAMP): CAMP is a specification
designed to ease management of applications—including packaging and
deployment—across public and private cloud computing platforms.
Cloud Backup: Cloud backup, or cloud computer backup, refers to backing
up data to a remote, cloud-based server. As a form of cloud storage, cloud backup
data is stored in and accessible from multiple distributed and connected resources
that comprise a cloud.
Cloud Backup Service Provider: A third-party entity that manages and distributes
remote, cloud-based data backup services and solutions to customers from a cen-
tral datacenter.
Cloud Backup Solutions: Cloud backup solutions enable enterprises or individ-
uals to store their data and computer files on the Internet using a storage service
provider, rather than storing the data locally on a physical disk, such as a hard
drive or tape backup.
Cloud Computing: A type of computing, comparable to grid computing, that
relies on sharing computing resources rather than having local servers or personal
devices to handle applications. The goal of cloud computing is to apply traditional
supercomputing, or high-performance computing power, normally used by mili-
tary and research facilities, to perform tens of trillions of computations per second
in consumer-oriented applications such as financial portfolios, or even to deliver
personalized information or power immersive computer games.
Cloud Computing Accounting Software: Cloud computing accounting software
is accounting software that is hosted on remote servers. It provides accounting
capabilities to businesses in a fashion similar to the SaaS (Software as a Service)
business model. Data is sent into the cloud, where it is processed and returned
to the user. All application functions are performed off-site, not on the user’s
desktop.
Cloud Computing Reseller: A company that purchases hosting services from a
cloud server hosting or cloud computing provider and then re-sells them to its
own customers.
Cloud Database: A database accessible to clients from the cloud and delivered
to users on demand via the Internet. Also referred to as Database as a Service
(DBaaS), cloud databases can use cloud computing to achieve optimized scaling,
high availability, multi-tenancy, and effective resource allocation.
Cloud Enablement: The process of making available one or more of the follow-
ing services and infrastructures to create a public cloud computing environment:
cloud provider, client, and application.
Cloud Management: Software and technologies designed for operating and mon-
itoring the applications, data, and services residing in the cloud. Cloud management
tools help ensure a company’s cloud computing-based resources are working
optimally and properly interacting with users and other services.
Cloud Migration: The process of transitioning all or part of a company’s data,
applications, and services from on-site premises behind the firewall to the cloud,
where the information can be provided over the Internet on an on-demand basis.
Cloud OS: A phrase frequently used in place of Platform as a Service (PaaS) to
denote an association to cloud computing.
Cloud Portability: In cloud computing terminology, this refers to the ability to
move applications and their associated data between one cloud provider and
another—or between public and private cloud environments.
Cloud Provider: A service provider who offers customers storage or software
solutions available via a public network, usually the Internet. The cloud provider
dictates both the technology and operational procedures involved.
Cloud Provisioning: The deployment of a company’s cloud computing strategy,
which typically rst involves selecting which applications and services will reside
in the public cloud and which will remain on-site behind the firewall or in the
private cloud. Cloud provisioning also entails developing the processes for inter-
facing with the cloud’s applications and services as well as auditing and monitor-
ing who accesses and utilizes the resources.
Cloud Server Hosting: A type of hosting in which hosting services are made avail-
able to customers on demand via the Internet. Rather than being provided by a
single server or virtual server, cloud server hosting services are provided by multi-
ple connected servers that comprise a cloud.
Cloud Storage: “The storage of data online in the cloud,” whereby a company’s
data is stored in and accessible from multiple distributed and connected resources
that comprise a cloud.
Cloud Testing: Load and performance testing conducted on the applications and
services provided via cloud computing—particularly the capability to access these
services—in order to ensure optimal performance and scalability under a wide
variety of conditions.
Desktop as a Service (DaaS): A form of virtual desktop infrastructure (VDI) in
which the VDI is outsourced and handled by a third party. Also called hosted
desktop services, Desktop as a Service is frequently delivered as a cloud service
along with the apps needed for use on the virtual desktop.
Enterprise Application: Describes applications—or software—that a business
uses to assist the organization in solving enterprise problems. When the word
“enterprise” is combined with “application,” it usually refers to a software platform
that is too large and complex for individual or small business use.
Enterprise Cloud Backup: Enterprise-grade cloud backup solutions typically
add essential features such as archiving and disaster recovery to cloud backup
solutions.
Eucalyptus: An open source cloud computing and Infrastructure as a Service
(IaaS) platform for enabling private clouds.
Event: A change of state that has significance for the management of an IT service
or other configuration item. The term can also be used to mean an alert or
notification created by an IT service, configuration item, or monitoring tool. Events
often require IT operations staff to take actions and lead to incidents being logged.
Host: A device providing a service.
Hybrid Cloud Storage: A combination of public cloud storage and private cloud
storage where some critical data resides in the enterprise’s private cloud and other
data is stored and accessible from a public cloud storage provider.
Incident: An unplanned interruption to an IT service or reduction in the quality
of an IT service.
Infrastructure as a Service (IaaS): IaaS is defined as computer infrastructure,
such as virtualization, being delivered as a service. IaaS is popular in the data-
center where software and servers are purchased as a fully outsourced service and
usually billed based on how much of the resource is actually used—compared with
the traditional method of buying software and servers outright.
Managed Service Provider: An IT service provider where the customer dictates
both the technology and operational procedures.
Mean Time Between Failure (MTBF): The measure of the average time
between failures of a specific component or part of a system.
Mean Time To Repair (MTTR): The measure of the average time it should take
to repair a failed component, or part of a system.
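These two measures combine into the familiar steady-state availability ratio. The following sketch is illustrative only; the formula is standard reliability arithmetic rather than something defined in this text, and the hours used are hypothetical.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: the fraction of time a component is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A component averaging 2,000 hours between failures, with a 4-hour
# average repair time, is available roughly 99.8% of the time.
print(f"{availability(2000, 4):.4f}")  # 0.9980
```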
Mobile Cloud Storage: A form of cloud storage that applies to storing an individ-
ual’s mobile device data in the cloud and providing the individual with access to
the data from anywhere.
Multi-Tenant: In cloud computing, multi-tenant is the phrase used to describe
multiple customers using the same public cloud.
Node: A physical connection.
Online Backup: In storage technology, online backup means to back up data
from your hard drive to a remote server or computer using a network connection.
Online backup technology leverages the Internet and cloud computing to create
an attractive off-site storage solution with few hardware requirements for any busi-
ness of any size.
Personal Cloud Storage: A form of cloud storage that applies to storing an indi-
vidual’s data in the cloud and providing the individual with access to the data
from anywhere. Personal cloud storage also often enables syncing and sharing
stored data across multiple devices such as mobile phones and tablet computers.
Platform as a Service (PaaS): The process of deploying onto the cloud infrastruc-
ture consumer-created or acquired applications that are created using programming
languages, libraries, services, and tools supported by the provider. The consumer
does not manage or control the underlying cloud infrastructure including network,
servers, operating systems, or storage, but has control over the deployed applications
and possibly the configuration settings for the application-hosting environment.
Private Cloud: Describes a cloud computing platform that is implemented
within the corporate firewall, under the control of the IT department. A private
cloud is designed to offer the same features and benets of cloud systems but
removes a number of objections to the cloud computing model, including control
over enterprise and customer data, worries about security, and issues connected to
regulatory compliance.
Private Cloud Project: Companies initiate private cloud projects to enable their
IT infrastructure to become more capable of quickly adapting to continually
evolving business needs and requirements. Private cloud projects can also be
connected to public clouds to create hybrid clouds.
Private Cloud Security: A private cloud implementation aims to avoid many of
the objections regarding cloud computing security. Because a private cloud setup
is implemented safely within the corporate firewall, it remains under the control
of the IT department.
Private Cloud Storage: A form of cloud storage where the enterprise data and
cloud storage resources both reside within the enterprise’s datacenter and behind
the rewall.
Problem: The unknown cause of one or more incidents, often identied as a
result of multiple similar incidents.
Public Cloud Storage: A form of cloud storage where the enterprise and storage
service provider are separate and the data is stored outside of the enterprise’s
datacenter.
Recovery Point Objective (RPO): The Recovery Point Objective (RPO) helps
determine how much information must be recovered and restored. Another
way of looking at RPO is to ask yourself, “How much data can the company afford
to lose?”
Recovery Time Objective (RTO): A time measure of how fast you need each
system to be up and running in the event of a disaster or critical failure.
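Both objectives can be checked against a concrete plan with simple arithmetic. The sketch below is a hypothetical illustration of that reasoning, not a method prescribed by the text; the backup interval, recovery steps, and targets are all assumptions.

```python
def meets_rpo(backup_interval_hours, rpo_hours):
    # Worst case, the newest backup is one full interval old, so the
    # backup interval bounds how much data could be lost.
    return backup_interval_hours <= rpo_hours

def meets_rto(recovery_step_hours, rto_hours):
    # The sum of all sequential recovery steps must fit inside the
    # promised recovery window.
    return sum(recovery_step_hours) <= rto_hours

# Hypothetical targets: lose at most 4 hours of data, restore within 8 hours.
print(meets_rpo(backup_interval_hours=6, rpo_hours=4))  # False: back up more often
print(meets_rto([1.0, 2.5, 3.0], rto_hours=8))          # True: 6.5 hours total
```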
Software as a Service (SaaS): A software delivery method that provides access to
software and its functions remotely as a web-based service. Software as a Service
allows organizations to access business functionality at a cost typically less than
paying for licensed applications since SaaS pricing is based on a monthly fee.
Storage Cloud: Refers to the collection of multiple distributed and connected
resources responsible for storing and managing data online in the cloud.
Vertical Cloud Computing: Describes the optimization of cloud computing and
cloud services for a particular vertical (e.g., a specific industry) or specific-use
application.
Virtual Host: A software implementation of a physical host.
The following groups form the key roles and functions associated with cloud computing.
They do not constitute an exhaustive list, but highlight the main roles and functions
within cloud computing:
Cloud Customer: An individual or entity that utilizes or subscribes to cloud-
based services or resources.
Cloud Provider: A company that provides cloud-based platform, infrastructure,
application, or storage services to other organizations and/or individuals, usually
for a fee, otherwise known to clients “as a service.”
Cloud Backup Service Provider: A third-party entity that manages and holds
operational responsibilities for cloud-based data backup services and solutions to
customers from a central datacenter.
Cloud Services Broker (CSB): Typically a third-party entity or company that
looks to extend or enhance value to multiple customers of cloud-based services
through relationships with multiple cloud service providers. It acts as a liaison
between cloud services customers and cloud service providers, selecting the best
provider for each customer and monitoring the services. The CSB can be utilized
as a “middleman” to broker the best deal and customize services to the customer’s
requirements. May also resell cloud services.
Cloud Service Auditor: Third-party organization that veries attainment of SLAs
(service level agreements).
Key Cloud Computing Characteristics
Think of the following as a rulebook or a set of laws when dealing with cloud computing.
If a service or solution does not meet all of the following key characteristics, it is not true
cloud computing.
On-Demand Self-Service: The cloud service provided enables the provisioning
of cloud resources on demand (i.e., whenever and wherever they are required).
From a security perspective, this has introduced challenges to governing the use
and provisioning of cloud-based services, which may violate organizational policies.
By its nature, on-demand self-service does not require procurement, provisioning,
or approval from finance, and as such, can be provisioned by almost anyone with
a credit card. Note: For enterprise customers, this is most likely the least important
characteristic, as self-service for the majority of end users is not of utmost
importance.
Broad Network Access: The cloud, by its nature, is an “always on” and “always
accessible” offering for users to have widespread access to resources, data, and
other assets. Think convenience—access what you want, when you need it, from
any location.
In theory, all you should require is Internet access and relevant credentials and
tokens, which give you access to the resources.
The mobile device and smart device revolution that is altering the way organi-
zations fundamentally operate has introduced an interesting dynamic into the
cloud conversation within many organizations. These devices should also be able
to access the relevant resources that a user may require; however, compatibility
issues, the inability to apply security controls effectively, and non-standardization
of platforms and software systems have stemmed this somewhat.
Resource Pooling: Lies at the heart of all that is good about cloud computing.
More often than not, traditional, non-cloud systems may see utilization rates for
their resources of between 80–90% for a few hours a week and rates at an aver-
age of 10–20% for the remainder. What the cloud looks to do is to group (pool)
resources for use across the user landscape or multiple clients, which can then
scale and adjust to the user or client’s needs, based on their workload or resource
requirements. Cloud providers typically have large numbers of resources avail-
able, from hundreds to thousands of servers, network devices, applications, and so
on, which can accommodate large volumes of customers and can prioritize and
facilitate appropriate resourcing for each client.
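Those utilization figures imply a low time-weighted average for a dedicated system, which is exactly the waste that pooling recovers. The sketch below simply does that arithmetic; the inputs are mid-points of the quoted ranges plus an assumed six peak hours per week.

```python
def weekly_utilization(peak_pct, peak_hours, idle_pct, week_hours=168.0):
    """Time-weighted average utilization over one week (168 hours)."""
    idle_hours = week_hours - peak_hours
    return (peak_pct * peak_hours + idle_pct * idle_hours) / week_hours

# 85% busy for 6 hours a week, ~15% busy for the remaining 162 hours
print(weekly_utilization(85.0, 6.0, 15.0))  # 17.5
```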
Rapid Elasticity: Allows the user to obtain additional resources, storage, compute
power, and so on, as the user’s need or workload requires. This is more often “trans-
parent” to the user, with more resources added as necessary in a seamless manner.
Because cloud services utilize the “pay per use” concept (you pay for what you use),
this is of particular benefit to seasonal or event-type businesses utilizing cloud services.
Think of a provider selling 100,000 tickets for a major sporting event or concert.
Leading up to the ticket release date, little to no compute resources are needed;
however, once the tickets go on sale, they may need to accommodate 100,000 users
in the space of 30–40 minutes. This is where rapid elasticity and cloud computing
can really be benecial, compared with traditional IT deployments, which would
have to invest heavily using Capital Expenditure (CapEx) to have the ability to sup-
port such demand.
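The ticket-sale example can be reduced to a toy scaling rule: provision just enough instances for the current load. The per-instance capacity below is an assumption for illustration, not a figure from the text.

```python
import math

def instances_needed(active_users, users_per_instance=500, minimum=1):
    """Toy elasticity rule: scale the instance count to the current load."""
    return max(minimum, math.ceil(active_users / users_per_instance))

# Quiet period before tickets go on sale, then the rush
print(instances_needed(30))       # 1
print(instances_needed(100_000))  # 200
```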
Measured Service: Cloud computing offers a unique and important component
that traditional IT deployments have struggled to provide—resource usage can be
measured, controlled, reported, and alerted upon, which results in multiple benefits
and overall transparency between the provider and client. In the same way you
may have a metered electricity service or a mobile phone that you “top-up” with
credit, these services allow you to control and be aware of costs. Essentially, you
pay for what you use and have the ability to get an itemized bill or breakdown
of usage.
A key benefit enjoyed by many proactive organizations is the ability to
charge departments or business units for their use of services, thus allowing IT
and finance to quantify exact usage and costs per department or by business
function—something that was incredibly difficult to achieve in traditional IT
environments.
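The departmental chargeback described above amounts to rolling metered usage records up against itemized unit rates. The departments, resources, and rates in this sketch are hypothetical.

```python
# Metered usage records: (department, resource, units consumed)
usage = [
    ("finance", "vm_hours", 120.0),
    ("finance", "gb_stored", 500.0),
    ("marketing", "vm_hours", 40.0),
]

# Hypothetical unit rates, as an itemized cloud bill might list them
rates = {"vm_hours": 0.10, "gb_stored": 0.02}

def chargeback(usage_records, unit_rates):
    """Roll metered usage up into a per-department bill."""
    bills = {}
    for dept, resource, units in usage_records:
        bills[dept] = bills.get(dept, 0.0) + units * unit_rates[resource]
    return {dept: round(total, 2) for dept, total in bills.items()}

print(chargeback(usage, rates))  # {'finance': 22.0, 'marketing': 4.0}
```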
In theory and in practice, cloud computing should have large resource pools to
enable swift scaling, rapid movement, and flexibility to meet your needs at any given time
within the bounds of your service subscription.
Without all of these characteristics, it is simply not possible for the user to be confident
and assured that the delivery and continuity of services will be maintained in line
with potential growth or sudden scaling (either upward or downward). Without pooling
and measured services, you cannot implement the cloud computing economic model.
Cloud Transition Scenario
Consider the following scenario:
Due to competitive pressures, XYZ Corp is hoping to better leverage the economic
and scalable nature of cloud computing. These pressures have driven XYZ Corp toward
the consideration of a hybrid cloud model that consists of enterprise private and public
cloud use. While security risk has driven many of the conversations, a risk management
approach has allowed the company to separate its data assets into two segments, sensitive
and non-sensitive. IT governance guidelines must now be applied across the entire cloud
platform and infrastructure security environment. This will also impact infrastructure
operational options. XYZ Corp must now apply cloud architectural concepts and design
requirements that would best align with corporate business and security goals.
As a Cloud Security Professional, you have several issues to address in order to help
guide XYZ Corp through its planned transition to a cloud architecture.
1. What cloud deployment model(s) would need to be assessed in order to select the
appropriate ones for the enterprise architecture?
a. Based on the choice(s) made, additional issues may become apparent, such as:
i. Who will the audiences be?
ii. What types of data will they be using and storing?
iii. How will secure access to the cloud be enabled, audited, managed, and removed?
iv. When/where will access be granted to the cloud? Under what constraints
(time, location, platform, etc.)?
2. What cloud service model(s) would need to be chosen for the enterprise architecture?
a. Based on the choice(s) made, additional issues may become apparent, such as:
i. Who will the audiences be?
ii. What types of data will they be using and storing?
DOMAIN 1 Architectural Concepts and Design Requirements Domain
iii. How will secure access to the cloud service be enabled, audited, managed,
and removed?
iv. When/where will access be granted to the cloud service? Under what con-
straints (time, location, platform, etc.)?
Dealing with a scenario such as this would require the CCSP to work with the stakeholders in XYZ Corp to seek answers to the questions posed. In addition, the CCSP would want to carefully consider the information in Table 1.1 with regards to crafting a solution.
taBLe1.1 Possible Solutions
Hybrid cloud model Outsourced hosting in partnership with on-premise
IT support
Risk management driven data
Data classification scheme implemented company-wide
IT Governance guidelines Coordination of all Governance, Risk, and Compliance
(GRC) activities within XYZ through a Chief Risk Officer
(CRO) role
Cloud architecture alignment with
business requirements
Requirements gathering and documentation exercise
driven by a Project Management Office (PMO) or a Busi-
ness Analyst (BA) function
The building blocks of cloud computing are RAM, CPU, storage, and
networking. IaaS comprises the most fundamental building blocks of any cloud service:
the processing, storage, and network infrastructure upon which all cloud applications
are built. In a typical IaaS scenario, the service provider delivers the server, storage, and
networking hardware and its virtualization, and then it is up to the customer to implement
the operating systems, middleware, and applications they require.
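The provider/customer split just described can be captured in a small lookup, which is handy when documenting who owns which layer of the stack. The layer names follow the paragraph above; the helper function itself is purely illustrative:

```python
# Sketch of the typical IaaS responsibility split: the provider owns the
# hardware and virtualization layers, the customer owns everything above.
IAAS_RESPONSIBILITY = {
    "networking hardware": "provider",
    "storage hardware":    "provider",
    "server hardware":     "provider",
    "virtualization":      "provider",
    "operating system":    "customer",
    "middleware":          "customer",
    "applications":        "customer",
}

def responsibilities(party):
    """List the stack layers a given party manages under IaaS."""
    return [layer for layer, who in IAAS_RESPONSIBILITY.items() if who == party]
```

A mapping like this becomes the starting point for a shared-responsibility matrix; under PaaS or SaaS the "customer" entries progressively shift to "provider."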
Cloud Computing Activities
As with traditional computing and technology environments, there are a number of roles
and activities that are essential for creating, designing, implementing, testing, auditing,
and maintaining the relevant assets. The same is true for cloud computing, with the fol-
lowing key roles representing a sample of the fundamental components and personnel
required to operate cloud environments:
Cloud Administrator: This individual is typically responsible for the implementation, monitoring, and maintenance of the cloud within the organization or on
behalf of an organization (acting as a third party).
Most notably, this role involves the implementation of policies, permissions,
access to resources, and so on. The Cloud Administrator works directly with Sys-
tem, Network, and Cloud Storage Administrators.
Cloud Application Architect: This person is typically responsible for adapting,
porting, or deploying an application to a target cloud environment.
The main focus of this role is to work closely and alongside development and
other design and implementation resources to ensure that an application’s per-
formance, reliability, and security are all maintained throughout the lifecycle of
the application. This requires continuous assessment, verification, and testing to
occur throughout the various phases of the SDLC.
Most architects represent a mix or blend of system administration experience and
domain-specific expertise—giving insight into the OS, domain, and other components, while identifying potential reasons why the application may be experiencing performance degradation or other negative impacts.
Cloud Architect: This role will determine when and how a private cloud meets
the policies and needs of an organization’s strategic goals and contractual require-
ments (from a technical perspective).
The Cloud Architect is also responsible for designing the private cloud, is
involved in hybrid cloud deployments and instances, and has a key role in
understanding and evaluating technologies, vendors, services, and other skillsets
needed to deploy the private cloud or to establish and operate the hybrid cloud environment.
Cloud Data Architect: This individual is similar to the Cloud Architect; the Data
Architect’s role is to ensure the various storage types and mechanisms utilized
within the cloud environment meet and conform to the relevant SLAs and that
the storage components are functioning according to their specified requirements.
Cloud Developer: This person focuses on development for the cloud infrastructure
itself. This role can vary from client tools or solutions engagements through to systems components. While developers can operate independently or as part of a team,
regular interactions with Cloud Administrators and security practitioners will be
required for debugging, code reviews, and relevant security assessment remediation.
Cloud Operator: This individual is responsible for daily operational tasks and
duties that focus on cloud maintenance and monitoring activities.
Cloud Service Manager: This person is typically responsible for policy design, business agreement, pricing model, and some elements of the SLA (not necessarily
the legal components or amendments that will require contractual amendments).
This role works closely with cloud management and customers to reach agree-
ment and alongside the Cloud Administrator to implement SLAs and policies on
behalf of the customers.
Cloud Storage Administrator: This role focuses on relevant user groups and the
mapping, segregations, bandwidth, and reliability of storage volumes assigned.
Additionally, this role may require ensuring that conformance to relevant
SLAs continues to be met, working with and alongside Network and Cloud Administrators.
Cloud User/Cloud Customer: This individual is a user accessing either paid-for or free cloud services and resources within a cloud. These users are generally
granted System Administrator privileges to the instances they start (and only those
instances, as opposed to the host itself or to other components).
Cloud Service Categories

Cloud service categories fall into three main groups—IaaS, PaaS, and SaaS. They are
each discussed in the following sections.
Infrastructure as a Service (IaaS)
According to the NIST Denition of Cloud Computing, in IaaS, “the capability provided
to the consumer is to provision processing, storage, networks, and other fundamental
computing resources where the consumer is able to deploy and run arbitrary software,
which can include operating systems and applications. The consumer does not manage
or control the underlying cloud infrastructure but has control over operating systems,
storage, and deployed applications; and possibly limited control of select networking
components (e.g., host firewalls).”4
Traditionally, infrastructure has always been the focal point for determining which capabilities and organizational requirements could be met, versus those that were restricted. It
also represented possibly the most significant investment, in terms of CapEx and skilled
resources, made by the organization.
IaaS Key Components and Characteristics
The cloud has changed this significantly. However, the following key components and
characteristics remain in order to meet and achieve the relevant requirements:
Scale: The necessity and requirement for automation and tools to support the
potentially significant workloads of either internal users or those across multiple
cloud deployments (dependent on which cloud service offering) is a key component of IaaS. Users and customers require optimal levels of visibility, control, and
assurances related to the infrastructure and its ability to satisfy their requirements.
Converged network and IT capacity pool: This follows on from the scale focus; however, it looks to drill into the virtualization and service management components required to cover and provide appropriate levels of service across network boundaries.
From a customer or user perspective, the pool appears seamless and endless (no
visible barriers or restrictions, along with minimal requirement to initiate additional resources) for both the servers and the network. These are (or should be)
driven and focused at all times on supporting and meeting the relevant platform and
application SLAs.
Self-service and on-demand capacity: This requires an online resource or customer portal that allows customers to have complete visibility and awareness
of the virtual IaaS environment they currently utilize. It additionally allows customers to acquire, remove, manage, and report on resources, without the need to
engage or speak with resources internally or with the provider.
High reliability and resilience: In order to be effective, the requirement for
automated distribution across the virtualized infrastructure (LAN and WAN)
is increasing, affording resilience while enforcing and meeting SLA requirements.
IaaS Key Benefits
Infrastructure as a Service has a number of key benefits for organizations, which include
but are not limited to
Usage metered and priced on the basis of units (or instances) consumed. This can
also be billed back to specific departments or functions.
The ability to scale infrastructure services up and down based on actual usage.
This is particularly useful and beneficial when there are significant spikes and
dips within the usage curve for infrastructure.
Reduced cost of ownership. There is no need to buy any assets for everyday use,
no loss of asset value over time, and reduced costs of maintenance and support.
Reduced energy and cooling costs along with “green IT” environment effect with
optimum use of IT resources and systems.
Signicant and notable providers in the IaaS space include Amazon, AT&T, Rack-
space, Verizon/Terremark, HP, and OpenStack, among others.
Platform as a Service (PaaS)
According to the NIST Denition of Cloud Computing, in PaaS, “the capability pro-
vided to the consumer is to deploy onto the cloud infrastructure consumer-created or
acquired applications created using programming languages, libraries, services, and tools
supported by the provider. The consumer does not manage or control the underlying
cloud infrastructure, including network, servers, operating systems, or storage, but has
control over the deployed applications and possibly configuration settings for the application-hosting environment.”5
PaaS and the cloud platform components have revolutionized the manner in which
development and software have been delivered to customers and users over the past few
years. The barriers to entry in terms of costs, resources, capabilities, and ease of use have
been dramatically lowered, reducing “time to market” and promoting an innovative culture
within many organizations.
PaaS Key Capabilities and Characteristics
Outside of the key benets, PaaS should have the following key capabilities and
Support multiple languages and frameworks: PaaS should support multiple
programming languages and frameworks, thus enabling the developers to code in
whichever language they prefer or whatever the design requirements specify.
In recent times, significant strides and efforts have been made to ensure that open
source stacks are both supported and utilized, thus reducing “lock-in” or issues
with interoperability when changing cloud providers.
Multiple hosting environments: The ability to support a wide choice and variety
of underlying hosting environments for the platform is key to meeting customer
requirements and demands. Whether public cloud, private cloud, local hypervisor, or bare metal, supporting multiple hosting environments allows the application developer or administrator to migrate the application when and as required.
This can also be used as a form of contingency and continuity and to ensure ongoing availability.
Flexibility: Traditionally, platform providers supplied the features and requirements
that they felt suited client requirements, along with what suited their own service
offering and positioned them as the provider of choice, leaving customers with limited
options to move easily.
This has changed drastically, with extensibility and flexibility now offered to meet
the needs and requirements of developer audiences. This has been heavily influenced by open source, which allows relevant plugins to be quickly and efficiently
introduced into the platform.
Allow choice and reduce “lock-in”: Historically, proprietary platforms meant red tape,
barriers, and restrictions on what developers could do when it came to migrating
or adding features and components to the platform. Where providers publish
standard APIs, developers can run their apps in various environments based on commonality and standard API structures, ensuring a level of consistency and quality
for customers and users.
Ability to “auto-scale”: This enables the application to seamlessly scale up and
down as required to accommodate the cyclical demands of users. The platform
will allocate resources and assign these to the application, as required. This serves
as a key driver for any seasonal organizations that experience “spikes” and “drops”
in usage.
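The auto-scaling behavior described above can be sketched as a target-utilization policy: grow or shrink the instance count so average utilization moves back toward a target. The thresholds and bounds below are illustrative assumptions, not any specific provider's algorithm:

```python
import math

# Minimal auto-scaling sketch. 'avg_utilization' is the current average
# load per instance (0.0-1.0+); 'target' is the desired utilization.
def desired_instances(current, avg_utilization, target=0.6, lo=1, hi=20):
    """Return the instance count needed to bring average utilization
    back toward the target, clamped to the [lo, hi] bounds."""
    if avg_utilization <= 0:
        return lo                      # idle: scale down to the floor
    needed = math.ceil(current * avg_utilization / target)
    return max(lo, min(hi, needed))    # clamp to the subscription bounds
```

For example, 4 instances running at 90% utilization against a 60% target would scale out to 6; the same 4 instances at 30% would scale in to 2, matching the seasonal "spikes" and "drops" pattern described above.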
PaaS Key Benefits
PaaS has a number of key benefits for developers, which include but are not limited to:
Operating systems can be changed and upgraded frequently, including associated
features and system services.
Globally distributed development teams are able to work together on software
development projects within the same environment.
Services are available and can be obtained from diverse sources that cross national
and international boundaries.
Upfront and recurring or ongoing costs can be significantly reduced by utilizing a single vendor instead of maintaining multiple hardware facilities and environments.
Significant and notable providers in the PaaS space include Microsoft, OpenStack,
and Google, among others.
Software as a Service (SaaS)
According to the NIST Denition of Cloud Computing, in SaaS, “The capability pro-
vided to the consumer is to use the provider’s applications running on a cloud infrastruc-
ture. The applications are accessible from various client devices through either a thin
client interface, such as a web browser (e.g., web-based email), or a program interface.
The consumer does not manage or control the underlying cloud infrastructure including
networks, servers, operating systems, storage, or even individual application capabilities,
with the possible exception of limited user-specific application configuration settings.”6
SaaS Delivery Models
Within SaaS, two delivery models are currently used:
Hosted Application Management (hosted AM): The provider hosts commercially available software for customers and delivers it over the web (Internet).
Software on Demand: The cloud provider gives customers network-based
access to a single copy of an application created specifically for SaaS distribution
(typically within the same network segment).
SaaS Benefits
Cloud computing provides significant and potentially limitless possibilities for organizations to run programs and applications that may previously have not been practical or
feasible given the limitations of their own systems, infrastructure, or resources.
When utilizing and deploying the right middleware and associated components,
the ability to run and execute programs with flexibility, scalability, and on-demand
self-service capabilities can present massive incentives and benefits with regards to scalability, usability, reliability, productivity, and cost savings.
Clients can access their applications and data from anywhere at any time. They can
access the cloud computing system using any computer linked to the Internet. Other
capabilities and benets related to the application include
Overall reduction of costs: Cloud deployments reduce the need for advanced
hardware to be deployed on the client side. Essentially, requirements to purchase
high-specification systems, redundancy, storage, and so on, to support applications
are no longer necessary. From a customer perspective, a device to connect to the
relevant application with the appropriate middleware is all that should be required.
Application and software licensing: Customers no longer need to purchase licenses,
support, and associated costs, as licensing is “leased” and is relevant only when in use
(covered by the provider). Additionally, the purchasing of bulk licensing and the associated CapEx is removed and replaced by a pay-per-use licensing model.
Reduced support costs: Customers save money on support issues, as the relevant cloud provider handles them. Appropriately managed, owned, and operated
streamlined hardware would, in theory, have fewer problems than a network of
heterogeneous machines and operating systems.
Backend systems and capabilities: Where applications back onto grid and cloud
environments, they gain the ability to pull in processing and compute power to assist
with resource-intensive tasks.
SaaS has a number of key benefits for organizations, which include but are not limited to
Ease of use and limited/minimal administration.
Automatic updates and patch management. The user will always be running the
latest version and most up-to-date deployment of the software release as well as
any relevant security updates (no manual patching required).
Standardization and compatibility. All users have the same version of the software.
Global accessibility.
Signicant and notable providers in the SaaS space include Microsoft, Google, Sales-, Oracle, and SAP, among others.
Cloud Deployment Models

Cloud deployment models fall into four main types: public, private, hybrid, and community.
Now that you are equipped with an understanding and appreciation of the cloud
service types, we will examine how these services are merged into the relevant deployment models. The selection of a cloud deployment model will depend on any number of
factors and may well be heavily influenced by your organization’s risk appetite, cost, compliance and regulatory requirements, and legal obligations, along with other internal business
decisions and strategy.
The Public Cloud Model
According to NIST, “the cloud infrastructure is provisioned for open use by the
general public. It may be owned, managed, and operated by a business, academic, or
government organization, or some combination of them. It exists on the premises of
the cloud provider.”7
Public Cloud Benefits
Key drivers or benets of a public cloud typically include
Easy and inexpensive setup because hardware, application, and bandwidth costs
are covered by the provider
Streamlined and easy-to-provision resources
Scalability to meet customer needs
No wasted resources—pay as you consume
Given the increasing demands for public cloud services, many providers are now
offering and re-modeling their services as public cloud offerings. Significant and notable
providers in the public cloud space include Amazon, Microsoft, Salesforce, and Google,
among others.
The Private Cloud Model
According to NIST, “the cloud infrastructure is provisioned for exclusive use by a single
organization comprising multiple consumers (e.g., business units). It may be owned,
managed, and operated by the organization, a third party, or some combination of them,
and it may exist on or off premises.”8
A private cloud is typically managed by the organization it serves; however, outsourcing the general management of this to trusted third parties may also be an option. A
private cloud is typically available only to the entity or organization, its employees, contractors, and selected third parties.
The private cloud is also sometimes referred to as the “internal” or “organizational” cloud.
Private Cloud Benefits
Key drivers or benets of a private cloud typically include
Increased control over data, underlying systems, and applications
Ownership and retention of governance controls
Assurance over data location, removal of multiple jurisdiction legal and compliance requirements
Private clouds are typically more popular among large, complex organizations with legacy systems and heavily customized environments. Additionally, where significant technology investment has been made, it may be more financially viable to utilize and incorporate
these investments within a private cloud environment than to discard or retire such devices.
The Hybrid Cloud Model
According to NIST, “the cloud infrastructure is a composition of two or more distinct
cloud infrastructures (private, community, or public) that remain unique entities, but are
bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).”9
Hybrid cloud computing is gaining in popularity, as it provides organizations with the
ability to retain control of their IT environments, coupled with the convenience of allowing organizations to use public cloud services to fulfill non-mission-critical workloads,
taking advantage of flexibility, scalability, and cost savings.
Hybrid Cloud Benefits
Key drivers or benets of hybrid cloud deployments include
Retain ownership and oversight of critical tasks and processes related to technology.
Re-use previous investments in technology within the organization.
Control the most critical business components and systems.
Cost-effective means of fulfilling non-critical business functions (utilizing public
cloud components).
“Cloud bursting” and disaster recovery can be enhanced by hybrid cloud deployments; “cloud bursting” allows public cloud resources to be utilized when a
private cloud workload has reached maximum capacity.
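The cloud-bursting behavior in the last point can be sketched as a simple placement decision: fill private capacity first, then overflow to the public cloud. Capacity is measured here in arbitrary workload units, and all figures are illustrative assumptions:

```python
# "Cloud bursting" sketch: new demand goes to the private cloud until its
# capacity is exhausted, and any overflow bursts to the public cloud.

def place_workload(private_used, private_capacity, demand):
    """Split incoming demand between private capacity and public-cloud burst."""
    headroom = max(0, private_capacity - private_used)
    to_private = min(demand, headroom)
    to_public = demand - to_private      # overflow bursts to the public cloud
    return {"private": to_private, "public": to_public}
```

So a private cloud at 90 of 100 units receiving 30 more units of demand keeps 10 units in-house and bursts 20 to the public cloud, which is the hybrid cost model in miniature: critical baseline load stays private, peaks are paid for only when they occur.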
The Community Cloud Model
According to NIST, “the cloud infrastructure is provisioned for exclusive use by a specific
community of consumers from organizations that have shared concerns (e.g., mission,
security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party,
or some combination of them, and it may exist on or off premises.”10
Community clouds can be on-premise or off-site and should give the benefits of a
public cloud deployment, while providing heightened levels of privacy, security, and regulatory compliance.
The deployment of cloud solutions, by its nature, is often deemed a technology decision; however, it is truly a business alignment decision. While cloud computing no doubt
enables technology to be delivered and utilized in a unique manner, potentially unleashing multiple benefits, the choice to deploy and consume cloud services should be a business decision, taken in line with the business or organization’s overall strategy.
Why is it a business decision, you ask? Two distinct reasons:
All technology decisions should be made with the overall business direction and
strategy at the core.
When it comes to funding and creating opportunities, these should be made at a
business level.
The ability of a cloud transition to directly support organizational business or mission
goals and to express that message in a business manner will be the difference between a
successful project and a failed project in the eyes of the organization.
Architecture Overview
The architect is a planner, strategist, and consultant who sees the “big picture” of the
organization, understands current needs, thinks strategically, and plans long into
the future. Perhaps the most important role of the architect today is to understand the
business and how to design the systems the business will require. This allows the
architect to determine which system types, development approaches, and configurations meet the
identified business requirements while addressing any security concerns.
Enterprise security architecture provides the conceptual design of network security infrastructure and related security mechanisms, policies, and procedures. It links
components of the security infrastructure as a cohesive unit with the goal of protecting
corporate information. The Cloud Security Alliance provides a general enterprise architecture (Figure 1.4). The Cloud Security Alliance Enterprise Architecture is located at
FigUre1.4 CSA Enterprise Architecture
See the following sections for a starting point to reference the building blocks of the
CSA Enterprise Architecture.
Sherwood Applied Business Security Architecture (SABSA)11
SABSA includes the following components, which can be used separately or together:
Business Requirements Engineering Framework
Risk and Opportunity Management Framework
Policy Architecture Framework
Security Services-Oriented Architecture Framework
Governance Framework
Security Domain Framework
Through-Life Security Service Management and Performance Management
I.T. Infrastructure Library (ITIL)12
I.T. Infrastructure Library (ITIL) is a group of documents that are used in implementing
a framework for IT Service Management. ITIL forms a customizable framework that
defines how service management is applied throughout an organization. ITIL is organized into a series of five volumes: Service Strategy, Service Design, Service Transition,
Service Operation, and Continual Service Improvement.
The Open Group Architecture Framework (TOGAF)13
TOGAF is one of many frameworks available to the cloud security professional for developing an enterprise architecture. TOGAF provides a standardized approach that can be
used to address business needs by providing a common lexicon for business communication. In addition, TOGAF is based on open methods and approaches to enterprise architecture, allowing the business to avoid a “lock-in” scenario due to the use of proprietary
approaches. TOGAF also provides the ability to quantifiably measure Return on Investment (ROI), allowing the business to use resources more efficiently.
Jericho/Open Group14
The Jericho Forum is now part of the Open Group Security Forum. The Jericho
Forum Cloud Cube Model can be found at
Key Principles of an Enterprise Architecture
The following principles should be adhered to at all times:
Dene protections that enable trust in the cloud,
Develop cross-platform capabilities and patterns for proprietary and open source
Facilitate trusted and efcient access, administration, and resiliency to the
Provide direction to secure information that is protected by regulations.
The architecture must facilitate proper and efficient identification, authentication, authorization, administration, and auditability.
Centralize security policy, maintenance operation, and oversight functions.
Access to information must be secure yet still easy to obtain.
Delegate or federate access control where appropriate.
Must be easy to adopt and consume, supporting the design of security patterns.
The architecture must be elastic, flexible, and resilient, supporting multi-tenant,
multi-landlord platforms.
The architecture must address and support multiple levels of protection, including network, operating system, and application security needs.
The NIST Cloud Technology Roadmap
The NIST Cloud Technology Roadmap helps cloud providers develop industry-recommended, secure, and interoperable identity, access, and compliance management
configurations and practices. It provides guidance and recommendations that enable
security architects, enterprise architects, and risk-management professionals to leverage a
common set of solutions, assess where their internal IT and cloud providers stand in
terms of security capabilities, and plan a roadmap to meet the security needs of their
business.15
Cloud Cross-Cutting Aspects

There are a number of key components that the Cloud Security Professional should
comprehensively review and understand in order to determine which controls and techniques may be required to adequately address the requirements discussed in the following
sections.

Interoperability

Interoperability defines how easy it is to move and reuse application components regardless
of the provider, platform, OS, infrastructure, location, storage, and the format of data or APIs.
Standards-based products, processes, and services are essential for entities to ensure that:
Investments do not become prematurely technologically obsolete.
Organizations are able to easily change cloud service providers to flexibly and
cost-effectively support their mission.
Organizations can economically acquire commercial clouds and develop private clouds
using standards-based products, processes, and services.
Interoperability mandates that those components should be replaceable by new or different components from different providers and continue to work, as should the exchange
of data between systems.
Portability

Portability is a key aspect to consider when selecting cloud providers since it can both
help prevent vendor lock-in and deliver business benets by allowing identical cloud
deployments to occur in different cloud provider solutions, either for the purposes of
disaster recovery or for the global deployment of a distributed single solution.
DOMAIN 1 Architectural Concepts and Design Requirements Domain30
Availability

Systems and resource availability defines the success or failure of a cloud-based service.
Where the service or cloud deployment loses availability, acting as a single point of
failure for cloud-based services, the customer is unable to access target assets or resources,
resulting in downtime.
In many cases, cloud providers are required to provide upward of 99.9% availability as
per the service level agreement (SLA). Failure to do so can result in penalties, reimbursement of fees, loss of customers, loss of confidence, and ultimately brand and reputational
damage.
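The 99.9% figure above translates directly into an allowed-downtime budget, and working that budget out is simple arithmetic:

```python
# Translate an SLA availability percentage into the maximum downtime
# permitted over a billing period (30 days assumed by default).

def allowed_downtime_minutes(availability_pct, period_days=30):
    """Maximum downtime (in minutes) the SLA permits over the period."""
    total_minutes = period_days * 24 * 60          # minutes in the period
    return total_minutes * (1 - availability_pct / 100)
```

At 99.9% over a 30-day month, roughly 43 minutes of downtime are permitted; at 99.99%, only about 4.3 minutes, which is why each additional "nine" in an SLA carries a disproportionate engineering and cost burden for the provider.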
Security

For many customers and potential cloud users, security remains the biggest concern, with
security continuing to act as a barrier preventing them from engaging with cloud services.
As with any successful security program, the ability to measure, obtain assurance, and
integrate contractual obligations to minimum levels of security is the key to success.
Many cloud providers now list their typical or minimum levels of security but will not
list or publicly state specic security controls for fear of being targeted by attackers who
would have the knowledge necessary to successfully compromise their networks.
Where such contracts and engagements require specific security controls and techniques to be applied, these are typically seen as “extras.” They incur additional costs and
require that the relevant non-disclosure agreements (NDAs) be completed before engaging in active discussions.
In many cases, for smaller organizations, a move to cloud-based services will significantly enhance their security controls, given that they may not have access to or possess
the relevant security capabilities of a large-scale cloud computing provider.
The general rule of thumb for security controls and requirements in cloud-based environments is "if you want additional security, additional cost will be incurred." You can have almost whatever you want when it comes to cloud security—just as long as you can find the right provider and you are willing to pay for it.
Privacy

In the world of cloud computing, privacy presents a major challenge for customers and providers alike. The reason is simple: no uniform or international privacy directives, laws, regulations, or controls exist, leading to a disparate and segmented mesh of laws and regulations whose applicability depends on the geographic location where the information may reside (data at rest) or be transmitted (data in transit).
While many of the leading providers of cloud services make provisions to ensure that location and legislative requirements (including contractual obligations) are met, this should never be taken as a given and should be specified within relevant service level agreements (SLAs) and contracts. Given the truly global nature and various international locations of cloud computing datacenters, the potential for data to reside in two, three, or more locations around the world at any given time is a real possibility.
For many European entities and organizations, failure to ensure that appropriate provisions and controls have been applied could violate EU data protection laws and obligations, leading to various issues and implications.

Within Europe, privacy is seen as a human right and as such should be treated with the utmost respect. The various state laws across the United States and the laws of other geographic locations can make the job of the cloud architect extremely complex, requiring an intricate level of knowledge and controls to ensure that no violations or breaches of privacy and data protection occur.
Resiliency

Cloud resiliency represents the ability of a cloud services datacenter and its associated components, including servers, storage, and so on, to continue operating in the event of a disruption, which may be an equipment failure, a power outage, or a natural disaster. In short, resiliency is the ability to continue service and business operations in the event of a disruption.
Given that most cloud providers have significantly more devices and redundancy in place than a standard in-house IT team, resiliency should typically be far higher, with equipment and capabilities ready to fail over, multiple layers of redundancy, and enhanced exercises to test such capabilities.
Performance

Cloud computing and high performance should go hand in hand at all times. Let's face it—if the performance is poor, you may not be a customer for very long. For optimum performance to be experienced through the use of cloud services, the provisioning, elasticity, and other associated components should always focus on performance.
In the same way that the speed at which you can travel by boat depends on the engine and the boat's design, performance should at all times be focused on the network, the compute, the storage, and the data. With these four elements influencing the design, integration, and development activities, performance should be boosted and enhanced throughout. FYI: it is always harder to refine and amend performance once design and development have been completed.
Governance

The term governance, relating to processes and decisions, looks to define actions, assign responsibilities, and verify performance. The same can be said and adopted for cloud services and environments, where the goal is to secure applications and data in transit and at rest. In many cases, cloud governance is an extension of existing organizational or traditional business process governance, with a slightly altered risk and controls landscape.
While governance is required from the commencement of a cloud strategy or cloud migration roadmap, it is a recurring activity and should be performed on an ongoing basis.
A key benet of many cloud-based services is the ability to access relevant reporting,
metrics, and up-to-date statistics related to usage, actions, activities, downtime, outages,
updates, and so on. This may enhance and streamline governance and oversight activities
with the addition of scheduled and automated reporting.
Note that processes, procedures, and activities may require revision after migration to a cloud-based environment. Not all processes remain the same; segregation of duties, reporting, and incident management are a sample of the processes that may require revision after the cloud migration.
Service Level Agreements (SLAs)
Think of a rulebook and legal contract all rolled into one document—that’s what you
have in terms of an SLA. In the SLA, the minimum levels of service, availability, security,
controls, processes, communications, support, and many other crucial business elements
will be stated and agreed upon by both parties.
While many may argue that SLAs are heavily weighted in favor of the cloud service provider, there are a number of key benefits when compared with traditional in-house IT environments. These include downtime, upgrades, updates, patching, vulnerability testing, application coding, test and development, support, and release management. The provider must take these areas and activities very seriously, as failing to do so will have an impact on its bottom line.
Note that not all SLAs cover the areas or focus points with which you may have issues or concerns. Where they do not, every effort should be made to obtain clarity prior to engaging the cloud provider's services. If you think moving to cloud environments is time-consuming, wait until you try to get out.
Auditability

Auditability allows users and the organization to access, report, and obtain evidence of actions, controls, and processes performed or run by a specified user.
Similar to standard audit trails and systems logging, systems auditing and reporting are offered as standard by many of the leading cloud providers. From a customer perspective, increased confidence and the ability to obtain evidence to support audits, reviews, or assessments of object-level or systems-level access form key drivers. From a stakeholder, management, and assessment perspective, auditability provides mechanisms to review, assess, and report user and systems activities. Auditability in non-cloud environments tends to focus on financial reporting, while cloud-based auditability focuses on the actions and activities of users and systems.
Regulatory Compliance
Regulatory compliance is an organization's requirement to adhere to the laws, regulations, guidelines, and specifications relevant to its business, dictated by the nature of its operations and the functions it provides to its customers. Where the organization fails to meet regulatory requirements, punishment can include legal action, fines, and, in limited cases, the halting of business operations or practices.
Key regulatory areas that are often included in cloud-based environments include (but are not limited to) the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), the Federal Information Security Management Act (FISMA), and the Sarbanes-Oxley Act (SOX).
Network Security and Perimeter

Network security looks to cover all relevant security components of the underlying physical environment and the logical security controls that are inherent in the service or available to be consumed as a service (SaaS, PaaS, and IaaS). Two key elements need to be drawn out at this point:

Physical environment security ensures that access to the cloud service is adequately distributed, monitored, and protected by the underlying physical resources within which the service is built.

Logical network security controls consist of link, protocol, and application layer controls.
For both the cloud customer and the cloud provider, data and systems security are of the utmost importance. The goal for both sides is to ensure the ongoing availability, integrity, and confidentiality of all systems and resources. Failure to do so will have negative impacts from a customer, confidence, brand awareness, and overall security posture standpoint.
Taking into account that cloud computing requires a high volume of constant connections to and from the network devices, the "always on, always available" elements are essential.
In cloud environments, the classic definition of a network perimeter takes on different meanings under different guises and deployment models.
For many cloud networks, the perimeter is clearly the demarcation point.

For other cloud networks, the perimeter transforms into a series of highly dynamic "micro-borders" around individual customer solutions or services (down to the level of certain datasets or flows within a solution) within the same cloud, consisting of virtual network components.

In other cloud networks, there is no clear perimeter at all. While a network may typically be viewed as a perimeter with a number of devices within it communicating both internally and externally, this may be somewhat less clear and segregated in cloud computing networks.
Next, we will look at some of the "bolt-on" components that strengthen and enhance the overall security posture of cloud-based networks, how they can be utilized, and why they play a fundamental role in technology deployments today.
Cryptography

The need for the use of cryptography and encryption is universal for the provisioning and protection of confidentiality services in the enterprise. In support of that goal, the CCSP will want to ensure that they understand how to deploy and use cryptography services in a cloud environment. In addition, strong key management services and a secure key management life cycle are important to integrate into the cryptography solution.
The need for condentiality along with the requirement to apply additional security
controls and mechanisms to protect information and communications is great. Whether
it is encryption to a military standard or simply the use of self-signed certicates, we
all have different requirements and denitions of what a secure communications and
cryptography-based infrastructure looks like. As with many areas of security, encryption
can be subjective when you drill down into the algorithms, strengths, ciphers, symmet-
ric, asymmetric, and so on.
As a general rule of thumb, encryption mechanisms should be selected based on the information and data they protect, while taking into account requirements for access and general functions. The critical success factor for encryption is to enable secure and legitimate access to resources, while protecting and enforcing controls against unauthorized access.
The Cloud Architect and Administrator should explore the appropriate encryption and access measures to ensure that proper separation of tenants' information and access is deployed within public cloud environments. Additionally, encryption and relevant controls need to be applied to private and hybrid cloud deployments in order to adequately and sufficiently protect communications between hosts and services across various network components and systems.
Data in Transit (Data in Motion)
Also termed "data in motion," data in transit focuses on information or data during transmission across systems and components, typically across internal and external (untrusted) networks. Where information is crossing or traversing trusted and untrusted networks, the opportunity for interception, sniffing, or unauthorized access is heightened.
Data transiting from an end user endpoint (laptop, desktop, smart device, etc.) on
the Internet to a web-facing service in the cloud
Data moving between machines within the cloud (including between different
cloud services), for example, between a web virtual machine and a database
Data traversing trusted and untrusted networks (cloud- and non-cloud-based networks)
Typically, the Cloud Architect is responsible for reviewing how data in transit will be protected or secured at the design phase. Special consideration should be given to how the cloud will integrate, communicate, and allow for interoperability across boundaries and hybrid technologies. Once implemented, the ongoing management of data in transit rests on the correct application of security controls, including the relevant cryptography processes to handle key management.
Perhaps the best-known uses of cryptography for the data in transit scenario are Secure Sockets Layer (SSL) and Transport Layer Security (TLS). TLS provides a transport layer encrypted "tunnel" between email servers or mail transfer agents (MTAs), while SSL certificates encrypt private communications over the Internet using private and public keys. These cryptographic protocols have been in use for many years in the form of HTTPS, typically to provide communication security over the Internet, and they have now become the standard and de facto encryption approach for browser-to-web-host and host-to-host communications in both cloud and non-cloud environments.
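As a minimal sketch of the idea, the following Python standard-library code shows how a client might enforce TLS for data in transit: certificate validation and hostname checking guard against impersonation, and a protocol-version floor rules out legacy SSL. The host and port are illustrative, and this is one possible client-side approach, not a prescribed mechanism.

```python
import socket
import ssl

def make_transit_context() -> ssl.SSLContext:
    # create_default_context() enables certificate validation and
    # hostname checking by default; we also pin a TLS version floor
    # so deprecated SSL/early-TLS versions are never negotiated.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

def open_tls_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    # Wraps a plain TCP socket in an encrypted TLS "tunnel" so data
    # in transit cannot be read or modified by intermediaries.
    context = make_transit_context()
    sock = socket.create_connection((host, port))
    return context.wrap_socket(sock, server_hostname=host)
```

A caller would then use `open_tls_channel("example.com")` and treat any handshake failure as a reason to abort, never to fall back to plaintext.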
A growing number of cloud-based providers use multiple layers of encryption, coupled with the ability for users to encrypt their own data at rest within the cloud environment. The use of asymmetric cryptography for key exchange, followed by symmetric encryption for content confidentiality, is also increasing. This approach looks to bolster and enhance standard encryption levels and strengths. Additionally, IPsec, which has been used extensively, is another transit encryption protocol widely adopted for VPN tunnels; it makes use of cryptographic algorithms such as 3DES and AES.
Data at Rest
Data at rest focuses on information or data while stagnant or at rest (typically not in use) within systems, networks, or storage volumes. When data is at rest, appropriate and suitable security controls need to be applied to ensure the ongoing confidentiality and integrity of information.
Encryption of stored data, or data at rest, continues to gain traction for both cloud-based and non-cloud-based environments. The Cloud Architect is typically responsible for the design and assessment of encryption algorithms for use within cloud environments. Of key importance, both for security and for performance, is how encryption is deployed and implemented on the target hosts and platforms. Selecting and testing encryption prior to deployment is therefore an essential step, because encryption can impact performance.
User interface (UI) response times and processor capabilities can fall to between a quarter and half of those seen in an unencrypted environment, varying with the type, strength, and algorithm of the encryption. In high-performing environments with significant processor and utilization requirements, encryption of data at rest may therefore not be utilized as standard.
Encryption of data at rest assures organizations that opportunities for unauthorized access or viewing of data, through information spills or residual data, are further reduced.
Note that when information is encrypted on the cloud provider's side, it may prove challenging to obtain or extract your data in the event of discrepancies or disputes with the provider.
Key Management
In traditional banking environments, it took two people with keys to the safe to open it; this led to a reduced number of thefts, crimes, and bank robberies. Encryption, as with bank processes, should never be handled or addressed by a single person.
Encryption and segregation of duties should always go hand in hand. Key management should be separated from the provider hosting the data, and the data owners should be positioned to make decisions (these may be in line with organizational policies). Ultimately, data owners should be in a position to apply encryption, control and manage key management processes, select the storage location for the encryption keys (an isolated on-premises location is typically the best security option), and retain ownership of and responsibility for key management.
The Importance of Key Management
First, from a security perspective, you remove the dependency on, or assumption that, the cloud provider is handling the encryption processes and controls correctly. Second, you are not bound or restricted by shared keys or data spillage within the cloud environments, as you have a unique and separate encryption mechanism to apply an additional level of security and confidentiality at the data and transport levels.
Common Approaches to Key Management
For cloud computing key management services, the following two approaches are most
commonly utilized:
Remote Key Management Service: This is where the customer maintains the Key Management Service (KMS) on-premises. Ideally, the customer will own, operate, and maintain the KMS, so that the customer controls information confidentiality while the cloud provider focuses on the hosting, processing, and availability of services. Note that hybrid connectivity is required between the cloud provider and the cloud customer in order for encryption and decryption to function.
Client-Side Key Management: Similar to the remote key management approach, the client-side approach looks to put the customer or cloud user in complete control of the encryption and decryption keys. The main difference is that most of the processing and control is done on the customer side. The cloud provider supplies the KMS, but the KMS resides on the customer's premises, where keys are generated, held, and retained by the customer. Note that this approach is typically utilized for SaaS cloud deployments.
IAM and Access Control
As with most areas of technology, access control is merging and aligning with other combined activities; some of these are automated using single sign-on capabilities, while others operate in a standalone, segregated fashion. The combination of access control and the effective management of those technologies, processes, and controls has given rise to Identity and Access Management (IAM). In a nutshell, Identity and Access Management includes the people, processes, and systems used to manage access to enterprise resources. This is achieved by assuring that the identity of an entity is verified (who are they, and can they prove who they are) and then granting the correct level of access based on the assets, services, and protected resources being accessed.
IAM typically looks to utilize a minimum of two, preferably three or more, factors of authentication. Within cloud environments, services should include strong authentication mechanisms for validating users' identities and credentials. In line with best practice, one-time passwords should be utilized as a risk reduction and mitigation technique.
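One widely used way to generate one-time passwords is the HOTP/TOTP pair of algorithms (RFC 4226 and RFC 6238), which many cloud services and authenticator apps implement. The sketch below is an illustration of those published algorithms using only the Python standard library; it is not the specific mechanism the text prescribes.

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over a big-endian 8-byte counter,
    # followed by dynamic truncation to a short decimal code.
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP driven by a time-step counter, so the code
    # changes every `interval` seconds and cannot be replayed later.
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval, digits)
```

Because each code is valid for a single short window, a captured password alone is not enough to authenticate, which is the risk-reduction property the text refers to.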
The key phases that form the basis and foundation for IAM in the enterprise include
the following:
Provisioning and de-provisioning
Centralized directory services
Privileged user management
Authorization and access management
Each is discussed in the following sections.
Provisioning and De-Provisioning
Provisioning and de-provisioning are critical aspects of access management—think of setting up and removing users. Just as you would set up an account for a user entering your organization who requires access to resources, provisioning is the process of creating accounts that allow users to access appropriate systems and resources within the cloud environment.

The ultimate goal of user provisioning is to standardize, streamline, and create an efficient account creation process, while creating a consistent, measurable, traceable, and auditable framework for providing access to end users.
De-provisioning is the process whereby a user account is disabled when the user no longer requires access to the cloud-based services and resources. This is not limited to a user leaving the organization; it may also be triggered by a user changing role or function within the organization.

De-provisioning is a risk-mitigation technique that ensures "authorization creep," or the retention of additional and historical privileges, does not grant access to data, assets, and resources that are not necessary to fulfill the job role.
Centralized Directory Services
As when building a house or large structure, the foundation is key. In the world of IAM, the directory service forms the foundation for IAM and security, both in an enterprise environment and within a cloud deployment. A directory service stores, processes, and facilitates access to a structured repository of information, coupled with unique identifiers and locations.
The primary protocol in relation to centralized directory services is the Lightweight Directory Access Protocol (LDAP), built on and focused around the X.500 standard. LDAP works as an application protocol for querying and modifying items in directory service providers such as Active Directory. Active Directory is a database-based system that provides authentication, directory, policy, and other services to a network.
Essentially, LDAP acts as a communication protocol for interacting with Active Directory. LDAP directory servers store their data hierarchically (similar to DNS trees or UNIX file structures), with a directory record's Distinguished Name (DN) read from the individual entry back through the tree, up to the top level.
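The hierarchical reading of a DN can be sketched as follows. The directory entry and domain are hypothetical examples, and the naive comma split is for illustration only (a production parser must also handle escaped commas inside attribute values per the LDAP string representation rules).

```python
def dn_hierarchy(dn: str) -> list:
    # Split a Distinguished Name into its relative components; the DN
    # reads from the most specific entry back up to the top of the tree.
    # Naive split for illustration: ignores escaped commas in values.
    return [rdn.strip() for rdn in dn.split(",")]

# Hypothetical entry: user jsmith in the Engineering OU of example.com
dn = "cn=jsmith,ou=Engineering,dc=example,dc=com"
```

Here `cn=jsmith` is the leaf entry, and the trailing `dc=example,dc=com` components identify the top of the tree, mirroring how DNS names narrow from right to left.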
Each entry in an LDAP directory server is identified by its DN. Access to directory services should be part of the Identity and Access Management solution and should be as robust as the core authentication modes used.
The use of Privileged Identity Management features is strongly encouraged for managing access of the administrators of the directory. If the directory servers are hosted locally rather than in the cloud, the IAM service will require connectivity to the local LDAP servers, in addition to any applications and services for which it is managing access.
Within cloud environments, directory services are heavily utilized and depended upon by the Identity and Access Management framework as the "go to" trusted source and security repository of identity and access information. Again, trust and confidence in the accuracy and integrity of the directory services is a must.
Privileged User Management
As the name implies, Privileged User Management focuses on the process and ongoing requirements for managing the lifecycle of user accounts with the highest privileges in a system. Privileged accounts typically carry the highest risk and impact, as compromised privileged user accounts can yield significant permissions and access rights, allowing the user or attacker to access resources and assets that may negatively impact the organization.
The key components of Privileged User Management, from a security perspective, should at a minimum include the ability to track usage, authentication successes and failures, and authorization times and dates; log successful and failed events; enforce password management; and provide sufficient levels of auditing and reporting related to privileged user accounts.
Many organizations monitor this level of information for "standard" or general users, where it would be beneficial and useful in the event of an investigation; however, privileged accounts should capture this level of detail by default, as attackers often compromise a general or standard user with a view to escalating privileges to a more privileged or admin account. Although a number of these components are technical by nature, the overall requirements used to manage them should be driven by organizational policies and procedures.
Note that segregation of duties can form an extremely effective mitigation and risk-reduction technique around privileged users and their ability to effect major changes.
Authorization and Access Management
Access to devices, systems, and resources forms a key driver for the use of cloud services (broad network access); without it, we reduce the overall benefits that the service may provide to the enterprise and isolate legitimate business or organizational users from their resources and assets.
In the same way that users require authorization and access management to be operating and functioning in order to access the required resources, security also requires these service components to be functional, operational, and trusted in order to enforce security within cloud environments.

In its simplest form, authorization determines the user's right to access a certain resource (think of boarding a plane with your reserved seat, or visiting an official residence or government agency to see a specified person).
When we talk about access management, we focus on the manner in which users can access relevant resources, based on their credentials and the characteristics of their identity (think of a bank or highly secure venue—only certain employees or personnel can access the main safe or highly sensitive areas).
Note that both authorization and access management are "point-in-time" activities that rely on the accuracy and ongoing availability of resources and functioning processes (segregation of duties, privileged user management, password management, and so on) to operate and provide the desired levels of security. If any of these activities is not carried out regularly as part of an ongoing managed process, the overall security posture can be weakened.
Data and Media Sanitization
By their nature, cloud-based environments typically host multiple types, structures, and components of data among various resources, components, and services for users to access. Should you wish to leave or migrate from one cloud provider to another, this may be possible with little hassle, although other entities have experienced significant challenges in removing and exporting large amounts of structured data from one provider to another. This is where vendor lock-in and interoperability come to the fore. Data and media sanitization also needs to be considered by the CCSP. The ability to safely remove all data from a system or media, rendering it inaccessible, is critical to ensuring confidentiality and to managing a secure lifecycle for data in the cloud.
Vendor Lock-In
Vendor lock-in highlights where a customer may be unable to leave, migrate, or transfer to an alternate provider due to technical or non-technical constraints. Typically, this is because the technology, platforms, or system design is proprietary, or because of a dispute between the provider and the customer. Vendor lock-in poses a very real risk for an organization that may not be in a position to leave its current provider or to continue business operations and services. Vendor lock-in is also covered later in this book.
Additionally, where a specific proprietary service or structure has been used to store vast amounts of your information, it may not support intelligent export into a structured format. For example, how many organizations would be pleased with 100,000 records being exported into a flat text file? Open APIs are being strongly championed as a mechanism to reduce this challenge.
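The flat-file problem can be made concrete with a small sketch. The record fields below are invented for illustration; the point is that a flat text dump discards field names and types, while a structured export such as JSON preserves them, so a new provider or system can re-import the data without guesswork.

```python
import json

# Hypothetical customer records held by a cloud service
records = [
    {"id": 1, "name": "Alice", "region": "EU"},
    {"id": 2, "name": "Bob", "region": "US"},
]

# Flat text export: values survive, but field names and types are lost,
# so re-importing requires reverse-engineering the layout.
flat = "\n".join(" ".join(str(v) for v in r.values()) for r in records)

# Structured export: field names travel with the data, so the dataset
# can be re-imported losslessly by the destination provider.
structured = json.dumps(records, indent=2)
restored = json.loads(structured)
```

This is the essence of the portability argument for open, structured export formats and open APIs: `restored` is identical to the original records, whereas `flat` requires manual reconstruction.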
Aside from the hassle and general issues associated with reconstructing and formatting large datasets into a format that can be imported and integrated into a new cloud service or cloud service provider, the challenge of secure deletion, or the sanitization of digital media, remains a largely unsolved issue among cloud providers and cloud customers alike.
Most organizations have failed to assess or factor in this challenge in the absence of a cloud computing strategy, and ultimately many have not yet put highly sensitive or regulated data in cloud-based environments. This is likely to change with the shift toward "compliant clouds" and cloud-based environments aligned with certification standards such as ISO 27001/2, SOC 2, and PCI DSS, among other international frameworks.
In the absence of degaussing, which is not a practical or realistic option for cloud environments, rendering data unreadable should be the first option taken (assuming the physical destruction of storage is not feasible). Adopting a security mindset, if we can restrict the availability, integrity, and confidentiality of the data, making the information unreadable acts as the next best method to secure deletion. How might this be achieved in cloud-based environments?
Cryptographic Erasure
A fairly reliable way to sanitize a device is to erase or overwrite the data it contains. With recent developments in storage devices, most now contain built-in sanitize commands that enable users and custodians to sanitize media in a simple and convenient way. While these commands are mostly effective when implemented and initiated correctly, as with all technological commands, it is essential to verify their effectiveness and accuracy.
Where possible (this may not apply to all cloud-based environments), erase each
block, overwrite all with a known pattern, and erase them again.
When done correctly, a complete erasure of the storage media eliminates risks related to key recovery (where keys are stored locally—yes, this is a common mistake), side-channel attacks on the controller to recover information about the destroyed key, and future attacks on the cryptosystem. Note that key destruction on its own is not a comprehensive approach, as the key may be recovered using forensic techniques.
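The underlying idea of cryptographic erasure (sometimes called crypto-shredding) can be shown with a deliberately simplified sketch: data is stored only in encrypted form, so destroying every copy of the key renders the ciphertext unrecoverable without having to locate and wipe every replica. The toy SHA-256 counter keystream below exists purely to keep the example self-contained; a real deployment would use a vetted cipher such as AES-GCM and an isolated, customer-controlled key store, as discussed earlier under key management.

```python
import hashlib
import secrets

def _keystream(key: bytes, length: int) -> bytes:
    # Toy counter-mode keystream for illustration only -- NOT a vetted
    # cipher; use AES-GCM or similar in any real system.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

key = secrets.token_bytes(32)          # generated and held by the data owner
record = b"regulated customer record"
ciphertext = xor_bytes(record, key)    # what the cloud provider stores

# Cryptographic erasure: destroy every copy of the key, and the stored
# ciphertext (including backups and replicas) becomes unrecoverable.
key = None
```

As the text notes, key destruction must itself be verifiable and complete; a key recoverable through forensics defeats the whole scheme.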
Data Overwriting
While not inherently secure or rendering the data irretrievable, overwriting data multiple times can make the task of retrieval far more complex, challenging, and time-consuming. This technique may not be sufficient if you are hosting highly sensitive, confidential, or regulated information within cloud deployments.
When deleting les and data, they will become “invisible” to the user; however, the
space that they inhabit in the storage media is made available for other information and
data to be written to by the system and storage components as part of normal usage of the
storage media. The challenge and risk with this is that forensic investigators and relevant
toolsets can retrieve this information in a matter of minutes, hours, or days.
Where possible, overwriting data multiple times will extend the time and effort required to retrieve the relevant information and may make the storage components or partitions "unattractive" to potential attackers or those focused on retrieving the information.
Warning: Given enough time, effort, and resources, and in the absence of degaussing, these approaches may not be sufficient to prevent a determined attacker or reviewer from retrieving the relevant information. What they may do is dissuade, or make the task too challenging for, a novice, intermediate, or opportunist attacker, who may decide to target easier locations or storage media.
Virtualization Security
Virtualization technologies enable cloud computing to become a real and scalable
service offering due to the savings, sharing, and allocations of resources across multiple
tenants and environments. As with all enabling technologies, the specified deployment
and manner in which the solution is deployed may allow attackers to target relevant com-
ponents and functions with the view to obtaining unauthorized access to data, systems,
and resources.
In the world of cloud computing, virtualization represents one of the key targets for
the attackers. Specifically, while virtualization may introduce technical vulnerabilities
based on the solution, the single most critical component to enable the technology to
function in the manner for which it was developed, along with enforcing the relevant
technical and non-technical security controls, is the hypervisor.
The Hypervisor
The role of the hypervisor is a simple one—to allow multiple operating systems (OS) to
share a single hardware host (with each OS appearing to have the host’s processor, memory,
and resources to itself).
Think of a management console, and effectively this is what the hypervisor does—
intelligently controlling the host processor and resources, prioritizing and allocating what
is needed to each operating system, while ensuring there are no crashes and the neigh-
bors do not upset each other.
Now we will go a little deeper with the goal of discussing the security elements associ-
ated with virtual machines.
Type I Hypervisor: There are many differing accounts, definitions, and versions of
what the distinction between Type I and Type II hypervisors is (and is not), but
with the view to keeping it simple, we will refer to a Type I hypervisor as one
running directly on the hardware, with VM resources provided by the hypervisor.
These are also referred to as “bare metal” hypervisors. Examples of these include
VMware ESXi and Citrix XenServer.
Type II Hypervisor: Type II hypervisors run on a host operating system to provide
virtualization services. Examples of Type II are VMware Workstation and Micro-
soft Virtual PC.
In summary, Type I = Hardware, Type II = Operating System.
DOMAIN 1: Architectural Concepts and Design Requirements
Security Types
From a security perspective, we look to see which of the hypervisors will provide a more
robust security posture and which will be more targeted by attackers.
Type II Security: Because Type II hypervisors are operating system–based,
they are more attractive to attackers, given that there are far more
vulnerabilities associated with the OS, as well as with other applications that
reside within the OS layer.
A lack of standardization on the OS and other layers could also open up additional
opportunities and exposures that could make the hypervisor susceptible to
attack and compromise.
Type I Security: Type I hypervisors significantly reduce the attack surface over
Type II hypervisors. Type I hypervisor vendors also control the relevant software that
comprises and forms the hypervisor package, including the virtualization functions
and OS functions, such as device drivers and I/O stacks.
With the vendors having control over the relevant packages, they can
reduce the likelihood of malicious software being introduced into the hypervisor
foundation and exposing the hypervisor layer.
The limited access and strong control over the embedded OS greatly increase the
reliability and robustness of Type I hypervisors.
Where technology, hardware, and software standardization can be used effectively,
this can significantly reduce the risk landscape and increase the security posture.
Common Threats
Threats form a real and ever-evolving challenge for organizations to counteract and
defend against. Whether they are cloud specific or general disruptions to business and
technology, threats can cause significant issues, outages, poor performance, and
catastrophic impacts should they materialize.
Many of the top risks identified in the Cloud Security Alliance’s “The Notorious
Nine: Cloud Computing Top Threats in 2013” research paper, as noted here,
remain a challenge for non-cloud-based environments and organizations alike. What this
illustrates is the consistent set of challenges faced by entities today, altered and amplified by
different technology deployments, such as cloud computing.17
Data Breaches
Not new to the security practitioner and company leaders, this age-old challenge continues
to dominate headlines and news stories around the world. Whether it is a lost laptop
that is unencrypted or side-channel timing attacks on virtual machines, what cloud
computing has done is widen the scope and coverage for data breaches.
Given the nature of cloud deployments and multi-tenancy, virtual machines, shared
databases, application design, integration, APIs, cryptography deployments, key
management, and multiple locations of data all combine to provide a highly amplified and
dispersed attack surface, leading to greater opportunity for data breaches.
Given the rise of smart devices, tablets, increased workforce mobility, BYOD, and
other factors (such as the historical challenge of lost devices, compromised systems, and
traditional forms of attack), coupled with the previously listed factors related to the cloud,
Cloud Security Professionals can expect to face far more data breaches and loss of
organizational and personal information as the adoption of the cloud and further use of
mobile devices continue to increase.
Note that depending on the data and information classification types, any data
breaches or suspected breaches of systems security controls may require mandatory
breach reporting to relevant agencies, entities, or bodies; for example, healthcare
information (HIPAA), personal information (European data protection law), and credit card
information (PCI DSS). Significant fines may be imposed on organizations that cannot illustrate
that sufficient duty of care or security controls were implemented to prevent such data
breaches. These vary greatly depending on the industry, sector, geographic location, and
nature of the information.
Data Loss
Not to be confused with data breaches, data loss refers to the loss, deletion,
overwriting, or corruption of information, or the loss of its integrity, where that
information is stored, processed, or transmitted within cloud environments.
Data loss within cloud environments can present a significant threat and challenge to
organizations. The reasons for this include questions such as the following:
Does the provider/customer have responsibility for data backup?
In the event that backup media containing the data is obtained, does this include
all data or only a portion of the information?
Where data has become corrupt or overwritten, can an import or restore be performed?
Where accidental data deletion has occurred from the customer side, will the
provider facilitate the restoration of systems and information in multi-tenancy
environments or on shared platforms?
Note that when the customer uploads encrypted information to the cloud environ-
ment, the encryption keys become a critical component to ensure data is not lost and
remains available. The loss of the relevant encryption keys constitutes data loss, as the
information will no longer be available for use in the absence of the keys.
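The point can be illustrated with a toy one-time-pad example in Python (illustrative only; a real deployment would use a vetted cipher and a key-management service rather than this sketch):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR; the same operation encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"customer record"
key = secrets.token_bytes(len(plaintext))  # must be retained to recover the data

ciphertext = xor_bytes(plaintext, key)
assert xor_bytes(ciphertext, key) == plaintext  # with the key, data is available

key = None  # key destroyed or lost: the ciphertext alone is now unrecoverable
```

Once the key is gone, the ciphertext carries no recoverable information — which is exactly why key loss constitutes data loss.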
Security can, from time to time, come back to haunt us if it is not owned, operated,
and maintained effectively and efficiently.
Account or Service Traffic Hijacking
This is not a cloud-specific threat, but one that has been a constant thorn and challenge
for relevant security professionals to combat through the years. Account and service
traffic hijacking has long been targeted by attackers, using methods such as phishing,
more recently smishing (SMS phishing), spear phishing (targeted phishing attacks), and
exploitation of software and other application-related vulnerabilities.
When successful, these attack methods allow attackers to monitor and eavesdrop
on communications, sniff and track traffic, and capture relevant credentials,
through to accessing and altering account and user profile characteristics
(changing passwords, and so on).
Of late, attackers are utilizing compromised systems, accounts, and domains as a
“smokescreen” to launch attacks against other organizations and entities, making the
source of the attack appear to be suppliers, third parties, competitors, or other legitimate
organizations that have no knowledge or awareness of having been compromised.
Insecure Interfaces and APIs
In order for users to access cloud computing assets and resources, they utilize the APIs
made available by the cloud provider. Key functions of the APIs, including the provision-
ing, management, and monitoring, are all performed utilizing the provider interfaces.
In order for the security controls and availability of resources to function in the way that
they were designed, use of the provider APIs is required to protect against deliberate and
accidental attempts to circumvent policies and controls.
Sounds simple enough, right? In an ideal world, that may be true, but in the modern
and evolving cloud landscape, the challenge is amplified by relevant third parties,
organizations, and customers (depending on deployment) building additional interfaces
and “bolt-on” components to the API, which significantly increases the complexity,
resulting in a multi-layered API. This can result in credentials being passed to third parties or
consumed insecurely across the API and relevant stack components.
Note that most providers make concerted efforts to ensure the security of their inter-
faces and APIs; however, any variations or additional components added on from the con-
sumer or other providers may reduce the overall security posture and stance.
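One widely used pattern for limiting the exposure of raw credentials across layered interfaces is to sign each request with a shared secret instead of transmitting the secret itself. The following Python sketch shows the HMAC idea in outline; the header names, path, and key are illustrative assumptions and do not correspond to any specific provider’s API:

```python
import hashlib
import hmac
import time

SECRET_KEY = b"example-shared-secret"  # illustrative only; never hardcode real keys

def sign_request(method: str, path: str, secret: bytes) -> dict:
    """Build headers carrying a keyed signature instead of the secret itself."""
    timestamp = str(int(time.time()))
    message = f"{method}\n{path}\n{timestamp}".encode()
    signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(method: str, path: str, headers: dict, secret: bytes,
                   max_skew: int = 300) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False  # stale timestamp; limits the replay window
    message = f"{method}\n{path}\n{headers['X-Timestamp']}".encode()
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])

headers = sign_request("GET", "/v1/instances", SECRET_KEY)
assert verify_request("GET", "/v1/instances", headers, SECRET_KEY)
# Tampering with the request invalidates the signature.
assert not verify_request("DELETE", "/v1/instances", headers, SECRET_KEY)
```

Because only the signature crosses intermediate layers, a “bolt-on” component that logs or forwards the headers never sees the secret itself.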
Denial of Service
By their nature, denial-of-service (DoS) attacks prevent users from accessing services and
resources from a specified system or location. This can be done using any number of
available attack vectors, which typically target buffers, memory, network bandwidth,
or processor power.
With cloud services relying ultimately on availability to service and enable connectivity
to resources for customers, denial-of-service attacks targeted at cloud environments
can create significant challenges for the provider and customer alike.
Distributed denial-of-service (DDoS) attacks are launched from multiple locations
against a single target. Work with the Cloud Security Architect to ensure that system
design and implementation does not create a Single Point of Failure (SPOF) that can
expose an entire system to failure if a DoS or DDoS attack is successfully launched
against a system.
Note that while widely touted by the media and feared by organizations worldwide,
denial-of-service attacks are often believed to require large volumes of traffic in order to be
successful. This is not always the case: asymmetric application-level payload attacks
have had measured success with as little as 100–150 Kbps.
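Application-level throttling is one of the mitigations for such low-bandwidth, asymmetric attacks. A minimal token-bucket rate limiter might be sketched as follows (a simplified illustration, not a complete DoS defense):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should reject, delay, or queue the request

bucket = TokenBucket(rate=10, capacity=5)
results = [bucket.allow() for _ in range(20)]  # a rapid burst of 20 requests
assert results[:5] == [True] * 5  # the burst capacity admits the first five
```

Per-client buckets of this shape cap how many expensive application-level requests any one source can issue, regardless of how little bandwidth each request consumes.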
Malicious Insiders
When looking to secure the key assets of any organization, three primary components
are essential—people, processes, and technology. People tend to present the single larg-
est challenge to security due to the possibility of a disgruntled, rogue, or simply careless
employee or contractor exposing sensitive data either by accident or on purpose.
According to CERT, “A malicious insider threat to an organization is a current or
former employee, contractor, or other business partner who has or had authorized access
to an organization’s network, system, or data and intentionally exceeded or misused that
access in a manner that negatively affected the confidentiality, integrity, or availability of
the organization’s information or information systems.”18
Abuse of Cloud Services
Think of the ability to have previously unobtainable and unaffordable computing
resources available for a couple of dollars an hour. Well, that is exactly what cloud computing
provides—an opportunity for businesses to have almost unlimited scalability and
flexibility. The challenge for many organizations is that this scalability and flexibility is
provided across the same platforms and resources that attackers are able to access and
use to execute dictionary attacks, launch denial-of-service attacks, crack encryption
passwords, or host illegal software and materials for widespread distribution. Note that the
power of the cloud is not always used in the manner for which it is offered to users.
Insufficient Due Diligence
Cloud computing has created a revolution among many users and companies with
regard to how they utilize technology-based solutions and architectures. As with many
such technology changes and revolutions, some have acted before giving the appropriate
thought and due care to what a secure architecture would look like and what would be
required to implement one.
Cloud computing has, for many organizations, become that “rash” decision—
intentionally or unintentionally. The change in roles, focus, governance, auditing,
reporting, strategy, and other operational elements requires a considerable investment
on the part of the business in a thorough risk-review process, as well as amendments to
business processes.
Given the immaturity of the cloud computing market, many entities and providers
are still altering and refining the way they operate. There will be acquisitions, changes,
amendments, and revisions in the way in which entities offer services, which can impact
both customers and partners.
Finally, when the dust settles in the race for “cloud space,” pricing may vary significantly,
rates and offerings may be reduced or inflated, and cyber-attacks could force
customers to review and revise their selection of a cloud provider. Should your provider
go bankrupt, are you in a position to change cloud providers in a timely and seamless
manner?
It is incumbent upon the Cloud Security Professional to ensure that both due care
and due diligence are being exercised in the drive to the cloud.
Due diligence is the act of investigating and understanding the risks a
company faces.
Due care is the development and implementation of policies and procedures to
aid in protecting the company, its assets, and its people from threats.
Note that cloud companies may merge, be acquired, go bust, change services, and
ultimately change their pricing model. Those that failed to carry out the appropriate due
diligence activities may in fact be left with nowhere to go or turn, unless they introduce
compensating controls to offset such risks (potentially resulting in less financial benefit).
Shared Technology Vulnerabilities
For cloud service providers to effectively and efficiently deliver their services in a scalable
way, they share infrastructure, platforms, and applications among tenants and potentially
with other providers. This can include the underlying components of the infrastructure,
resulting in shared threats and vulnerabilities.
Where possible, providers should implement a layered approach to securing the
various components, and a defense-in-depth strategy should include compute, storage,
network, application, and user security enforcement and monitoring. This should be uni-
versal, regardless of whether the service model is IaaS, PaaS, or SaaS.
Security Considerations for Different Cloud Categories
Security can be a subjective issue, viewed differently across different industries,
companies, and users, based on their needs, desires, and requirements. Many of these actions
and security appetites are strongly influenced by compliance and other regulatory
requirements.
Infrastructure as a Service (IaaS) Security
Within IaaS, a key emphasis and focus must be placed on the various layers and
components stemming from the architecture through to the virtual components. Given the
reliance placed on the widespread use of virtualization and the associated hypervisor
components, these must be a key focus, as they present an attack vector that can be used
to gain access to or disrupt a cloud service.
The hypervisor acts as the abstraction layer that provides the management functions
for required hardware resources among VMs.
Virtual Machine Attacks: Cloud servers contain tens of VMs. These VMs may be
active or offline and, regardless of state, are susceptible to attacks. Active VMs are
vulnerable to all traditional attacks that can affect physical servers.
Once a VM is compromised, the VMs on the same physical server may be able
to attack each other, because the VMs share the same hardware and software
resources, for example, memory, device drivers, storage, and hypervisor software.
Virtual Network: The virtual network contains the virtual switch software that
controls the multiplexing of traffic between the virtual NICs of the installed VMs and
the physical NICs of the host.
Hypervisor Attacks: Hackers consider the hypervisor a potential target because
of the greater control afforded by lower layers in the system. Compromising the
hypervisor enables gaining control over the installed VMs, the physical system,
and the hosted applications.
Typical and common attacks include HyperJacking (installing a rogue hypervisor
that can take complete control of a server) such as SubVir, BLUEPILL (hypervisor
rootkit using AMD SVM), Vitriol (hypervisor rootkit using Intel VT-x), and DKSM.
Another common attack is the VM escape, which is done by crashing the guest
OS to get out of it and then running arbitrary code on the host OS. This allows
malicious VMs to take complete control of the host OS.
VM-Based Rootkits (VMBRs): These rootkits act by inserting a malicious hypervisor
on the fly or by modifying the installed hypervisor to gain control over the host
workload. In some hypervisors, such as Xen, the hypervisor is not alone in
administering the VMs.
A special privileged VM serves as an administrative interface to Xen and controls
the other VMs.
Virtual Switch Attacks: The virtual switch is vulnerable to a wide range of layer 2
attacks, just like a physical switch. These attacks can target virtual switch
configurations, VLANs and trust zones, and ARP tables.
Denial-of-Service (DoS) Attacks: Denial-of-service attacks in a virtual environment
form a critical threat to VMs, along with all other dependent and associated
services. Note that not all DoS attacks are from external attackers.
These attacks can be the direct result of misconfigurations at the hypervisor,
which allow a single VM instance to consume and utilize all available resources.
In the same manner as a DoS attack renders resources unavailable to users
attempting to access them, misconfigurations at the hypervisor starve any
other VM running on the same physical machine. This prevents network hosts
from functioning appropriately due to the resources being consumed and utilized
by a single device.
Note that hypervisors prevent any VM from gaining 100% usage of any shared
hardware resource, including CPU, RAM, network bandwidth, and other memory.
Appropriately configured hypervisors detect instances of resource “hogging”
and take appropriate actions, such as restarting the VM in an effort to stabilize or
halt any processes that may be causing the abuse.
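In outline, such a fair-share check could be sketched as follows; the VM names, usage figures, and threshold are purely illustrative, as production hypervisors enforce these limits internally:

```python
# Hypothetical per-VM CPU usage sample, as a fraction of total host CPU.
vm_cpu_usage = {"vm-a": 0.22, "vm-b": 0.91, "vm-c": 0.10}

FAIR_SHARE_CAP = 0.50  # no single tenant may consume more than half the host

def find_hogs(usage: dict, cap: float) -> list:
    """Return VMs exceeding their fair share: candidates for throttling or restart."""
    return sorted(vm for vm, load in usage.items() if load > cap)

hogs = find_hogs(vm_cpu_usage, FAIR_SHARE_CAP)
assert hogs == ["vm-b"]  # vm-b would be throttled or restarted
```

A real hypervisor applies the equivalent logic continuously, per resource type, before any one guest can exhaust the shared hardware.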
Co-Location: Multiple VMs residing on a single server and sharing the same
resources increase the attack surface and the risk of VM-to-VM or VM-to-hypervisor
compromise. On the other hand, when a physical server is off, it is safe from attacks.
However, when a VM goes offline, it is still available as VM image files that are
susceptible to malware infections and missed patching.
Provisioning tools and VM templates are exposed to different attacks that attempt to
create new unauthorized VMs or patch the VM templates. This will infect the other VMs
that will be cloned from this template.
These new categories of security threats are a result of the new, complex, and
dynamic nature of the cloud virtual infrastructure, as follows:
Multi-Tenancy: Different users within a cloud share the same applications and
the physical hardware to run their VMs. This sharing can enable information
leakage exploitation and increase the attack surface and the risk of VM-to-VM or
VM-to-hypervisor compromise.
Workload Complexity: Server aggregation duplicates the amount of workload
and network traffic that runs inside the cloud physical servers, which increases the
complexity of managing the cloud workload.
Loss of Control: Users are not aware of the location of their data and services,
while cloud providers run VMs without being aware of their contents.
Network Topology: The cloud architecture is very dynamic, and the existing
workload changes over time because of the creation and removal of VMs. In addition,
the mobile nature of VMs, which allows them to migrate from one server to
another, leads to a non-predefined network topology.
Logical Network Segmentation: Within IaaS, the requirement for isolation
alongside the hypervisor remains a key and fundamental activity to reduce external
sniffing, monitoring, and interception of communications within
the relevant segments.
When assessing relevant security configurations and connectivity models, VLANs,
NATs, bridging, and segregation provide viable options to ensure the overall
security posture remains strong while keeping flexibility and performance
constant, as opposed to other mitigation controls that may impact overall
performance.
No Physical Endpoints: Due to server and network virtualization, the number of
physical endpoints (e.g., switches, servers, NICs) is reduced. These physical endpoints
are traditionally used in defining, managing, and protecting IT assets.
Single Point of Access: Virtualized servers have a limited number of access points
(NICs) available to all VMs.
This represents a critical security vulnerability where compromising these access
points opens the door to compromise the VCI, including VMs, hypervisor, or the
virtual switch.
The Cloud Security Alliance Cloud Controls Matrix (CCM) provides a good
“go-to guide” for specific risks for SaaS, PaaS, and IaaS.19
Platform as a Service (PaaS) Security
PaaS security involves four main areas, each of which is discussed in the following sections.
System/Resource Isolation
PaaS tenants should not have shell access to the servers running their instances (even
when virtualized). The rationale behind this is to limit the chance and likelihood of
configuration or system changes impacting multiple tenants. Where possible, administration
facilities should be restricted to siloed containers to reduce this risk.
Careful consideration should be given before access is provided to the underlying
infrastructure hosting a PaaS instance. In enterprises, this may have less to do with
malicious behavior and more to do with efficient cost control; it takes time and effort to
“undo” tenant-related “fixes” to their environments.
User-Level Permissions
Each instance of a service should have its own notion of user-level entitlements (per-
missions). In the event that the instances share common policies, appropriate counter-
measures and controls should be enabled by the Cloud Security Professional to reduce
authorization creep or the inheritance of permissions over time.
However, it is not all a challenge: the effective implementation of distinct and
common permissions can yield significant benefits when implemented across multiple
applications within the cloud environment.
User Access Management
User access management enables users to access IT services, resources, data, and other
assets. Access management helps to protect the confidentiality, integrity, and availability
of these assets and resources, ensuring that only those authorized to use or access them
are permitted access.
In recent years, traditional “standalone” access control methods have become less
utilized, with more holistic approaches to unifying the authentication of users becoming
favored (this includes single sign-on). In order for user access management processes and
controls to function effectively, a key emphasis is placed on the agreement and
implementation of the rules and organizational policies for access to data and assets.
The key components of user access management include but aren’t limited to the following:
Intelligence: The business intelligence for UAM requires the collection, analysis,
auditing, and reporting against rule-based criteria, typically based on organiza-
tional policies.
Security Considerations for Different Cloud Categories 53
Administration: The ability to perform onboarding or changing account access
on systems and applications.
These solutions or toolsets should enable automation of tasks that were typically
or historically performed by personnel within the operations or security function.
Authentication: Provides assurance and verification in real time that users are
who they claim to be, accompanied by relevant credentials (such as
usernames and passwords).
Authorization: Determines the level of access to grant each user based on policies,
roles, rules, and attributes. The principle of least privilege should always
be applied (i.e., grant only what is specifically required to fulfill the job function).
Note that user access management enables organizations to realize benefits across the
areas of security, operational efficiencies, user administration, auditing, and reporting,
along with other onboarding components; however, it can be difficult to implement for
historical components or environments.
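The authorization component described above, with least privilege applied, can be sketched as a simple role-to-permission mapping; the roles and permission names here are illustrative assumptions, not any particular product’s model:

```python
# Illustrative role-based access control: each role grants only the
# permissions required for that job function (least privilege).
ROLE_PERMISSIONS = {
    "auditor":  {"report:read"},
    "operator": {"vm:start", "vm:stop", "report:read"},
    "admin":    {"vm:start", "vm:stop", "vm:delete", "user:manage", "report:read"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly includes the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("operator", "vm:stop")
assert not is_authorized("auditor", "vm:delete")   # auditors cannot alter VMs
assert not is_authorized("intern", "report:read")  # unknown roles get nothing
```

Note the default-deny behavior: a role or permission that is not explicitly listed is refused, which is the safe failure mode least privilege requires.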
Protection Against Malware/Backdoors/Trojans
Traditionally, development and other teams create backdoors to enable administrative
tasks to be performed.
The challenge with these is that once backdoors are created, they provide a constant
vector for attackers to target and potentially gain access to the relevant PaaS resources.
We have all heard the story where attackers gained access through a backdoor, only to
create additional backdoors, while removing the “legitimate” backdoors, essentially
holding the systems, resources, and associated services “hostage.”
More recently, embedded and hardcoded malware has been utilized by attackers as
a method of obtaining unauthorized access and retaining this access for a prolonged and
extended period. Most notably, malware was placed in point-of-sale devices, handheld
card-processing devices, and other platforms, thereby divulging large amounts of sensitive
data (including credit card numbers, customer details, and so on).
As with SaaS, web application and development reviews should go hand in hand.
Code reviews and other SDLC checks are essential to ensure that the likelihood of
malware, backdoors, Trojans, and other potentially harmful vectors is reduced significantly.
Software as a Service (SaaS) Security
SaaS security involves three main areas, each of which is discussed in the following sections.
Data Segregation
Multi-tenancy is one of the major characteristics of cloud computing. As a result of
multi-tenancy, multiple users can store their data using the applications provided by
SaaS. Within these architectures, the data of various users will reside at the same location
or across multiple locations and sites. With the appropriate permissions, or by using attack
methods, the data of one customer may become visible to or accessible by another.
Typically, in SaaS environments, this can be achieved by exploiting code vulnera-
bilities or via injection of code within the SaaS application. If the application executes
this code without verication, then there is a high potential of success for the attacker to
access or view other customers’/tenants’ data.
A SaaS model should therefore ensure a clear segregation for each user’s data. The
segregation must be ensured not only at the physical level but also at the application
level. The service should be intelligent enough to segregate the data from different users.
A malicious user can use application vulnerabilities to hand-craft parameters that bypass
security checks and access sensitive data of other tenants.
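At the application level, segregation ultimately comes down to scoping every query to the tenant bound to the authenticated session, never to a tenant identifier supplied by the client. A minimal sketch using Python’s sqlite3 module (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (tenant_id TEXT, payload TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)", [
    ("tenant-a", "a-secret"),
    ("tenant-b", "b-secret"),
])

def fetch_records(conn, session_tenant_id: str) -> list:
    """Scope the query to the tenant bound to the authenticated session,
    not to any tenant identifier supplied in the request parameters."""
    rows = conn.execute(
        "SELECT payload FROM records WHERE tenant_id = ?",
        (session_tenant_id,),
    ).fetchall()
    return [payload for (payload,) in rows]

assert fetch_records(conn, "tenant-a") == ["a-secret"]  # never sees tenant-b's data
```

Because the tenant filter comes from the server-side session and is bound as a parameter, a hand-crafted request parameter cannot widen the query to another tenant’s rows.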
Data Access and Policies
When allowing and reviewing access to customer data, the key to structuring a
measurable and scalable approach begins with the correct identification, customization,
implementation, and repeated assessment of the security policies for accessing data.
The challenge associated with this is to map existing security policies, processes, and
standards to meet and match the policies enforced by the cloud provider. This may mean
revising existing internal policies or adopting new practices where users can only access
data and resources relevant to their job function and role.
The cloud must adhere to these security policies to avoid intrusion or unauthorized
users viewing or accessing data.
The challenge from a cloud provider perspective is to offer a solution and service that
is exible enough to incorporate the specic organizational policies put forward by the
organization, while also being positioned to provide a boundary and segregation among
the multiple organizations and customers within a single cloud environment.
Web Application Security
Because SaaS resources are required to be “always on” and availability disruptions
kept to a minimum, security vulnerabilities within the web application(s) carry
significant risk and potential impact for the enterprise. Vulnerabilities, no matter their
risk categorization, present challenges for cloud providers and customers alike. Given the
large volume of shared and co-located tenants within SaaS environments, in the event
that a vulnerability is exploited, catastrophic consequences may be experienced by the
cloud customer as well as by the service provider.
As with traditional web application technologies, cloud services rely on a robust,
hardened, and regularly assessed web application to deliver services to its users. The fun-
damental difference with cloud-based services versus traditional web applications is their
footprint and the attack surface that they will present.
In the same way that web application security assessments and code reviews are per-
formed on applications prior to release, this becomes even more crucial when dealing
with cloud services. The failure to carry out web application security assessments and
code reviews may result in unauthorized access, corruption, or other integrity issues
affecting the data, along with a loss of availability.
Finally, web applications introduce new and specific security risks that may not be
counteracted or defended against by traditional network security solutions (firewalls, IDS/
IPS, etc.), as the nature and manner in which web application vulnerabilities and exploits
operate may not be identified, or may appear legitimate, to network security devices
designed for non-cloud architectures.
Open Web Application Security Project (OWASP) Top Ten Security Threats
The Open Web Application Security Project (OWASP) has identified the ten most critical
web application security threats, which should serve as a minimum baseline for application
security assessments and testing.
The OWASP Top Ten covers the following categories:
“A1—Injection: Injection flaws, such as SQL, OS, and LDAP injection, occur
when untrusted data is sent to an interpreter as part of a command or query. The
attacker’s hostile data can trick the interpreter into executing unintended commands
or accessing data without proper authorization.
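Parameterized queries are the standard defense against such injection flaws. The following Python sqlite3 sketch (with illustrative table and data) contrasts the vulnerable and safe forms:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

hostile = "x' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query logic.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + hostile + "'").fetchall()

# Safe: the driver binds the value; the payload is treated as literal data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)).fetchall()

assert vulnerable == [("alice",)]  # injection succeeded against concatenation
assert safe == []                  # parameter binding neutralizes the payload
```

The same principle applies to OS and LDAP injection: untrusted input must reach the interpreter only as data, never spliced into the command itself.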
A2—Broken Authentication and Session Management: Application functions
related to authentication and session management are often not implemented
correctly, allowing attackers to compromise passwords, keys, or session tokens, or
to exploit other implementation flaws to assume other users’ identities.
A3—Cross-Site Scripting (XSS): XSS flaws occur whenever an application takes
untrusted data and sends it to a web browser without proper validation or escaping.
XSS allows attackers to execute scripts in the victim’s browser, which can
hijack user sessions, deface websites, or redirect the user to malicious sites.
A4—Insecure Direct Object References: A direct object reference occurs when
a developer exposes a reference to an internal implementation object, such as a
file, directory, or database key. Without an access control check or other protection,
attackers can manipulate these references to access unauthorized data.
A5—Security Misconfiguration: Good security requires having a secure configuration
defined and deployed for the application, frameworks, application server,
web server, database server, and platform. Secure settings should be defined,
implemented, and maintained, as defaults are often insecure. Additionally,
software should be kept up to date.
A6—Sensitive Data Exposure: Many web applications do not properly protect
sensitive data, such as credit cards, tax IDs, and authentication credentials. Attack-
ers may steal or modify such weakly protected data to conduct credit card fraud,
identity theft, or other crimes. Sensitive data deserves extra protection such as
encryption at rest or in transit, as well as special precautions when exchanged with
the browser.
A7—Missing Function Level Access Control: Most web applications verify
function-level access rights before making that functionality visible in the UI.
However, applications need to perform the same access control checks on the
server when each function is accessed. If requests are not verified, attackers
will be able to forge requests in order to access functionality without proper
authorization.
A8—Cross-Site Request Forgery (CSRF): A CSRF attack forces a logged-on vic-
tim’s browser to send a forged HTTP request, including the victim’s session cookie
and any other automatically included authentication information, to a vulnerable
web application. This allows the attacker to force the victim’s browser to generate
requests the vulnerable application thinks are legitimate requests from the victim.
A9—Using Components with Known Vulnerabilities: Components, such as
libraries, frameworks, and other software modules, almost always run with full
privileges. If a vulnerable component is exploited, such an attack can facilitate
serious data loss or server takeover. Applications using components with known
vulnerabilities may undermine application defenses and enable a range of possible
attacks and impacts.
A10—Unvalidated Redirects and Forwards: Web applications frequently redirect
and forward users to other pages and websites, and use untrusted data to
determine the destination pages. Without proper validation, attackers can redirect
victims to phishing or malware sites, or use forwards to access unauthorized pages.
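As an illustration of the first and third categories above, the sketch below shows the standard mitigations for injection (parameter binding) and XSS (output escaping) in Python; the table name, column names, and sample data are hypothetical.

```python
import sqlite3
import html

def find_user(conn, username):
    # A1 (Injection): pass untrusted input as a bound parameter, never by
    # string concatenation, so the driver treats it strictly as data.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

def render_greeting(untrusted_name):
    # A3 (XSS): escape untrusted data before embedding it in HTML so that
    # injected markup is displayed as text rather than executed.
    return "<p>Hello, %s</p>" % html.escape(untrusted_name)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

print(find_user(conn, "alice"))         # (1, 'alice')
print(find_user(conn, "x' OR '1'='1"))  # None -- the classic payload is inert
print(render_greeting("<script>alert(1)</script>"))
```

Parameter binding neutralizes the injection payload because the quoting is handled by the database driver, not by string assembly in application code.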
Cloud Secure Data Lifecycle
Data is the single most valuable asset for most organizations, and depending on the value
of the information to their operations, security controls should be applied accordingly.
As with systems and other organizational assets, data should have a defined and
managed lifecycle across the following key stages (Figure 1.5):
Create: New digital content is generated or existing content is modied.
Store: Data is committed to a storage repository, which typically occurs directly
after creation.
Use: Data is viewed, processed, or otherwise used in some sort of activity (not
including modification).
Share: Information is made accessible to others—users, partners, customers,
and so on.
Archive: Data leaves active use and enters long-term storage.
Destroy: Data is permanently destroyed using physical or digital means.
FigUre1.5 Key stages of the data lifecycle
The lifecycle is not a single linear operation but a series of smaller lifecycles running in
different environments. At all times, it is important to be aware of the logical and physical
location of the data in order to satisfy audit, compliance, and other control requirements.
In addition to the location of the data, it is also very important to know who is accessing
data and how they are accessing it.
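The key stages listed above can be modeled as a simple state machine. The sketch below is a minimal Python illustration; the permitted transitions are an assumed example, not a prescribed policy.

```python
from enum import Enum

class Stage(Enum):
    CREATE = "create"
    STORE = "store"
    USE = "use"
    SHARE = "share"
    ARCHIVE = "archive"
    DESTROY = "destroy"

# One plausible set of permitted transitions; a real policy engine would
# derive these from the organization's data governance rules.
TRANSITIONS = {
    Stage.CREATE: {Stage.STORE},
    Stage.STORE: {Stage.USE, Stage.SHARE, Stage.ARCHIVE, Stage.DESTROY},
    Stage.USE: {Stage.STORE, Stage.SHARE, Stage.ARCHIVE, Stage.DESTROY},
    Stage.SHARE: {Stage.USE, Stage.STORE, Stage.ARCHIVE, Stage.DESTROY},
    Stage.ARCHIVE: {Stage.STORE, Stage.DESTROY},
    Stage.DESTROY: set(),  # destruction is terminal
}

def can_transition(current, nxt):
    """Return True if moving data from `current` to `nxt` is permitted."""
    return nxt in TRANSITIONS[current]

print(can_transition(Stage.CREATE, Stage.STORE))  # True
print(can_transition(Stage.DESTROY, Stage.USE))   # False
```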
note Different devices will have specific security characteristics and/or limitations (BYOD, etc.).
Table1.2 lists a sample of information/data governance types. Note that this may vary
depending on your organization, geographic location, risk appetite, and so on.
taBLe1.2 Information/Data Governance Types
Information Classification High-level description of valuable information
categories (e.g., highly confidential, regulated).
Information Management Policies What activities are allowed for different
information types?
Location and Jurisdictional Policies Where can data be geographically located?
What are the legal and regulatory implications or
Authorizations Who is allowed to access different types of
Custodianship Who is responsible for managing the information
at the bequest of the owner?
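For example, the Authorizations row above implies a mapping between data classifications and the users cleared to access them. The following is a minimal sketch; the classification names form a hypothetical hierarchy.

```python
# Hypothetical classification levels, ordered from least to most sensitive.
CLASSIFICATIONS = ["public", "internal", "confidential", "highly_confidential"]

def may_access(user_clearance: str, data_classification: str) -> bool:
    # A user may read data classified at or below their clearance level.
    return (CLASSIFICATIONS.index(user_clearance)
            >= CLASSIFICATIONS.index(data_classification))

print(may_access("confidential", "internal"))         # True
print(may_access("internal", "highly_confidential"))  # False
```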
Business Continuity/Disaster Recovery Planning
Business continuity management is the process where risks and threats to the ongoing
availability of services, business functions, and the organization are actively reviewed and
managed at set intervals as part of the overall risk-management process. The goal is to
keep the business operating and functioning in the event of a disruption.
Disaster recovery planning is the process where suitable plans and measures are taken
to ensure that, in the event of a disaster (flood, storm, tornado, etc.), the business can
respond appropriately with a view to recovering critical and essential operations (even
if somewhat limited) to a state of partial or full service in as little time as possible.
The goal is to quickly establish, re-establish, or recover affected areas or elements of the
business following a disaster.
Note that disaster recovery and business continuity are often confused or used inter-
changeably in some organizations. Wherever possible, be sure to use the correct termi-
nology and highlight the differences between them.
Business Continuity Elements
From the perspective of the cloud customer, business continuity elements include the
relevant security pillars of availability, integrity, and confidentiality.
The availability of the relevant resources and services is often the key requirement,
along with the uptime and ability to access these on demand. Failure to ensure this
results in significant impacts, including loss of earnings, loss of opportunities, and loss of
confidence for the customer and provider.
Many security professionals struggle to keep their business continuity processes
current once they have started to utilize cloud-based services. Equally, many fail to
keep their business continuity plans up to date in terms of complete coverage of
services. This may be due to a number of factors; however, the
key component contributing to this is that business continuity is operated mainly at set
intervals and is not integrated fully into ongoing business operations. That is, business
continuity activities are performed only annually or bi-annually, which may not take into
account any notable changes in business operations (such as the cloud) within relevant
business units, sections, or systems.
Note that not all assets or services are equal! What are the key or fundamental components
required to ensure the business or service can continue to be delivered? The answer
to this question should shape and structure your business continuity and disaster recovery
plans.
Critical Success Factors
Two critical success factors for business continuity when utilizing cloud-based services
are as follows:
Understanding your responsibilities versus the cloud provider’s responsibilities:
Customer responsibilities.
Cloud provider responsibilities.
Understand any interdependencies/third parties (supply chain risks).
Order of restoration (priority)—who/what gets priority?
Appropriate frameworks/certifications held by the facility, services, and staff.
Right to audit/make regular assessments of continuity capabilities.
Communications of any issues/limited services.
Is there a need for backups to be held on-site/off-site or with another cloud provider?
Clearly state and ensure the SLA addresses which components of business conti-
nuity/disaster recovery are covered and to what degree they are covered.
Penalties/compensation for loss of service.
Recovery Time Objectives (RTO)/Recovery Point Objectives (RPO).
Loss of integrity or confidentiality (are these both covered?)
Points of contact and escalation processes.
Where failover to ensure continuity is utilized, does this maintain compliance
and ensure the same or greater level of security controls?
When changes are made that could impact the availability of services, ensure
these are communicated in a timely manner.
Data ownership, data custodians, and data processing responsibilities are
clearly defined within the SLA.
Where third parties and the key supply chain are required to maintain the availability
of services, ensure that equivalent or greater levels of security are
met, as per the agreed-upon SLA between the customer and provider.
The cloud customer should be in agreement with and fully satisfied with all of the
details relating to business continuity and disaster recovery (including recovery times,
responsibilities, etc.) prior to signing any documentation or agreements that signify
acceptance of the terms for system operation.
Where the customer is requesting amendments or changes to the relevant SLA, time
and costs associated with these changes are typically to be paid for by the customer.
Important SLA Components
Finally, regarding disaster recovery, a similar approach should be taken by the cloud
customer to ensure the following are fully understood and acted upon, prior to signing
relevant SLAs and contracts:
Undocumented single points of failure should not exist.
Migration to alternate provider(s) should be possible within agreed-upon timeframes.
Whether all components will be supported by alternate cloud providers in the
event of a failover, or whether on-site/on-premises services would be required.
Automated controls should be enabled to allow customers to verify data integrity
Where data backups are included, incremental backups should allow the user to
select the desired settings, including desired coverage, frequency, and ease of use
for recovery point restoration options
Regular assessment of the SLA and any changes that may impact the customer’s
ability to utilize cloud computing components for disaster recovery should be cap-
tured at regular and set intervals.
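As a small illustration of the automated integrity verification called for in the list above, the sketch below records a SHA-256 digest at backup time and compares it against a restored copy; the function names and sample data are illustrative only.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Compute a hex digest to record alongside the backup."""
    return hashlib.sha256(data).hexdigest()

def verify_restore(original_digest: str, restored: bytes) -> bool:
    # Compare the digest recorded at backup time against the restored copy;
    # any corruption or tampering changes the hash.
    return sha256_digest(restored) == original_digest

backup = b"critical business records"
digest = sha256_digest(backup)

print(verify_restore(digest, b"critical business records"))  # True
print(verify_restore(digest, b"critical business record!"))  # False
```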
While we are not able to plan for every single event or disaster that may occur, relevant
plans and continuity measures should cover a number of “logical groupings,” which
could be applied in the event of unforeseen or unplanned incidents.
Finally, as cloud adoption and migration continue to expand, all affected or associ-
ated areas of business (technology and otherwise) should be reviewed under business
continuity and disaster recovery plans, thus ensuring that any changes for the customer or
provider are captured and acted upon. Imagine the challenges of trying to restore or act
upon a loss of availability, when processes, controls, or technologies have changed without
the relevant plans having been updated or amended to reflect such changes.
Cost-Benefit Analysis
Cost is often identified as a key driver for the adoption of cloud computing. The challenge
is that decisions made solely or exclusively on cost savings can come back to
haunt the organization or entity that failed to take a risk-based view and factor in the relevant
impacts that may materialize.
Resource pooling: Resource sharing is essential to the attainment of significant
cost savings when adopting a cloud computing strategy. This is usually also coupled
with pooled resources being used by different consumer groups at different times.
Shift from CapEx to OpEx: The shift from capital expenditure (CapEx) to operational
expenditure (OpEx) is seen as a key factor for many organizations, as their
requirement to make significant purchases of systems and resources is minimized.
Given the constant evolution of technology and computing power, memory, capabilities,
and functionality, many traditional systems purchased lose value almost immediately.
Factor in time and efficiencies: Given that organizations rarely acquire used
technology or servers, almost all purchases are of new and recently developed
technology. But we are not just looking at the technology investment savings—
what about the time and efficiencies achieved? Simply put, these can be the
greatest savings achieved when utilizing cloud computing.
Include depreciation: As with purchasing new cars or newer models of cars, the
value deteriorates the moment the car is driven off the showroom floor. The same
applies for IT, only with newer and more desirable cars/technologies and models
being released every few months or years. Using this analogy clearly highlights why so
many organizations are now opting to lease cloud services, as opposed to constantly
investing in technologies that become outdated in relatively short time periods.
Reduction in maintenance and configuration time: Remember all of those days,
weeks, months, and years spent maintaining, operating, patching, updating, sup-
porting, engineering, rebuilding, and generally making sure everything needed
was done to the systems and applications required by the business users? Well,
given that a large portion of those duties (if not all—depending on which cloud
service you are using) are now handled by the cloud provider, the ability to free
up, utilize, and re-allocate resources to other technology or related tasks could
prove to be invaluable.
Shift in focus: Technology and business personnel being able to focus on the key
elements of their role, instead of the daily “firefighting” and responding to issues
and technology components, will come as a very welcome change to those profes-
sionals serious about their functions.
Utilities costs: Outside of the technology and operational elements, from a util-
ities cost perspective, massive savings can be achieved with the reduced require-
ment for power, cooling, support agreements, datacenter space, racks, cabinets,
and so on. Large organizations that have migrated large portions of the datacenter
components to cloud-based environments have reported tens of thousands to hun-
dreds of thousands in direct savings from the utilities elements. Green IT is very
much at the fore of many global organizations, and cloud computing plays toward
that focus in a strong way.
Software and licensing costs: Software and relevant licensing costs present a
major cost saving as well, as you only pay for the licensing used versus the bulk or
enterprise licensing levels of traditional non-cloud-based infrastructure models.
Pay per usage: As outlined by the CapEx versus OpEx elements, cloud comput-
ing gives businesses a new and clear benet—pay per usage. In terms of tradi-
tional IT functions, when systems and infrastructure assets were acquired, these
would be seen as a “necessary or required spend” for the organization; however,
with cloud computing, these can now be monitored, categorized, and billed to
specied functions or departments based on usage. This is a signicant win/driver
for IT departments, as this releases pressure to “reduce spending” and allows for
billing of usage for relevant cost bases directly to those, as opposed to absorbing
the costs themselves as a “business requirement.
So with departments and business units now being able to track costs and usage,
we can easily work out the amount of money spent versus the amount saved compared
with traditional computing. Sounds pretty straightforward, right?
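A pay-per-usage chargeback can be sketched in a few lines; the departments, hours, and rate below are purely hypothetical.

```python
# Hypothetical usage records: (department, hours of compute consumed).
usage = [
    ("marketing", 120.0),
    ("engineering", 450.0),
    ("marketing", 30.0),
    ("finance", 60.0),
]

RATE_PER_HOUR = 0.50  # assumed pay-per-usage rate in dollars

def bill_by_department(records, rate):
    """Aggregate metered usage into a per-department chargeback."""
    totals = {}
    for dept, hours in records:
        totals[dept] = totals.get(dept, 0.0) + hours * rate
    return totals

print(bill_by_department(usage, RATE_PER_HOUR))
# {'marketing': 75.0, 'engineering': 225.0, 'finance': 30.0}
```

Because each record is attributed to a cost center, spending can be billed to the consuming department rather than absorbed centrally, which is the chargeback model described above.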
Other factors: What about new technologies, new/revised roles, legal fees/costs,
contract/SLA negotiations, additional governance requirements, training required,
cloud provider interactions, and reporting? These all may impact and alter the
“price you see” versus “the price you pay”—otherwise known as the Total Cost of
Ownership (TCO).
Note that many organizations have not factored in such costs to date, and as such
their view of cost savings may be somewhat skewed or misguided.
Certification Against Criteria
If it cannot be measured, it cannot be managed!
This is a statement that any auditor and security professional should abide by regardless
of their focus. How can we have confidence, awareness, and assurance that the correct
steps are being taken by ourselves and the cloud provider to ensure that our data is
secured in a manner that gives us comfort and peace of mind?
Frameworks and standards hold the key here.
But why are we still struggling to convince users and entities that cloud computing is
a good option, particularly from a security perspective? The reason is simple—no international
cloud computing or cloud security standards exist.
In the absence of any cloud-specific security standards that are universally accepted
by providers and customers alike, we find ourselves dealing with a patchwork of security
standards, frameworks, and controls that we are applying to cloud environments. These
include but are not limited to
ISO/IEC 27001
NIST SP 800-53
Payment Card Industry Data Security Standard (PCI DSS)
ISO/IEC 27001:2013
Possibly the most widely known and accepted information security standard, ISO
27001 was originally developed and created by the British Standards Institution, under
the name BS 7799. The standard was adopted by the International Organization for
Standardization (ISO) and re-branded ISO 27001. ISO 27001 is the standard to which
organizations certify, as opposed to ISO 27002, which is the best practice framework to
which many others align.
ISO 27001:2005 consisted of 133 controls across eleven domains of security, focusing
on the protection of information assets in their various forms (digital, paper, etc.). In
September 2013, ISO 27001 was updated to ISO 27001:2013, which now consists of 35
control objectives and 114 controls spread over 14 domains.
Domains include:
1. Information Security Policies
2. Organization of Information Security
3. Human Resources Security
4. Asset Management
5. Access Control
6. Cryptography
7. Physical and Environmental Security
8. Operations Security
9. Communications Security
10. System Acquisition, Development, and Maintenance
11. Supplier Relationships
12. Information Security Incident Management
13. Information Security Aspects of Business Continuity Management
14. Compliance
By its nature, ISO 27001 is designed to be vendor and technology neutral, and as
such looks for the Information Security Management System (ISMS) to address the
relevant risks and components in a manner that is appropriate and adequate based on
the risks.
While ISO 27001 is the most widely adopted security standard in use today, it does
not specifically address the risks associated with cloud computing, and as such cannot be
deemed fully comprehensive when measuring security in cloud-based environments.
As with all standards and frameworks, they assist in the structure and standardization
of security practices; however, they cannot be applied across multiple environments (of
differing natures), deployments, and other components with 100% confidence and completeness,
given the variations and specialized elements associated with cloud computing.
Due to its importance overall, ISO 27001 will continue to be used by cloud providers and
required by cloud customers as one of the key security frameworks for cloud environments.
The Statement on Auditing Standards No. 70 (SAS 70) was replaced by Service Organization
Control (SOC) Type I and Type II reports in 2011, following demand from customers
and clients alike for a more comprehensive approach to auditing. For
years, SAS 70 was seen as the de facto standard for datacenter customers to obtain inde-
pendent assurance that their datacenter service provider had effective internal controls in
place for managing the design, implementation, and execution of customer information.
SAS 70 consisted of Type I and Type II audits. The Type I audit was designed to assess
the sufficiency of the service provider’s controls as of a particular date (a point-in-time
assessment), and the Type II audit was designed to assess the effectiveness of those
controls over a period of time.
Like many other frameworks, SAS 70 audits focused on verifying that the controls had
been implemented and followed, but not on the completeness or effectiveness of the
controls implemented. Think of having an alarm but never checking whether it
was effective, functioning, or correctly installed.
SOC reports are performed in accordance with Statement on Standards for Attesta-
tion Engagements (SSAE) 16, Reporting on Controls at a Service Organization.
SOC 1 reports focus solely on controls at a service provider that are likely to be
relevant to an audit of a subscriber’s financial statements.
SOC 2 and SOC 3 reports address controls of the service provider that relate to
operations and compliance.
There are some key distinctions among SOC 1, SOC 2, and SOC 3:
SOC 1: SOC 1 reports can be one of two types:
A Type I report presents the auditors’ opinion regarding the accuracy and completeness
of management’s description of the system or service, as well as the suitability
of the design of controls, as of a specific date.
A Type II report includes the Type I criteria and also audits the operating effectiveness of
the controls throughout a declared period, generally between six months and one year.
SOC 2: SOC 2 reporting was specifically designed for IT managed service providers
and cloud computing. The report specifically addresses any number of the five
so-called “Trust Services Principles,” which are
Security (the system is protected against unauthorized access, both physical and logical)
Availability (the system is available for operation and use as committed or agreed)
Processing Integrity (system processing is complete, accurate, timely, and authorized)
DOMAIN 1 Architectural Concepts and Design Requirements Domain66
Condentiality (information designated as condential is protected as committed
or agreed)
Privacy (personal information is collected, used, retained, disclosed, and disposed
of in conformity with the provider’s Privacy Policy)
SOC 3: SOC 3 reporting also uses the Trust Services Principles but provides only the
auditor’s report on whether the system achieved the specified principle, without disclosing
relevant details and sensitive information.
A key difference between a SOC 2 report and a SOC 3 report is that a SOC 2
report is generally restricted in distribution and coverage (due to the information it
contains), while a SOC 3 report is broadly available, with limited information and
details included within it (often used to instill confidence in prospective clients or for
marketing purposes).
To review:
SOC 1: For those interested in financial statements.
SOC 2: Of interest to information technology personnel.
SOC 3: Used to illustrate conformity, compliance, and security efforts to current
or potential subscribers and customers of cloud services.
NIST SP 800-53
The National Institute of Standards and Technology (NIST) is an agency of the U.S.
Government that makes measurements and sets standards as needed for industry or gov-
ernment programs. The primary goal and objective of the 800-53 standard is to ensure
that appropriate security requirements and security controls are applied to all U.S. Fed-
eral Government information and information management systems.
It requires that risk be assessed and a determination made as to whether additional
controls are needed to protect organizational operations (including mission, functions,
image, or reputation), organizational assets, individuals, other organizations, or the nation.
The 800-53 standard—“Security and Privacy Controls for Federal Information Sys-
tems and Organizations”—underwent its fourth revision in April 2013.
Primary updates and amendments include
Assumptions relating to security control baseline development
Expanded, updated, and streamlined tailoring guidance
Additional assignment and selection statement options for security and privacy controls
Descriptive names for security and privacy control enhancements
Consolidated security controls and control enhancements by family with baseline allocations
New tables for security controls that support development, evaluation, and operational assurance
Mapping tables for the international security standard ISO/IEC 15408 (Common Criteria)
While the NIST Risk Management Framework provides the pieces and parts for an
effective security program, it is aimed at government agencies and focuses on the following
key components:
2.1 Multi-Tiered Risk Management
2.2 Security Control Structure
2.3 Security Control Baselines
2.4 Security Control Designations
2.5 External Service Partners
2.6 Assurance and Trustworthiness
2.7 Revisions and Extensions
3.1 Selecting Security Control Baselines
3.2 Tailoring Security Control Baselines
3.3 Creating Overlays
3.4 Document the Control Selection Process
3.5 New Development and Legacy Systems
One major issue corporate security teams will encounter when trying to base a
program on the NIST SP 800-53 Risk Management Framework is that publicly traded
organizations are not bound by the same security assumptions and requirements as
government agencies. Government organizations are established to fulfill legislated
missions and are required to collect, store, manipulate, and report sensitive data. Additionally,
a large percentage of these activities in a publicly traded organization are governed by
cost-benefit analysis, boards of directors, and shareholder opinion, as opposed to government
direction and influence.
For those looking to understand the similarities and overlaps with NIST SP 800-53 and
ISO 27001/2, there is a mapping matrix listed within the 800-53 Revision 4 document.
Payment Card Industry Data Security Standard (PCI DSS)
Visa, MasterCard, and American Express, among others, established the Payment Card
Industry Data Security Standard (known as the PCI DSS) as a security standard with which
all organizations or merchants that accept, transmit, or store any cardholder data, regardless
of size or number of transactions, must comply.
PCI DSS was established following a number of significant credit card breaches. PCI
DSS is a comprehensive and intensive security standard that lists both technical and
non-technical requirements, based on the number of credit card transactions for the
applicable entities.
Merchant Levels Based on Transactions
Table1.3 illustrates the various merchant levels based on the number of transactions.
taBLe1.3 Merchant Levels Based on Transactions
1 Any merchant—regardless of acceptance channel—processing over 6 mil-
lion Visa transactions per year. Any merchant that Visa, at its sole discretion,
determines should meet the Level 1 merchant requirements to minimize
risk to the Visa system.
2 Any merchant—regardless of acceptance channel—processing 1–6 million
Visa transactions per year.
3 Any merchant processing 20,000 to 1 million Visa e-commerce transactions
per year.
4 Any merchant processing fewer than 20,000 Visa e-commerce transactions
per year and all other merchants—regardless of acceptance channel—
processing up to 1 million Visa transactions per year.
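The tiers in Table 1.3 can be expressed as a simple classification function. This is a simplified sketch of the table only; boundary handling and Visa's discretionary Level 1 designations are assumptions, not authoritative rules.

```python
def visa_merchant_level(annual_txns: int, ecommerce_txns: int = 0) -> int:
    """Map yearly Visa transaction counts to a merchant level (Table 1.3)."""
    # Note: Visa may also designate a merchant Level 1 at its sole
    # discretion, which no formula can capture.
    if annual_txns > 6_000_000:
        return 1
    if annual_txns >= 1_000_000:
        return 2
    if 20_000 <= ecommerce_txns <= 1_000_000:
        return 3
    return 4

print(visa_merchant_level(7_000_000))         # 1
print(visa_merchant_level(2_500_000))         # 2
print(visa_merchant_level(500_000, 500_000))  # 3
print(visa_merchant_level(500_000, 5_000))    # 4
```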
For specific information and requirements, be sure to check with the PCI Security
Standards Council.
Merchant Requirements
All merchants, regardless of level and relevant service providers, are required to comply
with the following 12 domains/requirements:
Install and maintain a firewall configuration to protect cardholder data.
Do not use vendor-supplied defaults for system passwords and other security parameters.
Protect stored cardholder data.
Encrypt transmission of cardholder data across open, public networks.
Use and regularly update antivirus software.
Develop and maintain secure systems and applications.
Restrict access to cardholder data by business need-to-know.
Assign a unique ID to each person with computer access.
Restrict physical access to cardholder data.
Track and monitor all access to network resources and cardholder data.
Regularly test security systems and processes.
Maintain a policy that addresses information security.
The 12 requirements list over 200 controls that specify required and minimum security
requirements in order for the merchants and service providers to meet their compliance
obligations.
Failure to meet and satisfy the PCI DSS requirements (based on merchant level and
processing volumes) can result in significant financial penalties, suspension of credit cards as
a payment channel, escalation to a higher merchant level, and potentially greater assurance
and compliance requirements in the event of a breach in which credit card details
may have been compromised or disclosed.
Since its establishment, PCI DSS has undergone a number of significant updates,
through to the current 3.0 version.
Due to the more technical and “black and white” nature of its controls, many see PCI
DSS as a reasonable and sufficient technical security standard. People believe that if it is
good enough to protect their credit card and financial information, it should be a good
baseline for cloud security.
System/Subsystem Product Certification
System/subsystem product certification is used to evaluate the security claims made
for a system and its components. While several evaluation frameworks have been available
over the years, such as the Trusted Computer System Evaluation Criteria
(TCSEC) developed by the United States Department of Defense, the Common Criteria
is the one that is internationally accepted and used most often.
Common Criteria
The Common Criteria (CC) is an international set of guidelines and specifications
(ISO/IEC 15408) developed for evaluating information security products, with the view
to ensuring they meet an agreed-upon security standard for government entities and
agencies.
Common Criteria Components
Ofcially, the Common Criteria is known as the “Common Criteria for Information
Technology Security Evaluation” and until 2005 was previously known as “The Trusted
Computer System Evaluation Criteria.” The Common Criteria is updated periodically.
Distinctly, the Common Criteria has two key components:
Protection Profiles: Define a standard set of security requirements for a specific
type of product, such as a firewall, IDS, or Unified Threat Management (UTM) device.
The Evaluation Assurance Levels (EALs): Define how thoroughly the product is
tested. Evaluation Assurance Levels are rated using a sliding scale from 1 to 7, with
1 being the lowest-level evaluation and 7 being the highest.
The higher the level of evaluation, the more Quality Assurance (QA) tests the
product would have undergone.
note Undergoing more tests does not necessarily mean the product is more secure!
The seven Evaluation Assurance Levels (EALs) are as follows:
EAL1: Functionally Tested
EAL2: Structurally Tested
EAL3: Methodically Tested and Checked
EAL4: Methodically Designed, Tested, and Reviewed
EAL5: Semi-Formally Designed and Tested
EAL6: Semi-Formally Veried Design and Tested
EAL7: Formally Veried Design and Tested
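For quick reference, the sliding scale can be captured as a simple lookup table. The following is a minimal Python sketch using the level names listed above; the helper function is illustrative only:

```python
# Common Criteria Evaluation Assurance Levels (EAL1-EAL7), per the list above.
EAL_LEVELS = {
    1: "Functionally Tested",
    2: "Structurally Tested",
    3: "Methodically Tested and Checked",
    4: "Methodically Designed, Tested, and Reviewed",
    5: "Semi-Formally Designed and Tested",
    6: "Semi-Formally Verified Design and Tested",
    7: "Formally Verified Design and Tested",
}

def describe_eal(level: int) -> str:
    """Return the EAL name; a higher number means more assurance testing,
    though not necessarily a more secure product."""
    if level not in EAL_LEVELS:
        raise ValueError("EAL must be between 1 and 7")
    return f"EAL{level}: {EAL_LEVELS[level]}"
```

For example, `describe_eal(4)` returns "EAL4: Methodically Designed, Tested, and Reviewed".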
Common Criteria Evaluation Process
The goal of Common Criteria certification is to assure customers that the products they
are buying have been evaluated and that a vendor-neutral third party has verified the ven-
dor’s claims.
To submit a product for evaluation, follow these steps:
1. The vendor must complete a Security Target (ST) description, which provides an
overview of the product’s security features.
2. A certified laboratory then tests the product to evaluate how well it meets the spec-
ifications defined in the Protection Profile.
3. A successful evaluation leads to an official certification of the product.
Note that Common Criteria looks at certifying a product only and does not include
administrative or business processes.
FIPS 140-2
Encryption and cryptography are a primary means of maintaining the ongoing
confidentiality and integrity of relevant information and data, particularly across the
various cloud computing deployment and service models.
The FIPS (Federal Information Processing Standard) 140 publication series was issued
by the National Institute of Standards and Technology (NIST) to coordinate the require-
ments and standards for cryptographic modules, covering both hardware and software
components for cloud and traditional computing environments.
The FIPS 140-2 standard provides four distinct levels of security intended to cover a
wide range of potential applications and environments with emphasis on secure design
and implementation of a cryptographic module.
Relevant specications include
Cryptographic module specication
Cryptographic module ports
Interfaces, roles, and services
Physical security
Operational environment
Cryptographic key management
Design assurance
Controls and mitigating techniques against attacks
FIPS 140-2 Goal
The primary goal of the FIPS 140-2 standard is to accredit and distinguish secure and
well-architected cryptographic modules produced by private sector vendors who seek
to have (or are in the process of having) their solutions and services certified for use in
U.S. Government departments and regulated industries (including financial services and
healthcare) that collect, store, transfer, or share data that is deemed to be “sensitive” but
not classified (i.e., not Secret/Top Secret).
Finally, when assessing the level of controls, FIPS is measured using a Level 1 to
Level 4 rating. Despite the ratings and their associated requirements, FIPS does not state
what level of certification is required by specific systems, applications, or data types.
FIPS Levels
The breakdown of the levels is as follows:
Security Level 1: The lowest level of security. To meet Level 1 requirements,
basic cryptographic module requirements are specified for at least one
approved security function or approved algorithm. An encryption board in a
PC is an example of a Level 1 module.
Security Level 2: Enhances the physical security mechanisms required at Level 1
and requires capabilities that show evidence of tampering, including tamper-evident
seals or pick-resistant locks on perimeter and internal covers, to
prevent unauthorized physical access to encryption keys.
Security Level 3: Builds on Levels 1 and 2 by preventing an intruder from
gaining access to information and data held within the cryptographic module.
Additionally, the physical security controls required at Level 3 should move
toward detecting access attempts and responding appropriately to protect the
cryptographic module.
Security Level 4: Represents the highest rating, providing complete protection
around the cryptographic module with the intent of detecting and responding
to all unauthorized attempts at physical access. Upon detection, all plaintext
Critical Security Parameters (also known as CSPs but not to be confused with
cloud service providers)27 are immediately zeroized. Security Level 4 modules
undergo rigid testing in order to ensure their adequacy, completeness, and
effectiveness.
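Zeroization (the immediate erasure of plaintext CSPs described above) can be sketched in a few lines. This is an illustrative Python sketch only; real FIPS-validated modules perform zeroization in hardware or firmware, and a `bytearray` is used here because Python's immutable `bytes` cannot be overwritten in place:

```python
import secrets

def zeroize(buf: bytearray) -> None:
    """Overwrite sensitive material in place so no plaintext copy remains."""
    for i in range(len(buf)):
        buf[i] = 0

# A plaintext Critical Security Parameter held in mutable storage
csp = bytearray(secrets.token_bytes(32))
zeroize(csp)                     # e.g., triggered by tamper detection at Level 4
assert all(b == 0 for b in csp)  # every byte has been wiped
```

Note that in a garbage-collected language, copies of the key may still linger elsewhere in memory; this is precisely why the standard pushes zeroization down into the cryptographic module itself.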
All testing is performed by accredited third-party laboratories and is subject to strict
guidelines and quality standards. Upon completion of testing, all ratings are provided,
along with an overall rating on the vendor’s independent validation certificate.
From a cloud computing perspective, these requirements form a necessary and
required baseline for all U.S. Government agencies that may be looking to utilize or avail
themselves of cloud-based services. Outside of the United States, FIPS does not typically
act as a driver or a requirement; however, other governments and enterprises tend to
recognize FIPS validation as an enabler or differentiator over other technologies that have
not undergone independent assessment and/or certification.
Summary
Cloud computing covers a wide range of topics focused on the concepts, principles,
structures, and standards used to monitor and secure assets and those controls used to
enforce various levels of confidentiality, integrity, and availability across IT services
throughout the enterprise. Security practitioners focused on cloud security must use and
apply standards to ensure that the systems under their protection are maintained and sup-
ported properly. Today’s environment of highly interconnected, interdependent systems
necessitates the requirement to understand the linkage between information technology
and meeting business objectives. Information security management communicates the
risks accepted by the organization due to the currently implemented security controls
and continually works to cost effectively enhance the controls to minimize the risk to the
company’s information assets.
Review Questions
1. Which of the following are attributes of cloud computing?
a. Minimal management effort and shared resources
b. High cost and unique resources
c. Rapid provisioning and slow release of resources
d. Limited access and service provider interaction
2. Which of the following are distinguishing characteristics of a Managed Service Provider?
a. Have some form of a Network Operations Center but no help desk
b. Be able to remotely monitor and manage objects for the customer and reactively
maintain these objects under management
c. Have some form of a help desk but no Network Operations Center
d. Be able to remotely monitor and manage objects for the customer and proactively
maintain these objects under management
3. Which of the following are cloud computing roles?
a. Cloud Customer and Financial Auditor
b. Cloud Provider and Backup Service Provider
c. Cloud Service Broker and User
d. Cloud Service Auditor and Object
4. Which of the following are essential characteristics of cloud computing?
(Choose two.)
a. On-demand self-service
b. Unmeasured service
c. Resource isolation
d. Broad network access
5. Which of the following are considered to be the building blocks of cloud computing?
a. Data, access control, virtualization, and services
b. Storage, networking, printing, and virtualization
c. CPU, RAM, storage, and networking
d. Data, CPU, RAM, and access control
6. When using an Infrastructure as a Service solution, what is the capability provided to
the customer?
a. To provision processing, storage, networks, and other fundamental computing
resources where the consumer is not able to deploy and run arbitrary software,
which can include operating systems and applications.
b. To provision processing, storage, networks, and other fundamental computing
resources where the provider is able to deploy and run arbitrary software, which
can include operating systems and applications.
c. To provision processing, storage, networks, and other fundamental computing
resources where the auditor is able to deploy and run arbitrary software, which
can include operating systems and applications.
d. To provision processing, storage, networks, and other fundamental computing
resources where the consumer is able to deploy and run arbitrary software, which
can include operating systems and applications.
7. When using an Infrastructure as a Service solution, what is a key benet provided to
the customer?
a. Usage is metered and priced on the basis of units consumed.
b. The ability to scale up infrastructure services based on projected usage.
c. Increased energy and cooling system efficiencies.
d. Cost of ownership is transferred.
8. When using a Platform as a Service solution, what is the capability provided to the customer?
a. To deploy onto the cloud infrastructure provider-created or acquired applications
created using programming languages, libraries, services, and tools supported by
the provider. The consumer does not manage or control the underlying cloud
infrastructure including network, servers, operating systems, or storage, but has
control over the deployed applications and possibly configuration settings for the
application-hosting environment.
b. To deploy onto the cloud infrastructure consumer-created or acquired applica-
tions created using programming languages, libraries, services, and tools sup-
ported by the provider. The provider does not manage or control the underlying
cloud infrastructure including network, servers, operating systems, or storage, but
has control over the deployed applications and possibly configuration settings for
the application-hosting environment.
c. To deploy onto the cloud infrastructure consumer-created or acquired applica-
tions created using programming languages, libraries, services, and tools sup-
ported by the provider. The consumer does not manage or control the underlying
cloud infrastructure including network, servers, operating systems, or storage, but
has control over the deployed applications and possibly configuration settings for
the application-hosting environment.
d. To deploy onto the cloud infrastructure consumer-created or acquired appli-
cations created using programming languages, libraries, services, and tools
supported by the consumer. The consumer does not manage or control the
underlying cloud infrastructure including network, servers, operating systems, or
storage, but has control over the deployed applications and possibly configuration
settings for the application-hosting environment.
9. What is a key capability or characteristic of Platform as a Service?
a. Support for a homogeneous hosting environment.
b. Ability to reduce lock-in.
c. Support for a single programming language.
d. Ability to manually scale.
10. When using a Software as a Service solution, what is the capability provided to the customer?
a. To use the provider’s applications running on a cloud infrastructure. The appli-
cations are accessible from various client devices through either a thin client
interface, such as a web browser (e.g., web-based email), or a program interface.
The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, storage, or even individual applica-
tion capabilities, with the possible exception of limited user-specific application
configuration settings.
b. To use the provider’s applications running on a cloud infrastructure. The applica-
tions are accessible from various client devices through either a thin client inter-
face, such as a web browser (e.g., web-based email), or a program interface. The
consumer does manage or control the underlying cloud infrastructure including
network, servers, operating systems, storage, or even individual application capa-
bilities, with the possible exception of limited user-specific application configuration settings.
c. To use the consumer’s applications running on a cloud infrastructure. The appli-
cations are accessible from various client devices through either a thin client
interface, such as a web browser (e.g., web-based email), or a program interface.
The consumer does not manage or control the underlying cloud infrastructure
including network, servers, operating systems, storage, or even individual applica-
tion capabilities, with the possible exception of limited user-specific application
configuration settings.
d. To use the consumer’s applications running on a cloud infrastructure. The appli-
cations are accessible from various client devices through either a thin client
interface, such as a web browser (e.g., web-based email), or a program interface.
The consumer does manage or control the underlying cloud infrastructure
including network, servers, operating systems, storage, or even individual applica-
tion capabilities, with the possible exception of limited user-specific application
configuration settings.
11. What are the four cloud deployment models?
a. Public, Internal, Hybrid, and Community
b. External, Private, Hybrid, and Community
c. Public, Private, Joint, and Community
d. Public, Private, Hybrid, and Community
12. What are the six stages of the cloud secure data lifecycle?
a. Create, Use, Store, Share, Archive, and Destroy
b. Create, Store, Use, Share, Archive, and Destroy
c. Create, Share, Store, Archive, Use, and Destroy
d. Create, Archive, Use, Share, Store, and Destroy
a. Risk management frameworks
b. Access Controls
c. Audit reports
d. Software development phases
14. What are the ve Trust Services Principles?
a. Security, Availability, Processing Integrity, Condentiality, and Privacy
b. Security, Auditability, Processing Integrity, Condentiality, and Privacy
c. Security, Availability, Customer Integrity, Condentiality, and Privacy
d. Security, Availability, Processing Integrity, Condentiality, and Non-repudiation
15. What is a security-related concern for a Platform as a Service solution?
a. Virtual machine attacks
b. Web application security
c. Data access and policies
d. System/resource isolation
Notes
1 (page 6)
3 Governance Reimagined: Organizational Design, Risk and Value Creation, by David R.
Koenig, John Wiley & Sons, Inc., page 160.
4 (page 7)
5 (page 6)
6 (page 6)
7 (page 7)
8 (page 7)
9 (page 7)
10 (page 7)
15 See the following for the October 22, 2014 announcement by NIST of the final publication release of the roadmap:
16 See the following for the LDAP X.500 RFC:
19 See the following for more information:
27 In cryptography, zeroization is the practice of erasing sensitive parameters (electroni-
cally stored data, cryptographic keys, and CSPs) from a cryptographic module to prevent
their disclosure if the equipment is captured.
Cloud Data Security Domain
The goal of the Cloud Data Security domain is to provide you with
knowledge of the types of controls necessary to administer various levels
of confidentiality, integrity, and availability, with regard to securing data in
the cloud.
You will gain knowledge on topics of data discovery and classification
techniques; digital rights management; privacy of data; data retention, dele-
tion, and archiving; data event logging, chain of custody and non-repudiation;
and the strategic use of security information and event management.
After completing this domain, you will be able to:
Describe the cloud data lifecycle based on the Cloud Security Alliance (CSA) guidance
Describe the design and implementation of cloud data storage architectures with
regard to storage types, threats, and available technologies
Identify the necessary data security strategies for securing cloud data
Define the implementation processes for data discovery and classification technologies
Identify the relevant jurisdictional data protections as they relate to personally
identifiable information
Define Digital Rights Management (DRM) with regard to objectives and the tools
Identify the required data policies specific to retention, deletion, and archiving
Describe various data events and know how to design and implement processes for
auditability, traceability, and accountability
Introduction
Data security is a core element of cloud security (Figure 2.1). Cloud service providers
will often share the responsibility for security with the customer. Roles such as the Chief
Information Security Officer (CISO), Chief Security Officer (CSO), Chief Technology
Officer (CTO), Enterprise Security Architect, and Network Administrator may all play a
part in providing elements of a security solution for the enterprise.
Figure 2.1 Many roles are involved in providing data security
The data security lifecycle, as introduced by the Securosis Blog and then incorpo-
rated into the Cloud Security Alliance (CSA) guidance, enables the organization to map
the different phases in the data lifecycle against the required controls that are relevant to
each phase.1
The lifecycle contains the following steps:
Map the different lifecycle phases
Integrate the different data locations and access types
Map into functions, actors, and controls
The data lifecycle guidance provides a framework to map relevant use cases for
data access, while assisting in the development of appropriate controls within each
lifecycle stage.
The lifecycle model serves as a reference and framework to provide a standardized
approach for data lifecycle and data security. Not all implementations or situations will
align fully or comprehensively.
According to Securosis, the data lifecycle comprises six phases, from creation to
destruction (Figure 2.2).
Figure 2.2 The six phases of the data lifecycle
While the lifecycle is described as a linear process, data may skip certain stages or
indeed switch back and forth between the different phases:
1. Create: The generation or acquisition of new digital content, or the alteration/
updating of existing content. This phase can happen internally in the cloud or
externally, and then the data is imported into the cloud. The creation phase is the
preferred time to classify content according to its sensitivity and value to the orga-
nization. Careful classification is important because poor security controls could
be implemented if content is classified incorrectly.
2. Store: The act of committing the digital data to some sort of storage repository.
Typically occurs nearly simultaneously with creation. When storing the data, it
should be protected in accordance with its classification level. Controls such as
encryption, access policy, monitoring, logging, and backups should be imple-
mented to avoid data threats. Content can be vulnerable to attackers if ACLs
(Access Control Lists) are not implemented well or files are not scanned for
threats or are classified incorrectly.
3. Use: Data is viewed, processed, or otherwise used in some sort of activity, not
including modification. Data in use is most vulnerable because it might be
transported into unsecure locations such as workstations, and in order to be
processed, it must be unencrypted. Controls such as Data Loss Prevention
(DLP), Information Rights Management (IRM), and database and file access
monitors should be implemented in order to audit data access and prevent
unauthorized access.
4. Share: Information is made accessible to others, such as between users, to cus-
tomers, and to partners. Not all data should be shared, and not all sharing presents
a threat. But because shared data is no longer under the organization’s control,
maintaining security can be difficult. Technologies such as DLP can be used
to detect unauthorized sharing, and IRM technologies can be used to maintain
control over the information.
5. Archive: Data leaving active use and entering long-term storage. Archiving data
for a long period of time can be challenging. Cost vs. availability considerations
can affect data access procedures; imagine if data is stored on a magnetic tape and
needs to be retrieved 15 years later. Will the technology still exist to read the tape?
Data placed in archive must still be protected according to its classication. Regu-
latory requirements must also be addressed and different tools and providers might
be part of this phase.
6. Destroy: The data is removed from the cloud provider. The destroy phase can
be interpreted into different technical meanings according to usage, data con-
tent, and applications used. Data destruction can mean logically erasing pointers
or permanently destroying data using physical or digital means. Consideration
should be made according to regulation, type of cloud being used (IaaS vs. SaaS),
and the classication of the data.
While the lifecycle does not specify the data’s location, who can access it, or from
where, as a Cloud Security Professional (CSP) you need to fully understand and
incorporate these factors into your planning in order to use the lifecycle within
the enterprise.
Data is a portable resource, capable of moving swiftly and easily between different loca-
tions, both inside and outside of the enterprise. It can be generated in the internal net-
work, be moved into the cloud for processing, and then be moved to a different provider
for backup or archival storage.
The opportunity for portions of the data to be exported or imported to different sys-
tems at alternate locations cannot be discounted or overlooked.
The Cloud Security Professional should pose the following questions alongside the
relevant lifecycle phases:
Who are the actors that potentially have access to data I need to protect?
What is/are the potential location(s) for data I have to protect?
What are the controls in each of those locations?
At what phases in each lifecycle can data move between locations?
How does data move between locations (via what channels)?
Where are these actors coming from (what locations, and are they trusted or
untrusted)?
relevant data, nor how they are able to access it (device and channels). Mobile com-
puting, the manner in which data can be accessed, and the wide variety of mechanisms
and channels for storing, processing, and transmitting data across the enterprise have all
amplied the impact of this.
Functions, Actors, and Controls of the Data
Upon completion of mapping the various data phases, along with data locations and
device access, it is necessary to identify what can be done with the data (i.e., data
functions) and who can access the data (i.e., the actors). Once this has been established
and understood, you need to check the controls to validate which actors have permissions
to perform the relevant functions of the data (Figure 2.3).
Figure 2.3 The actors, functions, and locations of the data
Key Data Functions
According to Securosis, the following are the key functions that can be performed with
data in cloud-based environments:
“Access: View/access the data, including copying, file transfers, and other
exchanges of information
Process: Perform a transaction on the data. Update it, use it in a business process-
ing transaction, and so on
Store: Store the data (in a file, database, etc.)”2
Take a look at how these functions map to the data lifecycle (Figure 2.4).
Figure 2.4 Data functions mapping to the data lifecycle
Each of these functions is performed in a location by an actor (person).
Essentially, a control acts as a mechanism to restrict a list of possible actions down to
allowed or permitted actions. For example, encryption can be used to restrict the unautho-
rized viewing or use of data, application controls to restrict processing via authorization,
and Digital Rights Management (DRM) storage to prevent untrusted or unauthorized
parties from copying or accessing data.
To determine the necessary controls to be deployed, you must first understand:
Function(s) of the data
Location(s) of the data
Actor(s) upon the data
Once these three items have been documented and understood, then the appropriate
controls can be designed and applied to the system in order to safeguard data and control
access to it. These controls can be of a preventative, detective (monitoring), or corrective
nature.
Process Overview
The table in Figure 2.5 can be used to walk through an overview of the process.
Figure 2.5 Process overview
Fill in the Function, Actor, and Location areas, signifying whether or not the item is
possible to carry out with a Yes or No.
A No/No designation identifies items that are not available at this time within the
cloud service.
A Yes (possibility)/No (allowed) designation identifies items you could potentially
negotiate with the organization to decide to allow at some point in the future.
A Yes/Yes designation identifies items that are available and should be allowed.
You may have to negotiate with the organization to formalize a plan for deploy-
ment and use of the function in question, along with the creation of the required
policies and procedures to allow for the function’s operation.
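The Figure 2.5 walkthrough can be sketched as a table keyed by function, actor, and location. In the Python sketch below, the entries are hypothetical examples (the actor and location names are invented), not taken from the text:

```python
# Each (function, actor, location) triple maps to (possible, allowed),
# mirroring the Yes/No designations described above. Entries are hypothetical.
entries = {
    ("process", "developer", "public cloud"):  (True,  True),
    ("store",   "partner",   "public cloud"):  (True,  False),
    ("access",  "auditor",   "private cloud"): (False, False),
}

def classify(possible: bool, allowed: bool) -> str:
    """Translate a Yes/No pair into the designation used in the process overview."""
    if not possible:
        return "not available at this time"
    if not allowed:
        return "possible but not permitted; candidate for negotiation"
    return "available and allowed; formalize policies and procedures"
```

For example, `classify(True, False)` corresponds to the Yes (possibility)/No (allowed) case you could negotiate with the organization.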
Tying It Together
At this point, we are able to produce a high-level mapping of data flow, including device
access and data locations. For each location, we can determine the relevant functions and
actors. Once this is mapped, we can better define what to restrict from which actor and
by which control (Figure 2.6).
Figure 2.6 Tying it together
Cloud Services, Products, and Solutions
At the core of all cloud services, products, and solutions are software tools with three
underlying pillars of functionality:
Processing data and running applications (compute servers)
Moving data (networking)
Preserving or storing data (storage)
“Cloud storage” is basically defined as data storage that is made available as a
service via a network. The most common building blocks of cloud storage services
are physical storage systems. Private cloud and public services
from Software as a Service (SaaS) to Platform as a Service (PaaS) and Infrastructure as
a Service (IaaS) leverage tiered storage, including Solid State Drives (SSDs) and Hard
Disk Drives (HDDs).
Similar to traditional enterprise storage environments, cloud services and solution
providers exploit a mix of different storage technology tiers that meet different Service
Level Objective (SLO) and Service Level Agreement (SLA) requirements. For example,
using fast SSDs for dense I/O consolidation—supporting database journals and indices,
DOMAIN 2 Cloud Data Security Domain90
metadata for fast lookup, and other transactional data—enables more work to be per-
formed with less energy in a denser and more cost-effective footprint.
Using a mixture of ultra-fast SSDs along with high-capacity HDDs provides a balance
of performance and capacity to meet other service requirements with different service
cost options. With cloud services, instead of specifying what type of physical drive to buy,
cloud providers cater to that by providing various availability, cost, capacity, functionality,
and performance options to meet different SLA and SLO requirements.
Data Storage
Data storage has to be considered for each of the cloud service models. IaaS, SaaS, and
PaaS all need access to storage in order to provide services, but the type of storage tech-
nology used and the issues associated with each vary by service model. IaaS uses volume
and object storage, while PaaS uses structured and unstructured storage. SaaS can use the
widest array of storage types including ephemeral, raw, and long-term storage. The follow-
ing sections delve into these points in greater detail.
Infrastructure as a Service (IaaS)
Cloud infrastructure services, known as Infrastructure as a Service (IaaS), are self-service
models for accessing, monitoring, and managing remote datacenter infrastructures, such
as compute (virtualized or bare metal), storage, networking, and networking services
(e.g., firewalls).
Instead of having to purchase hardware outright, users can purchase IaaS based on
consumption. Compared with SaaS and PaaS, IaaS users are responsible for managing
applications, data, runtime, middleware, and operating systems. Providers still manage
virtualization, servers, hard drives, storage, and networking.
IaaS uses the following storage types (Figure2.7):
Volume storage: A virtual hard drive that can be attached to a virtual machine
instance and be used to host data within a file system. Volumes attached to IaaS
instances behave just like a physical drive or an array does. Examples include
VMware VMFS, Amazon EBS, Rackspace RAID, and OpenStack Cinder.
Object storage: Similar to a file share accessed via APIs or a web interface.
Examples include Amazon S3 and Rackspace Cloud Files.
FIGURE2.7 IaaS storage types
Platform as a Service (PaaS)
Cloud platform services, or Platform as a Service (PaaS), are used for applications and
other development while providing cloud components to software. What developers
gain with PaaS is a framework they can build upon to develop or customize applications.
PaaS makes the development, testing, and deployment of applications quick, simple, and
cost-effective. With this technology, enterprise operations or a third-party provider can
manage OSs, virtualization, servers, storage, networking, and the PaaS software itself.
Developers, however, manage the applications.
PaaS utilizes the following data storage types:
Structured: Information with a high degree of organization, such that inclusion
in a relational database is seamless and readily searchable by simple, straightfor-
ward search engine algorithms or other search operations.
Unstructured: Information that does not reside in a traditional row-column
database. Unstructured data files often include text and multimedia content. Examples
include email messages, word processing documents, videos, photos, audio files,
presentations, web pages, and many other kinds of business documents. Although
these sorts of files may have an internal structure, they are still considered “unstruc-
tured” because the data they contain does not fit neatly in a database.
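The distinction can be made concrete with a short sketch using Python's built-in sqlite3 module (chosen purely for illustration; the table and column names are invented): structured rows are directly searchable, while unstructured content is stored as an opaque blob the database cannot query inside:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Structured: rows and columns, readily searchable with simple queries
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT)")
cur.execute("INSERT INTO customers (name, region) VALUES (?, ?)", ("Acme", "EU"))
row = cur.execute("SELECT name FROM customers WHERE region = ?", ("EU",)).fetchone()

# Unstructured: an opaque blob; the database can store it but not search inside it
cur.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, content BLOB)")
cur.execute("INSERT INTO documents (content) VALUES (?)", (b"%PDF-1.4 ...",))
```

Here `row` comes back as the structured tuple `("Acme",)`, whereas the PDF bytes in `documents` would require external indexing or content inspection to search.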
Software as a Service (SaaS)
Cloud application services, or Software as a Service (SaaS), use the web to deliver appli-
cations that are managed by a third-party vendor and whose interface is accessed on the
client’s side.
Many SaaS applications can be run directly from a web browser without any down-
loads or installations required, although some require small plugins. With SaaS, it is easy
for enterprises to streamline their maintenance and support because everything can be
managed by vendors: applications, runtime, data, middleware, OSs, virtualization, serv-
ers, storage, and networking. Popular SaaS offering types include email and collabora-
tion, customer relationship management, and healthcare-related applications.
SaaS utilizes the following data storage types:
Information Storage and Management: Data is entered into the system via the
web interface and stored within the SaaS application (usually a backend data-
base). This data storage utilizes databases, which in turn are installed on object or
volume storage.
Content/le storage: File-based content is stored within the application.
Other types of storage that may be utilized include
Ephemeral storage: This type of storage is relevant for IaaS instances and exists
only as long as its instance is up. It will typically be used for swap les and other
temporary storage needs and will be terminated with its instance.
Content Delivery Network (CDN): Content is stored in object storage, which is
then distributed to multiple geographically distributed nodes to improve Internet
consumption speed.
Raw storage: Raw device mapping (RDM) is an option in the VMware server vir-
tualization environment that enables a storage logical unit number (LUN) to be
directly connected to a virtual machine (VM) from the storage area network (SAN).
In Microsoft’s Hyper-V platform, this is accomplished using pass-through disks.
Long-term storage: Some vendors offer a cloud storage service tailored to the
needs of data archiving. These include features such as search, guaranteed immu-
tability, and data lifecycle management. One example of this is the HP Autonomy Digital Safe archiving service, which uses an on-premises appliance that connects to customers’ data stores via APIs and allows users to search. Digital Safe
provides read-only, WORM, legal hold, e-discovery, and all the features associated
with enterprise archiving. Its appliance carries out data deduplication prior to
transmission to the data repository.
Data Storage
Threats to Storage Types
Data storage is subject to the following key threats:
Unauthorized usage: In the cloud, data storage can be manipulated into unauthorized usage, such as by account hijacking or uploading illegal content. The multi-tenancy of cloud storage makes tracking unauthorized usage more difficult.
Unauthorized access: Unauthorized access can happen due to hacking, improper permissions in multi-tenant environments, or an internal cloud provider employee.
Liability due to regulatory non-compliance: Certain controls (e.g., encryption) might be required in order to comply with certain regulations. Not all cloud services enable all relevant data controls.
Denial of service (DoS) and distributed denial of service (DDoS) attacks
on storage: Availability is a strong concern for cloud storage. Without its data, no instance can launch.
Corruption/modication and destruction of data: This can be caused by a wide
variety of sources: human error, hardware or software failure, events such as re or
ood, or intentional hacks. It can also affect a certain portion of the storage or the
entire array.
Data leakage/breaches: Consumers should always be aware that cloud data is exposed to data breaches. A breach can be external or can come from a cloud provider employee with storage access. Data tends to be replicated and moved in the cloud, which increases the likelihood of a leak.
Theft or accidental loss of media: This threat applies mainly to portable storage, but as cloud datacenters grow and storage devices get smaller, there are increasingly more vectors for theft and similar threats as well.
Malware attack or introduction: The goal of almost all malware is eventually to reach the data storage.
Improper treatment or sanitization after end of use: End of use is challenging in cloud computing because the customer usually cannot enforce the physical destruction of media. However, the dynamic nature of data, which is kept in different storage locations with multiple tenants, mitigates the risk that digital remnants can be located.
Technologies Available to Address Threats
You need to leverage different technologies to address the varied threats that may face the enterprise with regard to the safe storage and use of its data in the cloud (Figure 2.8).
Figure 2.8 Basic approach to addressing a data threat
The circumstances of each threat will be different, and as a result, the key to success
will be your ability to understand the nature of the threat you are facing, combined with
your ability to implement the appropriate technology to mitigate the threat.
It is important to be aware of the relevant data security technologies you may need to deploy or work with to ensure the confidentiality, integrity, and availability of data in the cloud.
Potential controls and solutions can include
Data Leakage Prevention (DLP): For auditing and preventing unauthorized data usage
Encryption: For preventing unauthorized data viewing
Obfuscation, anonymization, tokenization, and masking: Different alternatives
for protecting data without encryption
Before working with these controls and solutions, it is important to understand how
data dispersion is used in the cloud.
Relevant Data Security Technologies
Data Dispersion in Cloud Storage
In order to provide high availability for data, assurance, and performance, storage appli-
cations will often use the data dispersion technique. Data dispersion is similar to a RAID
solution, but it is implemented differently. Storage blocks are replicated to multiple
physical locations across the cloud. In a private cloud, you would set up and congure
data dispersion yourself. Users of a public cloud would not have the capability to set up
and congure available to them, although their data may benet from the cloud provider
using data dispersion.
The underlying architecture of this technology involves the use of erasure coding,
which chunks a data object (think of a file with self-describing metadata) into segments.
Each segment is encrypted, cut into slices, and dispersed across an organization’s network
to reside on different hard drives and servers. If the organization loses access to one drive,
the original data can still be put back together. If the data is generally static with very
few rewrites, such as media files and archive logs, creating and distributing the data is a
one-time cost. If the data is very dynamic, the erasure codes have to be re-created and the
resulting data blocks redistributed.
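The erasure-coding idea described above can be illustrated with a minimal sketch. This is only a toy single-parity scheme (real dispersed-storage systems use codes such as Reed-Solomon that tolerate multiple lost slices), and the function names are illustrative:

```python
def disperse(data: bytes, k: int = 4) -> list:
    """Split data into k equal slices plus one XOR parity slice."""
    if len(data) % k:
        data += b"\x00" * (k - len(data) % k)  # pad to a multiple of k
    size = len(data) // k
    slices = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytearray(size)
    for s in slices:
        for i, b in enumerate(s):
            parity[i] ^= b
    return slices + [bytes(parity)]

def recover(slices: list) -> list:
    """Rebuild the single missing slice (marked None) by XOR-ing the rest."""
    missing = slices.index(None)
    size = len(next(s for s in slices if s is not None))
    rebuilt = bytearray(size)
    for s in slices:
        if s is not None:
            for i, b in enumerate(s):
                rebuilt[i] ^= b
    out = list(slices)
    out[missing] = bytes(rebuilt)
    return out
```

Losing any one drive (one slice) is survivable: the surviving slices reproduce the missing one, and the data slices joined together, minus the padding, reproduce the original object.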
Data Loss Prevention (DLP)
Data Loss Prevention (also known as Data Leakage Prevention or Data Loss Protection)
describes the controls put in place by an organization to ensure that certain types of data
(structured and unstructured) remain under organizational controls, in line with policies,
standards, and procedures.
Controls to protect data form the foundation of organizational security and enable the organization to meet regulatory requirements and relevant legislation (e.g., EU data-protection directives, the U.S. Privacy Act, HIPAA, and PCI-DSS). DLP technologies and
processes play important roles when building those controls. The appropriate implemen-
tation and use of DLP will reduce both security and regulatory risks for the organization.
DLP strategy presents a wide and varied set of components and controls that need
to be contextually applied by the organization, often requiring changes to the enterprise
security architecture. It is for this reason that many organizations do not adopt a “full-
blown” DLP strategy across the enterprise.
For those hybrid cloud users or those utilizing cloud-based services partially within their organizations, it is beneficial to ensure that DLP is understood and appropriately structured across both cloud and non-cloud environments. Failure to do so can result in segmented and non-standardized levels of security, leading to increased risks.
DLP Components
DLP consists of three components:
Discovery and classication: The rst stage of a DLP implementation and also
an ongoing and recurring process, the majority of cloud-based DLP technologies
are predominantly focused on this component. The discovery process usually
maps data in cloud storage services and databases and enables classication based
on data categories (i.e., regulated data, credit card data, public data, etc.).
Monitoring: Data usage monitoring forms the key function of DLP. Effective
DLP strategies monitor the usage of data across locations and platforms while
enabling administrators to dene one or more usage policies. The ability to mon-
itor data can be executed on gateways, servers, and storage as well as workstations
and endpoint devices. Recently, the increased adoption of external services to
assist with DLP “as a service” has increased, along with many cloud-based DLP
solutions. The monitoring application should be able to cover most sharing
options available for users (email application, portable media, and Internet brows-
ing) and alert them to policy violations.
Enforcement: Many DLP tools provide the capability to interrogate data and
compare its location, use, or transmission destination against a set of policies to
prevent data loss. If a policy violation is detected, specified enforcement actions can be performed automatically. Enforcement options include the ability to alert and log, to block or re-route data transfers for additional validation, or to encrypt the data before it leaves the organizational boundary.
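The detection-and-enforcement flow can be sketched in a few lines. The following hypothetical example scans an outbound message for candidate payment card numbers (a Luhn check cuts false positives) and returns a policy action; the regex, policy, and action names are illustrative, not taken from any particular DLP product:

```python
import re

# Candidate primary account numbers: 13-16 digits, optionally separated.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum: filters out random digit runs that are not card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def inspect(message: str) -> str:
    """Return the enforcement action for an outbound message."""
    for match in CARD_RE.finditer(message):
        if luhn_ok(match.group()):
            return "block"  # policy violation: card number detected
    return "allow"
```

A real engine would add many more content detectors, log the violation, and support re-routing or encrypting instead of blocking outright.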
DLP Architecture
DLP tool implementations typically conform to the following topologies:
Data in Motion (DIM): Sometimes referred to as network-based or gateway
DLP. In this topology, the monitoring engine is deployed near the organizational
gateway to monitor outgoing protocols such as HTTP/HTTPS/SMTP and FTP.
The topology can be a mixture of proxy-based, bridge, network tapping, or SMTP relays. In order to scan encrypted HTTPS traffic, appropriate SSL interception/brokering mechanisms are required to be integrated into the system.
Data at rest (DAR): Sometimes referred to as storage-based DLP. In this topology,
the DLP engine is installed where the data is at rest, usually one or more storage
subsystems, as well as file and application servers. This topology is very effective
for data discovery and for tracking usage but may require integration with net-
work- or endpoint-based DLP for policy enforcement.
Data in use (DIU): Sometimes referred to as client- or endpoint-based DLP. The DLP application is installed on users’ workstations and endpoint devices. This
topology offers insights into how the data is used by users, with the ability to add
protection that the network DLP may not be able to provide. The challenge with
client-based DLP is the complexity, time, and resources to implement across all
endpoint devices, often across multiple locations and significant numbers of users.
Cloud-Based DLP Considerations
Some important considerations for cloud-based DLP include:
Data in the cloud tends to move and replicate: Whether it is between locations,
datacenters, backups, or back and forth into the organizations, the replication and
movement can present a challenge to any DLP implementation.
Administrative access for enterprise data in the cloud could be tricky: Make
sure you understand how to perform discovery and classification within cloud-
based storage.
DLP technology can affect overall performance: Network or gateway DLP, which scans all traffic for pre-defined content, might have an effect on network performance. Client-based DLP scans all workstation access to data, which can have a performance impact on the workstation’s operation. The overall impact must be considered during testing.
Leading Practices
Start with the data discovery and classification process. These processes are more mature within cloud deployments and present value to the data security process.
Cloud DLP policy should address the following:
What kind of data is permitted to be stored in the cloud?
Where can the data be stored (which jurisdictions)?
How should it be stored? Encryption and storage access considerations.
What kind of data access is permitted? Which devices and what networks? Which
applications? Which tunnel?
Under what conditions is data allowed to leave the cloud?
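One way to picture such a cloud DLP policy is as a machine-readable rule set that an enforcement point evaluates per request. The field names and values below are invented for illustration, not a standard schema:

```python
CLOUD_DLP_POLICY = {
    "permitted_data_classes": {"public", "internal"},  # what may be stored
    "permitted_jurisdictions": {"EU", "US"},           # where it may be stored
    "require_encryption_at_rest": True,                # how it must be stored
    "permitted_networks": {"corporate-vpn"},           # which networks/tunnels
    "egress_allowed": False,                           # may data leave the cloud?
}

def evaluate(request: dict) -> str:
    """Return 'allow' or 'deny' for a storage or access request."""
    p = CLOUD_DLP_POLICY
    ok = (request["data_class"] in p["permitted_data_classes"]
          and request["jurisdiction"] in p["permitted_jurisdictions"]
          and (request["encrypted"] or not p["require_encryption_at_rest"])
          and request["network"] in p["permitted_networks"]
          and (p["egress_allowed"] or not request.get("egress", False)))
    return "allow" if ok else "deny"
```

Expressing the policy as data rather than prose makes it testable and enforceable at gateways, endpoints, or the storage layer alike.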
Encryption methods should be carefully examined based on the format of the data. Format-preserving encryption such as Information Rights Management (IRM) is becoming more popular in document storage applications; however, other data types may require vendor-agnostic solutions.
When implementing restrictions or controls to block or quarantine data items, it is essential to create procedures that prevent damage to business processes from false-positive events that would otherwise hinder legitimate transactions or processes.
DLP can be an effective tool when planning or assessing a potential migration to
cloud applications. DLP discovery will analyze the data going to the cloud for content,
and the DLP detection engine can discover policy violations during data migration.
Encryption
Encryption is an important technology to consider and use when implementing systems that will allow for secure data storage and usage from the cloud. While having encryption enabled on all data across the enterprise architecture would reduce the risks associated with unauthorized data access and exposure, there are performance constraints and concerns to be addressed.
It is your responsibility as a cloud security professional to implement encryption within the enterprise in such a way that it provides the most security benefit, safeguarding the most mission-critical data while minimizing system performance issues that result from the encryption.
Encryption Implementation
Encryption can be implemented within different phases of the data lifecycle (Figure 2.9):
Data in motion (DIM): Technologies for encrypting data in motion are mature and well-defined and include IPSec or VPN, TLS/SSL, and other “wire level” protocols.
Data at rest (DAR): When the data is archived or stored, different encryption
techniques should be used. The encryption mechanism itself may well vary in the
manner in which it is deployed, dependent on the timeframe or indeed the period
for which the data will be stored. Examples of this include extended retention vs.
short-term storage, data located in a database versus a file system, and so on. This
module will discuss mostly data at rest encryption scenarios.
Data in use (DIU): Data that is being shared, processed, or viewed. This stage of
the data lifecycle is less mature than other data encryption techniques and typi-
cally focuses on IRM/DRM solutions.
FigUre2.9 Encryption implementation
Sample Use Cases for Encryption
The following are some use cases for encryption:
When data moves in and out of the cloud (for processing, archiving, or sharing), we use data-in-motion encryption techniques such as SSL/TLS or VPN in order to avoid information exposure or data leakage while in motion.
Protecting data at rest such as le storage, database information, application com-
ponents, archiving, and backup applications.
Files or objects that must be protected when stored, used, or shared in the cloud.
When complying with regulations such as HIPAA and PCI-DSS, which require protection of data traversing “untrusted networks,” along with the protection of certain data types.
Protection from third-party access via subpoena or lawful interception.
Creating enhanced or increased mechanisms for logical separation between dif-
ferent customers’ data in the cloud.
Logical destruction of data when physical destruction is not feasible or technically possible.
Cloud Encryption Challenges
There are myriad factors influencing encryption considerations and associated implementations in the enterprise. Using encryption should always be directly related to
business considerations, regulatory requirements, and any additional constraints that the
organization may have to address. Different techniques will be used based on the location
of data—whether at rest, in transit, or in use—while in the cloud.
Different options might apply when dealing with specific threats, such as protecting Personally Identifiable Information (PII) or legally regulated information, or when defending against unauthorized access and viewing from systems and platforms.
Encryption Challenges
The following challenges are associated with encryption:
1. The integrity of encryption is heavily dependent on control and management
of the relevant encryption keys, including how they are secured. If the cloud
provider holds the keys, then not all data threats are mitigated, as unauthorized actors may gain access to the data through acquisition of the keys via a
search warrant, legal ruling, or theft and misappropriation. Equally, if the cus-
tomer is holding the encryption keys, this presents different challenges to ensure
they are protected from unauthorized usage as well as compromise.
2. Encryption can be challenging to implement effectively when a cloud provider is
required to process the encrypted data. This is true even for simple tasks such as
indexing, along with the gathering of metadata.
3. Data in the cloud is highly portable. It replicates, is copied, and is backed up
extensively, making encryption and key management challenging.
4. Multi-tenant cloud environments and the shared use of physical hardware present
challenges for the safeguarding of keys in volatile memory such as RAM caches.
5. Secure hardware for encrypting keys may not exist in cloud environments, with
software-based key storage often being more vulnerable.
6. Storage-level encryption is typically less complex but can be most easily exploited or compromised (given sufficient time and resources). The higher you go toward the application level, the more challenging encryption becomes to deploy and implement. However, encryption implemented at the application level will typically be more effective in protecting the confidentiality of the relevant assets or resources.
7. Encryption can negatively impact performance, especially in high-performance data processing mechanisms such as data warehouses and data cubes.
8. The nature of cloud environments typically requires us to manage more keys than
traditional environments (access keys, API keys, encryption keys, and shared keys,
among others).
9. Some cloud encryption implementations require all users and service trafc to
go through an encryption engine. This can result in availability and performance
issues both to end users and to providers.
10. Throughout the data lifecycle, data can change locations, format, encryption, and
encryption keys. Using the data security lifecycle can help document and map all
those different aspects.
11. Encryption affects data availability. Encryption complicates data availability con-
trols such as backups, DR planning, and co-locations because expanding encryp-
tion into these areas increases the likelihood that keys may become compromised.
In addition, if encryption is applied incorrectly within any of these areas, the data
may become inaccessible when needed.
12. Encryption does not solve data integrity threats. Data can be encrypted and yet
be subject to tampering or le replacement attacks. In this case, supplementary
cryptographic controls such as digital signatures need to be applied, along with
non-repudiation for transaction-based activities.
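Point 12 above is worth a small illustration: an authenticity check must be layered on top of encryption, because a ciphertext can be swapped or tampered with without the key. The sketch below uses the stdlib `hmac` module to tag and verify an (already encrypted) blob; the key handling is deliberately simplified:

```python
import hashlib
import hmac
import secrets

# The integrity key must be managed and protected like any other key.
INTEGRITY_KEY = secrets.token_bytes(32)

def tag(ciphertext: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the ciphertext."""
    return hmac.new(INTEGRITY_KEY, ciphertext, hashlib.sha256).digest()

def verify(ciphertext: bytes, mac: bytes) -> bool:
    """Constant-time check that the ciphertext was not tampered with."""
    return hmac.compare_digest(mac, tag(ciphertext))
```

In practice an authenticated encryption mode (e.g., AES-GCM) combines both steps; the point here is only that confidentiality and integrity are separate properties.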
Encryption Architecture
Encryption architecture is very much dependent on the goals of the encryption solutions,
along with the cloud delivery mechanism. Protecting data at rest from local compromise or unauthorized access differs significantly from protecting data in motion into the cloud.
Adding controls to protect the integrity and availability of data can further complicate the architecture.
Typically, the following components are associated with most encryption deployments:
The data: The data object or objects that need to be encrypted.
Encryption engine: Performs the encryption operation.
Encryption keys: All encryption is based on keys. Safeguarding the keys is a crucial activity, necessary for ensuring the ongoing integrity of the encryption implementation and its algorithms.
Data Encryption in IaaS
Keeping data private and secure is a key concern for those looking to move to the cloud.
Data encryption can provide confidentiality protection for data stored in the cloud. In
IaaS, encryption encompasses both volume and object storage solutions.
Basic Storage-Level Encryption
Where storage-level encryption is utilized, the encryption engine is located on the storage
management level, with the keys usually held/stored/retained by the cloud provider. The
engine will encrypt data written to the storage and decrypt it when exiting the storage
(i.e., for use).
This type of encryption is relevant to both object and volume storage, but it will only
protect from hardware theft or loss. It will not protect from cloud provider administrator
access or any unauthorized access coming from the layers above the storage.
Volume Storage Encryption
Volume storage encryption requires that the encrypted data reside on volume storage.
This is typically done through an encrypted container, which is mapped as a folder or volume.
Volume storage encryption allows access to data only through the volume operating system and therefore provides protection from:
Physical loss or theft
External administrator(s) accessing the storage
Snapshots and storage-level backups being taken and removed from the system
Volume storage encryption will not provide protection against any access made through the instance, e.g., an attack that manipulates or operates within the application running on the instance.
There are two methods that can be used to implement volume storage encryption:
Instance-based encryption: When instance-based encryption is used, the encryp-
tion engine is located on the instance itself. Keys can be guarded locally but
should be managed external to the instance.
Proxy-based encryption: When proxy-based encryption is used, the encryption
engine is running on a proxy instance or appliance. The proxy instance is a secure
machine that will handle all cryptographic actions, including key management
and storage. The proxy will map the data on the volume storage while providing
access to the instances. Keys can be stored on the proxy or via the external key
storage (recommended), with the proxy providing the key exchanges and required
safeguarding of keys in memory.
Object Storage Encryption
The majority of object storage services will offer server-side storage-level encryption as
described previously. This kind of encryption has limited effectiveness; the recommendation is for external encryption mechanisms to encrypt the data prior to its arrival within the cloud environment.
Potential external mechanisms include
File-level encryption: Such as Information Rights Management (IRM) or Digital
Rights Management (DRM) solutions, both of which can be very effective when
used in conjunction with file hosting and sharing services that typically rely on
object storage. The encryption engine is commonly implemented at the client
side and will preserve the format of the original file.
Application-level encryption: The encryption engine resides in the application
that is utilizing the object storage. It can be integrated into the application component or implemented by a proxy that is responsible for encrypting the data before it goes to the cloud. The proxy can be implemented on the customer gateway or as a service
residing at the external provider.
Database Encryption
For database encryption, the following options should be understood:
File-level encryption: Database servers typically reside on volume storage. For
this deployment, we are encrypting the volume or folder of the database, with the
encryption engine and keys residing on the instances attached to the volume.
External le system encryption will protect from media theft, lost backups, and
external attack but will not protect against attacks with access to the application
layer, the instances OS, or the database itself.
Transparent encryption: Many database management systems contain the ability
to encrypt the entire database or specic portions, such as tables. The encryption
engine resides within the DB, and it is transparent to the application. Keys usually
reside within the instance, although processing and managing them may also be offloaded to an external Key Management System (KMS). This encryption can provide effective protection from media theft, backup system intrusions, and certain database and application-level attacks.
Application-level encryption: In application-level encryption, the encryption
engine resides at the application that is utilizing the database.
Application encryption can act as a robust mechanism to protect against a wide range
of threats, such as compromised administrative accounts along with other database and
application-level attacks. Since the data is encrypted before reaching the database, it
is challenging to perform indexing, searches, and metadata collection. Encrypting at
the application layer can be challenging, based on the expertise requirements for cryp-
tographic development and integration.
Key Management
Key management is one of the most challenging components of any encryption implementation. Even though new standards such as the Key Management Interoperability Protocol (KMIP) are emerging, safeguarding keys and managing them appropriately are still the most complicated tasks you will need to engage in when planning cloud data security.
Common challenges with key management are
Access to the keys: Leading practices coupled with regulatory requirements may
set specic criteria for key access, along with restricting or not permitting access to
keys by Cloud Service Provider employees or personnel.
Key storage: Secure storage for the keys is essential to safeguarding the data. In
traditional “in house” environments, keys could be stored in secure dedicated hardware. This may not always be possible in cloud environments.
Backup and replication: The nature of the cloud results in data backups and replication across a number of different formats. This can impact the ability to maintain and manage long- and short-term keys effectively.
Key Management Considerations
Considerations when planning key management include
Random number generation should be conducted as a trusted process.
Throughout the lifecycle, cryptographic keys should never be transmitted in the
clear and always remain in a “trusted” environment.
When considering key escrow or key management “as a service,” carefully plan to
take into account all relevant laws, regulations, and jurisdictional requirements.
Lack of access to the encryption keys will result in lack of access to the data. This should be considered when discussing confidentiality threats versus availability threats.
Where possible, key management functions should be conducted separately from
the cloud provider in order to enforce separation of duties and force collusion to
occur if unauthorized data access is attempted.
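The first consideration above, trusted random number generation, can be illustrated with the stdlib `secrets` module, which draws from the operating system’s CSPRNG. Referencing keys by fingerprint keeps the key material itself out of logs; the function names are illustrative:

```python
import hashlib
import secrets

def new_data_encryption_key(bits: int = 256) -> bytes:
    """Generate key material from the OS CSPRNG.

    Never use the deterministic `random` module for key material.
    """
    return secrets.token_bytes(bits // 8)

def key_fingerprint(key: bytes) -> str:
    """A loggable reference to a key that does not reveal the key itself."""
    return hashlib.sha256(key).hexdigest()[:16]
```

Throughout the key lifecycle, only the fingerprint should appear in logs and tickets; the raw key stays inside the trusted environment or HSM.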
Key Storage in the Cloud
Key storage in the cloud is typically implemented using one or more of the following approaches:
Internally managed: In this method, the keys are stored on the virtual machine
or application component that is also acting as the encryption engine. This type
of key management is typically used in storage-level encryption, internal database
encryption, or backup application encryption. This approach can be helpful for mitigating the risks associated with lost media.
Externally managed: In this method, keys are maintained separate from the
encryption engine and data. They can be on the same cloud platform, internally
within the organization, or on a different cloud. The actual storage can be a separate instance (hardened especially for this specific task) or a hardware security module (HSM). When implementing external key storage, consider how the key
management system is integrated with the encryption engine and how the entire
lifecycle of key creation through to retirement is managed.
Managed by a third party: This is when key escrow services are provided by a
trusted third party. Key management providers use specically developed secure
infrastructure and integration services for key management. You must evaluate any third-party key storage provider that may be contracted by the organization to ensure that the risks of allowing a third party to hold encryption keys are well understood and documented.
Key Management in Software Environments
Typically, Cloud Service Providers protect keys using software-based solutions in order to avoid the additional cost and overhead of hardware security modules.
Software-based key management solutions do not meet the physical security requirements specified in the National Institute of Standards and Technology (NIST) Federal Information Processing Standards Publication FIPS 140-2 or 140-3 specifications.3 Software is unlikely to provide evidence of tampering. The lack of FIPS certification for encryption may be an issue for U.S. Federal Government agencies and other organizations.
Masking, Obfuscation, Anonymization, and Tokenization
The need to provide confidentiality protection for data in cloud environments is a serious concern for organizations. The ability to use encryption is not always a realistic option for a variety of reasons, including performance, cost, and technical abilities. As a result, additional mechanisms need to be employed to ensure that data confidentiality can be achieved.
Masking, obfuscation, anonymization, and tokenization can be used in this regard.
Data Masking/Data Obfuscation
Data masking or data obfuscation is the process of hiding, replacing, or omitting sensitive information from a specific dataset.
Data masking is typically used to protect specic datasets such as PII or commercially
sensitive data or in order to comply with certain regulations such as HIPAA or PCI-DSS.
Data masking or obfuscation is also widely used for test platforms where suitable test data is not available. Both techniques are typically applied when migrating test or development environments to the cloud or when protecting production environments from threats such as data exposure by insiders or outsiders.
Common approaches to data masking include:
Random Substitution: The value is replaced (or appended) with a random value.
Algorithmic Substitution: The value is replaced (or appended) with an algorithm-
generated value (this typically allows for two-way substitution).
Shufe: Shufes different values from the dataset. Usually from the same
Masking: Uses specic characters to hide certain parts of the data. Usually applies
to credit cards data formats: XXXX XXXX XX65 5432.
Deletion: Simply uses a null value or deletes the data.
The primary methods of masking data are
Static: In static masking, a new copy of the data is created with the masked values. Static masking is typically efficient when creating clean non-production environments.
Dynamic: Dynamic masking, sometimes referred to as “on-the-fly” masking, adds a layer of masking between the application and the database. The masking layer is responsible for masking the information in the database “on the fly” when the presentation layer accesses it. This type of masking is efficient when protecting production environments. It can hide the full credit card number from customer service representatives, but the data remains available for processing.
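The character-masking approach shown above (XXXX XXXX XX65 5432) can be sketched in a few lines; a static masking job would apply a function like this while copying production data into a test environment. The function name and the choice of six visible digits are illustrative:

```python
def mask_pan(pan: str, visible: int = 6) -> str:
    """Mask a card number: keep only the last `visible` digits,
    replace the rest with X, and preserve the original grouping."""
    total_digits = sum(c.isdigit() for c in pan)
    out, remaining = [], total_digits
    for c in pan:
        if c.isdigit():
            remaining -= 1
            out.append(c if remaining < visible else "X")
        else:
            out.append(c)  # keep spaces/dashes so the format survives
    return "".join(out)
```

Because the output keeps the shape of the input, downstream screens and reports that expect a card-number format continue to work.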
Data Anonymization
Direct identiers and indirect identiers form two primary components for identication
of individuals, users, or indeed personal information.
Direct identiers are elds that uniquely identify the subject (usually name, address,
etc.) and are usually referred to as Personal Identiable Information (PII). Masking solu-
tions are typically used to protect direct identiers.
Indirect identiers typically consist of demographic or socioeconomic information,
dates, or events. While each standalone indirect identier cannot identify the individual,
the risk is that combining a number of indirect identiers together with external data can
result in exposing the subject of the information. For example, imagine a scenario where
users were able to combine search engine data, coupled with online streaming recom-
mendations to tie back posts and recommendations to individual users on a website.
Anonymization is the process of removing the indirect identifiers in order to prevent data analysis tools or other intelligent mechanisms from collating or pulling data from multiple sources to identify individual or sensitive information. The process of anonymization is similar to masking: it includes identifying the relevant information to anonymize and choosing a relevant method for obscuring the data.
The challenge with indirect identifiers is that this type of data tends to be embedded in free-text fields, which are less structured than direct identifiers, complicating the anonymization process.
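For structured data, anonymization can be sketched as follows; the field names and generalization rules (decade age buckets, three-digit ZIP regions, dropped event dates) are illustrative assumptions, and each generalization trades detail for resistance to re-identification:

```python
# A minimal anonymization sketch: indirect identifiers (age, ZIP code, event
# dates) are generalized or removed so records are harder to re-link via
# external data sources. All field names here are hypothetical.

def anonymize(record):
    out = dict(record)
    out.pop("signup_date", None)                # drop event dates outright
    out["zip"] = out["zip"][:3] + "XX"          # coarsen ZIP code to a region
    out["age"] = f"{(out['age'] // 10) * 10}s"  # bucket age into decades
    return out

print(anonymize({"age": 34, "zip": "46256", "signup_date": "2015-06-01"}))
# {'age': '30s', 'zip': '462XX'}
```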
Tokenization
Tokenization is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token. The token is usually a collection of random values with the shape and form of the original data placeholder and is mapped back to the original data by the tokenization application or solution.
Tokenization is not encryption, and it presents different challenges and different benefits. Encryption uses a key to obfuscate data, while tokenization removes the data entirely from the database, replacing it with a mechanism to identify and access the original data.
Tokenization is used to safeguard sensitive data in a secure, protected, or regulated environment.
Tokenization can be implemented internally, where there is a need to secure sensitive data centrally, or externally, using a tokenization service.
Tokenization can assist with:
Complying with regulations or laws
Reducing the cost of compliance
Mitigating risks of storing sensitive data and reducing attack vectors on that data
The basic tokenization architecture involves six steps (Figure 2.10).
FIGURE 2.10 Basic tokenization architecture
Keep the following tokenization and cloud considerations in mind:
When using tokenization as a service, it is imperative to ensure the provider's and solution's ability to protect your data. Note that you cannot outsource accountability.
When using tokenization as a service, special attention should be paid to the process of authenticating the application when storing or retrieving the sensitive data.
Where external tokenization is used, appropriate encryption of communications
should be applied to data in motion.
As always, evaluate your compliance requirements before considering a cloud-based tokenization solution. You need to weigh the risks of having to interact with different jurisdictions and different compliance requirements.
Application of Security Strategy Technologies
When applying security strategies, it is important to consider the whole picture. Technologies may have dependencies or cost implications, and the larger organizational goals should be considered (e.g., time of storage vs. encryption needs).
Table2.1 shows the steps that you should consider when planning for data gover-
nance in the cloud.
taBLe2.1 Data Security Strategies
Understand data type Regulated data, PII, business or commercial data, collabora-
tive data
Understand data structure and
Structured, unstructured data and file types
Understand the cloud service
IaaS, PaaS, SaaS
Understand the cloud storage
Object storage, volume storage, database storage
Understand cloud provider data
residency offering
On which geographic location is the data stored?
Where is it moved when on backup media and in the event of
disaster recovery/failover or business continuity?
Who has access to it?
Plan data discovery and
Watermark, tag, or index all files and location
Define data ownership Define roles, entitlement, and access controls according to
data type and user permissions
Plan protection of data controls Use of encryption or encryption alternatives (tokenization)
Definition of data in motion encryption
Protection of data controls also include backup and restore,
DR planning, secure disposal, and so on
Plan for ongoing monitoring Periodic data extraction for backup
Periodic backup and restore testing
Ongoing event monitoring—audit data access events, detect
malicious attempts, scan application level vulnerabilities
Periodic audits
It often seems that the cloud and the technologies that make it possible are evolving in
many directions all at once. It can be hard to keep up with all of the new and innovative
technology solutions that are being implemented across the cloud landscape. Some
examples of these exciting technologies, bit splitting and homomorphic encryption, are
discussed in the following sections.